* Re: Re: Swap Compression
@ 2003-04-25 22:32 rmoser
2003-04-28 21:35 ` Timothy Miller
0 siblings, 1 reply; 16+ messages in thread
From: rmoser @ 2003-04-25 22:32 UTC (permalink / raw)
To: linux-kernel
Yeah you did but I'm going into a bit more detail, and with a very tight algorithm. Heck the algo was originally designed based on another compression algorithm, but for a 6502 packer. I aimed at speed, simplicity, and minimal RAM usage (hint: it used 4k for the code AND the compressed data on a 6502, 336 bytes for code, and if I turn it into just a straight packer I can go under 200 bytes on the 6502).
Honestly, I just never looked. I look in my kernel. But still, the stuff I defined about swapon options, swap-on-ram, and how the compression works (yes, compressed without headers) is all the detail you need about it to go do it AFAIK. Preplanning should be done there--done meaning workable, not "the absolute best."
--Bluefox Icy
---- ORIGINAL MESSAGE ----
List: linux-kernel
Subject: Re: Swap Compression
From: John Bradford <john () grabjohn ! com>
Date: 2003-04-25 21:17:11
> Sorry if this is HTML mailed. I don't know how to control those settings
HTML mail is automatically filtered from LKML.
> COMPRESSED SWAP
We discussed this on the list quite recently, have a look at:
http://marc.theaimsgroup.com/?l=linux-kernel&m=105018674018129&w=2
and:
http://linuxcompressed.sourceforge.net/
John.
* Re: Swap Compression
2003-04-25 22:32 Re: Swap Compression rmoser
@ 2003-04-28 21:35 ` Timothy Miller
2003-04-29 0:43 ` Con Kolivas
0 siblings, 1 reply; 16+ messages in thread
From: Timothy Miller @ 2003-04-28 21:35 UTC (permalink / raw)
To: rmoser; +Cc: linux-kernel
rmoser wrote:
>Yeah you did but I'm going into a bit more detail, and with a very tight algorithm. Heck the algo was originally designed based on another compression algorithm, but for a 6502 packer. I aimed at speed, simplicity, and minimal RAM usage (hint: it used 4k for the code AND the compressed data on a 6502, 336 bytes for code, and if I turn it into just a straight packer I can go under 200 bytes on the 6502).
>
>Honestly, I just never looked. I look in my kernel. But still, the stuff I defined about swapon options, swap-on-ram, and how the compression works (yes, compressed without headers) is all the detail you need about it to go do it AFAIK. Preplanning should be done there--done meaning workable, not "the absolute best."
>
>
I think we might be able to deal with a somewhat more heavyweight
compression. Considering how much faster the compression is than the
disk access, the better the compression, the better the performance.
Usually, if you have too much swapping, the CPU usage will drop because
things aren't getting done. That means we have plenty of headroom to
spend time compressing rather than waiting. The overall speed would go
up. Theoretically, we could run into a situation where the compression
time dominates. In that case, it would be beneficial to have a tuning
option that uses a less CPU-intensive compression algorithm.
>
>
* Re: Swap Compression
2003-04-28 21:35 ` Timothy Miller
@ 2003-04-29 0:43 ` Con Kolivas
0 siblings, 0 replies; 16+ messages in thread
From: Con Kolivas @ 2003-04-29 0:43 UTC (permalink / raw)
To: Timothy Miller, rmoser; +Cc: linux-kernel
On Tue, 29 Apr 2003 07:35, Timothy Miller wrote:
> rmoser wrote:
> >Yeah you did but I'm going into a bit more detail, and with a very tight
> > algorithm. Heck the algo was originally designed based on another
> > compression algorithm, but for a 6502 packer. I aimed at speed,
> > simplicity, and minimal RAM usage (hint: it used 4k for the code AND the
> > compressed data on a 6502, 336 bytes for code, and if I turn it into just
> > a straight packer I can go under 200 bytes on the 6502).
> >
> >Honestly, I just never looked. I look in my kernel. But still, the stuff
> > I defined about swapon options, swap-on-ram, and how the compression
> > works (yes, compressed without headers) is all the detail you need about
> > it to go do it AFAIK. Preplanning should be done there--done meaning
> > workable, not "the absolute best."
>
> I think we might be able to deal with a somewhat more heavyweight
> compression. Considering how much faster the compression is than the
> disk access, the better the compression, the better the performance.
>
> Usually, if you have too much swapping, the CPU usage will drop because
> things aren't getting done. That means we have plenty of headroom to
> spend time compressing rather than waiting. The overall speed would go
> up. Theoretically, we could run into a situation where the compression
> time dominates. In that case, it would be beneficial to have a tuning
> option that uses a less CPU-intensive compression algorithm.
The work that Rodrigo De Castro did on compressed caching
(linuxcompressed.sf.net) included a minilzo algorithm, which I used by
default in the -ck patch addon as it performed best for all the reasons
you mention. Why not look at that lzo code for adoption?
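For anyone who wants to poke at that suggestion in userspace first, here is a
minimal sketch of driving the miniLZO code on one 4k page. It assumes the
stock miniLZO interface (lzo_init(), lzo1x_1_compress(), LZO1X_1_MEM_COMPRESS);
compress_page() is a made-up wrapper, not anything from the -ck patch.

/* Minimal userspace sketch, not kernel code: compress one 4k "page"
 * with miniLZO and report the result.  Only the lzo_* calls are the
 * real miniLZO interface; everything else is illustrative. */
#include <stdio.h>
#include <string.h>
#include "minilzo.h"

/* work memory for lzo1x_1_compress, aligned the way the miniLZO examples do */
static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                          / sizeof(lzo_align_t)];

static int compress_page(unsigned char *page, lzo_uint page_size,
                         unsigned char *out, lzo_uint *out_len)
{
        if (lzo1x_1_compress(page, page_size, out, out_len, wrkmem) != LZO_E_OK)
                return -1;
        /* only worth storing if it actually shrank below a page */
        return (*out_len < page_size) ? 0 : -1;
}

int main(void)
{
        unsigned char page[4096];
        unsigned char out[4096 + 4096 / 16 + 64 + 3];   /* LZO worst case */
        lzo_uint out_len = sizeof(out);

        memset(page, 'x', sizeof(page));        /* trivially compressible */
        if (lzo_init() != LZO_E_OK)
                return 1;
        if (compress_page(page, sizeof(page), out, &out_len) == 0)
                printf("4096 -> %lu bytes\n", (unsigned long)out_len);
        return 0;
}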
Con
* Re: Re: Swap Compression
2003-04-27 19:04 ` Jörn Engel
2003-04-27 19:57 ` Livio Baldini Soares
[not found] ` <200304271609460030.01FC8C2B@smtp.comcast.net>
@ 2003-04-27 21:52 ` rmoser
2 siblings, 0 replies; 16+ messages in thread
From: rmoser @ 2003-04-27 21:52 UTC (permalink / raw)
To: Jörn Engel; +Cc: linux-kernel
Well here's some new code. I'll get to work on a userspace app
to compress files. This code ONLY works on fcomp-standard
and does only one BruteForce (bmbinary is disabled) search for
redundancy. This means three things:
1 - There's no support for messing with the pointer size and mdist/
analysis buffer size (max pointer distance)
2 - The compression ratios will not be the best. The first match,
no matter how short, will be taken. If it's less than 4 bytes, it will
return "no match" to the fcomp_push function.
3 - It will be slow. BruteForce() works. Period. The code is too
simple for even a first-day introductory C student to screw up, and
there are NO bugs unless I truly AM a moron. bmbinary() should
work, and the function it was coded from works in test, but neither
have been written out and proven to work by logic diagram. It's
fast (infinitely faster than BruteForce()), but I'll try it when I feel that
the rest of the code is working right.
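Since the actual fcomp source is not attached here, the following is only a
rough userspace illustration of the first-match search that points 2 and 3
describe; the function name, the 256-byte look-back window and the return
convention are guesses, not the real BruteForce().

/* Hypothetical sketch, not the real fcomp code: look back up to 256
 * bytes for the first match against the data at 'pos'.  Per the
 * description above, the first match found is the one taken, and a
 * match shorter than 4 bytes counts as "no match" (return 0). */
#include <stddef.h>
#include <stdio.h>

static size_t brute_force(const unsigned char *buf, size_t pos, size_t len,
                          size_t *dist)
{
        size_t start = (pos > 256) ? pos - 256 : 0;
        size_t i, n;

        for (i = start; i < pos; i++) {
                for (n = 0; pos + n < len && buf[i + n] == buf[pos + n]; n++)
                        ;
                if (n == 0)
                        continue;       /* nothing matched at this offset */
                if (n < 4)
                        return 0;       /* first match too short: "no match" */
                *dist = pos - i;        /* backward distance of the match */
                return n;               /* match length */
        }
        return 0;
}

int main(void)
{
        const unsigned char buf[] = "abcdefabcdefgh";
        size_t dist = 0;
        size_t n = brute_force(buf, 6, sizeof(buf) - 1, &dist);

        printf("match of %zu bytes, %zu back\n", n, dist);  /* 6 bytes, 6 back */
        return 0;
}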
This should be complete for fcomp-standard. A little alteration will
allow fcomp-extended to work. *Wags tail* these past two days
have been fun ^_^;
--Bluefox Icy
* Re: Re: Swap Compression
2003-04-27 19:57 ` Livio Baldini Soares
@ 2003-04-27 20:24 ` rmoser
0 siblings, 0 replies; 16+ messages in thread
From: rmoser @ 2003-04-27 20:24 UTC (permalink / raw)
To: Livio Baldini Soares; +Cc: linux-kernel
First, I'd like to note that I hadn't seen anything on this topic until I
actually said something in #C++. I was pointed to the linuxcompressed
SourceForge site, but I already had my algorithm, and I don't intend to
defame anyone, nor have I attempted to steal anyone's ideas.
I'm not rivaling anyone either.
... But I am starting from scratch.
*********** REPLY SEPARATOR ***********
On 4/27/2003 at 4:57 PM Livio Baldini Soares wrote:
>Hello,
>
> Unfortunately, I've missed the beginning of this discussion, but it
>seems you're trying to do almost exactly what the "Compressed Cache"
>project set out to do:
>
Archives.. archives....
http://marc.theaimsgroup.com/?l=linux-kernel&m=105130405702475&w=2
>http://linuxcompressed.sourceforge.net/
>
Saw that. I'm going for plain swap-on-ram tying right into the normal
swap code. This means that all of these extensions (compression) will
go right to the swap-on-ram driver too. Of course, the kernel should be
smart enough to make swap-on-ram the first choice by default, meaning
that it goes to SoR before going to disk swaps.
It's a lot less thought out, I'll grant you that, but I'm not the type to start
with someone else's code (I can never understand it). Besides that, I'm
just working on the compression algo.
Both ways can go into the kernel, you know. But I've got time to do this
and I'm gonna funnel whatever I can into it.
> _Please_ take a look at it. Rodrigo de Castro (the author) spent a
>_lot_ of time working out the issues and corner details which a system
>like this entail. I've been also involved in the project, even if not
>actively coding, but giving suggestions and helping out when time
>permitted. This project has been the core of his Master's
>dissertation, which he has just finished writing recently, and will
>soon defend.
>
> It would be foolish (IMHO) to start from scratch. Take a look at the
>web site. There is a nice sketch of the design he has chosen here:
>
>http://linuxcompressed.sourceforge.net/design/
>
Cool I know, but:
<QUOTE>
Bottom line in swap issue: (increasing) space isn't a issue,
only performance
Discussing these issues we've come to the conclusion that
we should not aim for making swap bigger (keeping compressed
pages in it), that doesn't seem to be an issue for anyone...
Specially because disks nowadays are so low-cost compared
to RAM and CPU. We should design our cache to be tuned for
speed.
</QUOTE>
I'm working on this for handheld devices mainly, but my aim IS size.
Speed is important, though. I hope my algo is fast enough to give
no footprint in that sense :)
> Scott Kaplan, a researcher interested in compression of memory, has
>also helped a bit. This article is something definitely worth reading,
>and was one of Rodrigo's "starting points":
>
>http://www.cs.amherst.edu/~sfkaplan/papers/compressed-caching.ps.gz
>
> (There are other relevant sources available on the web page).
>
> Rodrigo has also written a paper about his compressed caching which
>has much more up-to-date information than the web page. His newest
>benchmarks of the newest compressed cache version show better
>improvements than the numbers on the web page too. I'll ask him to put
>it somewhere public, if he's willing.
>
>Jörn Engel writes:
>> On Sun, 27 April 2003 14:31:25 -0400, rmoser wrote:
>
>[...]
>
>> Another thing: Did you look at the links John Bradford gave yet? It
>> doesn't hurt to try something alone first, but once you have a good
>> idea about what the problem is and how you would solve it, look for
>> existing code.
>
> I think the compressed cache project is the one John mentioned.
>
>> Most times, someone else already had the same idea and the same
>> general solution. Good, less work. Sometimes you were stupid and the
>> existing solution is much better. Good to know. And on very rare
>> occasions, your solution is better, at least in some details.
>>
>> Well, in this case, the sourceforge project appears to be silent since
>> half a year or so, whatever that means.
>
> It means Rodrigo has been busy writing his dissertation, and, most
>recently, looking for a job :-) I've talked to him recently, and he
>intends to continue on with the project, as he might have some time to
>devote to it.
>
> On a side note, though, one thing that has still not been explored
>is compressed _swap_. Since the project's focus has been performance
>gains, and it was not clear from the beginning that compressing swap
>actually results in performance gains, it still has not been
>implemented. That said, this *is* on the project's to-study list.
>
I'm going for size gains. Performance is something I hope not to hurt,
but an increase there is alright.
>
> Hope this helps,
>
>--
> Livio B. Soares
* Re: Re: Swap Compression
[not found] ` <200304271609460030.01FC8C2B@smtp.comcast.net>
@ 2003-04-27 20:10 ` rmoser
0 siblings, 0 replies; 16+ messages in thread
From: rmoser @ 2003-04-27 20:10 UTC (permalink / raw)
To: linux-kernel
*********** REPLY SEPARATOR ***********
On 4/27/2003 at 9:04 PM Jörn Engel wrote:
>On Sun, 27 April 2003 14:31:25 -0400, rmoser wrote:
>>
>> Yeah I know. I really need to get a working user-space version of this
>> thing so I can bench it against source tarballs and extractions from
>> /dev/random a bunch of times.
>
>Your compression won't be too good on /dev/random. ;)
>But /dev/kmem might be useful test data.
>
Ahh okay.
>> >Why should someone decide on an algorithm before measuring?
>>
>> Erm. You can use any one of the other algos you want, there's a lot
>> out there. Go ahead and try zlib/gzip/bzip2/7zip/compress if you
>> want. I just figured, the simplest algo would hopefully be the fastest.
>
>Hopefully, yes. Also, the one that doesn't trash the L[12] caches too
>much will be "fast" in that it doesn't slow down the rest of the
>system. This aspect really favors current uncompressing code, as it is
>very easy on the CPU.
>
Mine doesn't do anything :) It's the lazy-boy of compression! I don't know
enough to trash anything
>> But really we need some finished code to make this work :-)
>
>Yes!
>
>> The real reason I'm working on this is because I favor speed completely
>> over size in this application. It's all modular; make the interface for
>the
>> kernel module flexible enough to push in gzip/bzip2 modules or whatever
>> else you want:
>>
>> <M> Swap partition support
>> ____<M> Compressed Swap
>> ________<M> fcomp
>> ____________<M> fcomp-extended support
>> ________<M> zlib
>> ________<M> gzip
>> ________<M> bzip2
>> ________<M> compress
>> ____<M> Swap on RAM
>
>Exactly. It might also be possible to choose the algorithm at bootup
>or during runtime.
>
(note: Using swaponram as an example instead of a regular swap to
show how to avoid making the user actually give a swap device name
for that particular feature; this works the same if you replace
" -o swaponram=24MB" with a device like /dev/hda4)
Okay, again, swapon -o foo[=bar] should pass whatever foo and bar are
to the kernel, and let the kernel evaluate it. This way you never update
swapon again ;-) Except for the usual (bugs, security fixes, etc.).
swapon -o compressed=fcomp-extended -o algoflags=fcomp-mdist=1024 \
-o swaponram=24MB
That should be completely evaluated by the kernel to mean (as long as
fcomp-extended support is enabled and loaded): swap on, compressed with
fcomp-extended, with a max backpointer distance of 1024 bytes (which
means 16-bit pointers; always use the minimum), and do a swaponram of 24MB.
The kernel looks at it and, as long as all the modules needed are loaded,
says "Okay", tells swapon the name (swaponram0?) of the swaponram,
and then that gets handled in userspace as a normal
swapon swaponram0
would (assuming that device exists; think shmfs), meaning
swapoff swaponram0
would turn that swaponram off.
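To make the pass-through idea concrete, here is a tiny hypothetical userspace
sketch; pass_option() is made up, and the real swapon(8) of the time has no
-o handling at all.

/* Hypothetical sketch of the "-o key[=value] goes through unparsed"
 * idea: a swapon front end splits the string only for display and would
 * otherwise hand it to the kernel verbatim, letting the kernel reject
 * anything it does not understand.  None of this exists in util-linux. */
#include <stdio.h>
#include <string.h>

static void pass_option(const char *opt)
{
        char key[64];
        char *eq;

        strncpy(key, opt, sizeof(key) - 1);
        key[sizeof(key) - 1] = '\0';
        eq = strchr(key, '=');
        if (eq)
                *eq = '\0';
        printf("option '%s', value '%s'\n", key, eq ? eq + 1 : "(none)");
}

int main(void)
{
        pass_option("compressed=fcomp-extended");
        pass_option("algoflags=fcomp-mdist=1024");
        pass_option("swaponram=24MB");
        return 0;
}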
>> As far as I know, zlib, gzip, and compress use Huffman trees. I am
>pretty
>> sure about zlib, not sure about the others. gzip I believe uses 16 bit
>> backpointers as well, which means you need a 64k processing window
>> for [de]compression, not to mention that it takes more work. bzip2 we
>> all know is CPU-intensive, or at least it was last time I checked.
>
>Yes, zlib eats up several 100k of memory. You really notice this when
>you add it to a bootloader that was (once) supposed to be small. :)
>
Bick.
>> Yeah true. But for guessing the decompressed size I meant like when
>> you don't want to load the WHOLE block into RAM at once. Ahh, so you
>> swap in pages? Well whatever unit you swap in, that's how you should
>> compress things. Look I'm confusing myself here, just ignore anything
>> that sounds retarded--I'm just a kid after all :-p
>
>Will do. :)
>It should be fine to load a page (maybe several pages) at once. There
>is read-ahead code all over the kernel and this is nothing else. Plus
>it simplifies things.
>
Heh, cool. Compressing groups of pages I think is a bad idea; you have
to waste time RECOMPRESSING when you pull a page off the disk (or
flag it free)
>> >a) Even with 4M, two pages of 4k each don't hurt that much. If they
>> >do, the whole compression trick doesn't pay off at all.
>> >b) Compression ratios usually suck with small buffers.
>> >
>> a)
>> True, two chunks of 4k don't hurt. But how big are swap pages? Assuming
>> the page can't be compressed at all, it's [page size / 256] * 3 + [page
>size]
>> for the maximum compressed data size. (4144 bytes for 4k of absolutely
>> non-redundant data within 256 bytes).
>
>4k + sizeof(header). If the compression doesn't manage to shrink the
>code, it should return an error. The calling code will then put an
>uncompressed flag in the header and we're done.
>The header may be as small as one byte.
>
>> b)
>> Yeah they do. But to find the compression performance (ratio) loss, you
>> do [max pointer distance]/[block size], meaning like for this one
>> 256/[page size]. If you do a 4k page size, you lose 6.25% of the
>compression
>> performance (so if we average 2:1, we'll average 1.875:1). What IS the
>page
>> size the kernel uses?
>
>4k on most architectures, 8k on alpha.
>
Let's make this a swapon option at some point, i.e.
swapon -o page_size=16K /dev/hda5
>> >I didn't look that far yet. What you could do, is read through
>> >/usr/src/linux/Documentation/CodingStyle. It is just so much nicer
>> >(and faster) to read and sticking to it usually produces better code.
>> >
>>
>> Eh, I should just crack open the kernel source and imitate it. But I
>have
>> that file on my hard disk, so mebbe I should look. Mebbe I should take a
>> whack at getting the compression algo to work too, instead of pushing it
>> on someone else.
>
>:)
>
>Imitating the kernel source may or may not be a good idea, btw. It is
>very diverse in style and quality. Some drivers are absolutely
>horrible, but they are just drivers, so no one without that hardware
>cares.
>
LOL! True.
>Another thing: Did you look at the links John Bradford gave yet? It
>doesn't hurt to try something alone first, but once you have a good
>idea about what the problem is and how you would solve it, look for
>existing code.
>
Saw it.
>Most times, someone else already had the same idea and the same
>general solution. Good, less work. Sometimes you were stupid and the
>existing solution is much better. Good to know. And on very rare
>occasions, your solution is better, at least in some details.
>
It happens.
>Well, in this case, the sourceforge project appears to be silent since
>half a year or so, whatever that means.
>
It's dead I'd guess, unless someone can prove me wrong.
>Jörn
>
>--
>Data dominates. If you've chosen the right data structures and organized
>things well, the algorithms will almost always be self-evident. Data
>structures, not algorithms, are central to programming.
>-- Rob Pike
--Bluefox Icy
* Re: Re: Swap Compression
2003-04-27 19:04 ` Jörn Engel
@ 2003-04-27 19:57 ` Livio Baldini Soares
2003-04-27 20:24 ` rmoser
[not found] ` <200304271609460030.01FC8C2B@smtp.comcast.net>
2003-04-27 21:52 ` rmoser
2 siblings, 1 reply; 16+ messages in thread
From: Livio Baldini Soares @ 2003-04-27 19:57 UTC (permalink / raw)
To: Jörn Engel; +Cc: rmoser, linux-kernel
Hello,
Unfortunately, I've missed the beginning of this discussion, but it
seems you're trying to do almost exactly what the "Compressed Cache"
project set out to do:
http://linuxcompressed.sourceforge.net/
_Please_ take a look at it. Rodrigo de Castro (the author) spent a
_lot_ of time working out the issues and corner details which a system
like this entails. I've also been involved in the project, even if not
actively coding, but giving suggestions and helping out when time
permitted. This project has been the core of his Master's
dissertation, which he has just finished writing recently, and will
soon defend.
It would be foolish (IMHO) to start from scratch. Take a look at the
web site. There is a nice sketch of the design he has chosen here:
http://linuxcompressed.sourceforge.net/design/
Scott Kaplan, a researcher interested in compression of memory, has
also helped a bit. This article is something definitely worth reading,
and was one of Rodrigo's "starting points":
http://www.cs.amherst.edu/~sfkaplan/papers/compressed-caching.ps.gz
(There are other relevant sources available on the web page).
Rodrigo has also written a paper about his compressed caching which
has much more up-to-date information than the web page. His newest
benchmarks of the newest compressed cache version show better
improvements than the numbers on the web page too. I'll ask him to put
it somewhere public, if he's willing.
Jörn Engel writes:
> On Sun, 27 April 2003 14:31:25 -0400, rmoser wrote:
[...]
> Another thing: Did you look at the links John Bradford gave yet? It
> doesn't hurt to try something alone first, but once you have a good
> idea about what the problem is and how you would solve it, look for
> existing code.
I think the compressed cache project is the one John mentioned.
> Most times, someone else already had the same idea and the same
> general solution. Good, less work. Sometimes you were stupid and the
> existing solution is much better. Good to know. And on very rare
> occasions, your solution is better, at least in some details.
>
> Well, in this case, the sourceforge project appears to be silent since
> half a year or so, whatever that means.
It means Rodrigo has been busy writing his dissertation, and, most
recently, looking for a job :-) I've talked to him recently, and he
intends to continue on with the project, as he might have some time to
devote to it.
On a side note, though, one thing that has still not been explored
is compressed _swap_. Since the project's focus has been performance
gains, and it was not clear from the beginning that compressing swap
actually results in performance gains, it still has not been
implemented. That said, this *is* on the project's to-study list.
Hope this helps,
--
Livio B. Soares
* Re: Re: Swap Compression
2003-04-27 18:31 ` rmoser
@ 2003-04-27 19:04 ` Jörn Engel
2003-04-27 19:57 ` Livio Baldini Soares
` (2 more replies)
0 siblings, 3 replies; 16+ messages in thread
From: Jörn Engel @ 2003-04-27 19:04 UTC (permalink / raw)
To: rmoser; +Cc: linux-kernel
On Sun, 27 April 2003 14:31:25 -0400, rmoser wrote:
>
> Yeah I know. I really need to get a working user-space version of this
> thing so I can bench it against source tarballs and extractions from
> /dev/random a bunch of times.
Your compression won't be too good on /dev/random. ;)
But /dev/kmem might be useful test data.
> >Why should someone decide on an algorithm before measuring?
>
> Erm. You can use any one of the other algos you want, there's a lot
> out there. Go ahead and try zlib/gzip/bzip2/7zip/compress if you
> want. I just figured, the simplest algo would hopefully be the fastest.
Hopefully, yes. Also, the one that doesn't trash the L[12] caches too
much will be "fast" in that it doesn't slow down the rest of the
system. This aspect really favors current uncompressing code, as it is
very easy on the CPU.
> But really we need some finished code to make this work :-)
Yes!
> The real reason I'm working on this is because I favor speed completely
> over size in this application. It's all modular; make the interface for the
> kernel module flexible enough to push in gzip/bzip2 modules or whatever
> else you want:
>
> <M> Swap partition support
> ____<M> Compressed Swap
> ________<M> fcomp
> ____________<M> fcomp-extended support
> ________<M> zlib
> ________<M> gzip
> ________<M> bzip2
> ________<M> compress
> ____<M> Swap on RAM
Exactly. It might also be possible to choose the algorithm at bootup
or during runtime.
> As far as I know, zlib, gzip, and compress use Huffman trees. I am pretty
> sure about zlib, not sure about the others. gzip I believe uses 16 bit
> backpointers as well, which means you need a 64k processing window
> for [de]compression, not to mention that it takes more work. bzip2 we
> all know is CPU-intensive, or at least it was last time I checked.
Yes, zlib eats up several 100k of memory. You really notice this when
you add it to a bootloader that was (once) supposed to be small. :)
> Yeah true. But for guessing the decompressed size I meant like when
> you don't want to load the WHOLE block into RAM at once. Ahh, so you
> swap in pages? Well whatever unit you swap in, that's how you should
> compress things. Look I'm confusing myself here, just ignore anything
> that sounds retarded--I'm just a kid after all :-p
Will do. :)
It should be fine to load a page (maybe several pages) at once. There
is read-ahead code all over the kernel and this is nothing else. Plus
it simplifies things.
> >a) Even with 4M, two pages of 4k each don't hurt that much. If they
> >do, the whole compression trick doesn't pay off at all.
> >b) Compression ratios usually suck with small buffers.
> >
> a)
> True, two chunks of 4k don't hurt. But how big are swap pages? Assuming
> the page can't be compressed at all, it's [page size / 256] * 3 + [page size]
> for the maximum compressed data size. (4144 bytes for 4k of absolutely
> non-redundant data within 256 bytes).
4k + sizeof(header). If the compression doesn't manage to shrink the
code, it should return an error. The calling code will then put an
uncompressed flag in the header and we're done.
The header may be as small as one byte.
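A rough sketch of that calling convention, assuming the fox_compress()
interface proposed elsewhere in this thread; the one-byte header values and
the stub compressor are illustrative only.

/* Illustrative only: store a page compressed when fox_compress() managed
 * to shrink it, otherwise raw behind a one-byte "uncompressed" flag. */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096
#define HDR_RAW   0x00          /* page stored as-is */
#define HDR_FCOMP 0x01          /* page stored fcomp-compressed */

/* stand-in so the sketch is self-contained: a real fox_compress() would
 * fill 'output' and *outlen; this one always reports "did not shrink" */
static int fox_compress(unsigned char *input, unsigned char *output,
                        uint32_t inlen, uint32_t *outlen)
{
        (void)input; (void)output; (void)inlen; (void)outlen;
        return -1;
}

/* Returns the number of bytes occupied in 'slot', header included. */
static uint32_t pack_page(unsigned char *page, unsigned char *slot)
{
        uint32_t outlen = PAGE_SIZE - 1;        /* must shrink to be useful */

        if (fox_compress(page, slot + 1, PAGE_SIZE, &outlen) == 0) {
                slot[0] = HDR_FCOMP;
                return outlen + 1;
        }
        slot[0] = HDR_RAW;                      /* compression did not pay off */
        memcpy(slot + 1, page, PAGE_SIZE);
        return PAGE_SIZE + 1;
}

int main(void)
{
        static unsigned char page[PAGE_SIZE], slot[PAGE_SIZE + 1];

        return pack_page(page, slot) == PAGE_SIZE + 1 ? 0 : 1;
}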
> b)
> Yeah they do. But to find the compression performance (ratio) loss, you
> do [max pointer distance]/[block size], meaning like for this one
> 256/[page size]. If you do a 4k page size, you lose 6.25% of the compression
> performance (so if we average 2:1, we'll average 1.875:1). What IS the page
> size the kernel uses?
4k on most architectures, 8k on alpha.
> >I didn't look that far yet. What you could do, is read through
> >/usr/src/linux/Documentation/CodingStyle. It is just so much nicer
> >(and faster) to read and sticking to it usually produces better code.
> >
>
> Eh, I should just crack open the kernel source and imitate it. But I have
> that file on my hard disk, so mebbe I should look. Mebbe I should take a
> whack at getting the compression algo to work too, instead of pushing it
> on someone else.
:)
Imitating the kernel source may or may not be a good idea, btw. It is
very diverse in style and quality. Some drivers are absolutely
horrible, but they are just drivers, so no one without that hardware
cares.
Another thing: Did you look at the links John Bradford gave yet? It
doesn't hurt to try something alone first, but once you have a good
idea about what the problem is and how you would solve it, look for
existing code.
Most times, someone else already had the same idea and the same
general solution. Good, less work. Sometimes you were stupid and the
existing solution is much better. Good to know. And on very rare
occasions, your solution is better, at least in some details.
Well, in this case, the sourceforge project appears to be silent since
half a year or so, whatever that means.
Jörn
--
Data dominates. If you've chosen the right data structures and organized
things well, the algorithms will almost always be self-evident. Data
structures, not algorithms, are central to programming.
-- Rob Pike
* Re: Re: Swap Compression
2003-04-27 17:51 ` Jörn Engel
2003-04-27 18:22 ` William Lee Irwin III
@ 2003-04-27 18:31 ` rmoser
2003-04-27 19:04 ` Jörn Engel
1 sibling, 1 reply; 16+ messages in thread
From: rmoser @ 2003-04-27 18:31 UTC (permalink / raw)
To: Jörn Engel; +Cc: linux-kernel
*********** REPLY SEPARATOR ***********
On 4/27/2003 at 7:51 PM Jörn Engel wrote:
>On Sun, 27 April 2003 13:24:37 -0400, rmoser wrote:
>> >int fox_compress(unsigned char *input, unsigned char *output,
>> > uint32_t inlen, uint32_t *outlen);
>> >
>> >int fox_decompress(unsigned char *input, unsigned char *output,
>> > uint32_t inlen, uint32_t *outlen);
>>
>> Ey? uint32_t*? I assume that's a mistake....
>
>Nope. outlen is changed, you need a pointer here.
>
ahh okay, gotcha
>> Anyhow, this wasn't what
>> I was asking. What I was asking was about how to determine how much
>> data to send to compress it. Read the message again, the whole thing
>> this time.
>
>I did. But modularity is the key here. The whole idea may be great or
>plain bullshit, depending on the benchmarks. Which one it is depends
>on the compression algorithm used, among other things. Maybe your
>compression algo is better for some machines, zlib for others, etc.
>
Yeah I know. I really need to get a working user-space version of this
thing so I can bench it against source tarballs and extractions from
/dev/random a bunch of times.
>Why should someone decide on an algorithm before measuring?
>
Erm. You can use any one of the other algos you want, there's a lot
out there. Go ahead and try zlib/gzip/bzip2/7zip/compress if you
want. I just figured, the simplest algo would hopefully be the fastest.
But really we need some finished code to make this work :-)
The real reason I'm working on this is because I favor speed completely
over size in this application. It's all modular; make the interface for the
kernel module flexible enough to push in gzip/bzip2 modules or whatever
else you want:
<M> Swap partition support
____<M> Compressed Swap
________<M> fcomp
____________<M> fcomp-extended support
________<M> zlib
________<M> gzip
________<M> bzip2
________<M> compress
____<M> Swap on RAM
As far as I know, zlib, gzip, and compress use Huffman trees. I am pretty
sure about zlib, not sure about the others. gzip I believe uses 16 bit
backpointers as well, which means you need a 64k processing window
for [de]compression, not to mention that it takes more work. bzip2 we
all know is CPU-intensive, or at least it was last time I checked.
>> >Then the mm code can pick any useful size for compression.
>>
>> Eh? I'd rather the code alloc space itself and do all its own handling.
>That
>> way you don't have to second-guess the buffer size for decompressed
>> data if you don't do all-at-once decompression (i.e. decompressing x86
>> segments, all-at-once would be to decompress the whole compressed
>> block of N size to 64k, where partial would be to pull in N/n at a time
>and
>> decompress in n sets of N/n, which would give varying sized output).
>
>Segments are old, stupid and x86 only. What you want is a number of
>pages, maybe just one at a time. Always compress chunks of the same
>size and you don't have to guess the decompressed size.
>
Yeah true. But for guessing the decompressed size I meant like when
you don't want to load the WHOLE block into RAM at once. Ahh, so you
swap in pages? Well whatever unit you swap in, that's how you should
compress things. Look I'm confusing myself here, just ignore anything
that sounds retarded--I'm just a kid after all :-p
>> Another thing is that the code isn't made to strictly stick to
>compressing
>> or decompressing a whole input all at once; you may break down the
>> input into smaller pieces. Therefore it does need to think about how
>much
>> it's gonna actually tell you to pull off when you inquire about the size
>to
>> remove from the stream (for compression at least), because you might
>> break it if you pull too much data off midway through compression. The
>> advantage of this method is in when you're A) Compressing files, and
>> B) trying to do this kind of thing on EXTREMELY low RAM systems,
>> where you can't afford to pass whole buffers back and forth. (Think 4
>meg)
>
>a) Even with 4M, two pages of 4k each don't hurt that much. If they
>do, the whole compression trick doesn't pay off at all.
>b) Compression ratios usually suck with small buffers.
>
a)
True, two chunks of 4k don't hurt. But how big are swap pages? Assuming
the page can't be compressed at all, it's [page size / 256] * 3 + [page size]
for the maximum compressed data size. (4144 bytes for 4k of absolutely
non-redundant data within 256 bytes).
b)
Yeah they do. But to find the compression performance (ratio) loss, you
do [max pointer distance]/[block size], meaning like for this one
256/[page size]. If you do a 4k page size, you lose 6.25% of the compression
performance (so if we average 2:1, we'll average 1.875:1). What IS the page
size the kernel uses?
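Plugging those two formulas into a few lines of throwaway C, using the common
4k page size (the 3 bytes per 256-byte stretch is the escape cost described
above):

/* Worked check of the two formulas above for a 4k page. */
#include <stdio.h>

int main(void)
{
        unsigned page_size = 4096;
        unsigned worst = (page_size / 256) * 3 + page_size;     /* 4144  */
        double loss = 256.0 / page_size;                        /* 6.25% */

        printf("worst case for a %u byte page: %u bytes\n", page_size, worst);
        printf("ratio loss from the 256 byte window: %.2f%%\n", loss * 100.0);
        return 0;
}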
>> You do actually understand the code, right? I have this weird habit of
>> making code and doing such obfuscating comments that people go
>> "WTF is this?" when they see it. Then again, you're probably about
>> 12 classes past me in C programming, so maybe it's just my logic that's
>> flawed. :)
>
>I didn't look that far yet. What you could do, is read through
>/usr/src/linux/Documentation/CodingStyle. It is just so much nicer
>(and faster) to read and sticking to it usually produces better code.
>
Eh, I should just crack open the kernel source and imitate it. But I have
that file on my hard disk, so mebbe I should look. Mebbe I should take a
whack at getting the compression algo to work too, instead of pushing it
on someone else.
>Jörn
>
>--
>Beware of bugs in the above code; I have only proved it correct, but
>not tried it.
>-- Donald Knuth
* Re: Re: Swap Compression
2003-04-27 17:51 ` Jörn Engel
@ 2003-04-27 18:22 ` William Lee Irwin III
2003-04-27 18:31 ` rmoser
1 sibling, 0 replies; 16+ messages in thread
From: William Lee Irwin III @ 2003-04-27 18:22 UTC (permalink / raw)
To: Jörn Engel; +Cc: rmoser, linux-kernel
On Sun, Apr 27, 2003 at 07:51:47PM +0200, Jörn Engel wrote:
> Segments are old, stupid and x86 only. What you want is a number of
> pages, maybe just one at a time. Always compress chunks of the same
> size and you don't have to guess the decompressed size.
Well, not really, but x86's notion of segments differs substantially
from that held by other cpus.
-- wli
* Re: Re: Swap Compression
2003-04-27 17:24 ` rmoser
@ 2003-04-27 17:51 ` Jörn Engel
2003-04-27 18:22 ` William Lee Irwin III
2003-04-27 18:31 ` rmoser
0 siblings, 2 replies; 16+ messages in thread
From: Jörn Engel @ 2003-04-27 17:51 UTC (permalink / raw)
To: rmoser; +Cc: linux-kernel
On Sun, 27 April 2003 13:24:37 -0400, rmoser wrote:
> >int fox_compress(unsigned char *input, unsigned char *output,
> > uint32_t inlen, uint32_t *outlen);
> >
> >int fox_decompress(unsigned char *input, unsigned char *output,
> > uint32_t inlen, uint32_t *outlen);
>
> Ey? uint32_t*? I assume that's a mistake....
Nope. outlen is changed, you need a pointer here.
> Anyhow, this wasn't what
> I was asking. What I was asking was about how to determine how much
> data to send to compress it. Read the message again, the whole thing
> this time.
I did. But modularity is the key here. The whole idea may be great or
plain bullshit, depending on the benchmarks. Which one it is depends
on the compression algorithm used, among other things. Maybe your
compression algo is better for some machines, zlib for others, etc.
Why should someone decide on an algorithm before measuring?
> >Then the mm code can pick any useful size for compression.
>
> Eh? I'd rather the code alloc space itself and do all its own handling. That
> way you don't have to second-guess the buffer size for decompressed
> data if you don't do all-at-once decompression (i.e. decompressing x86
> segments, all-at-once would be to decompress the whole compressed
> block of N size to 64k, where partial would be to pull in N/n at a time and
> decompress in n sets of N/n, which would give varying sized output).
Segments are old, stupid and x86 only. What you want is a number of
pages, maybe just one at a time. Always compress chunks of the same
size and you don't have to guess the decompressed size.
> Another thing is that the code isn't made to strictly stick to compressing
> or decompressing a whole input all at once; you may break down the
> input into smaller pieces. Therefore it does need to think about how much
> it's gonna actually tell you to pull off when you inquire about the size to
> remove from the stream (for compression at least), because you might
> break it if you pull too much data off midway through compression. The
> advantage of this method is in when you're A) Compressing files, and
> B) trying to do this kind of thing on EXTREMELY low RAM systems,
> where you can't afford to pass whole buffers back and forth. (Think 4 meg)
a) Even with 4M, two pages of 4k each don't hurt that much. If they
do, the whole compression trick doesn't pay off at all.
b) Compression ratios usually suck with small buffers.
> You do actually understand the code, right? I have this weird habit of
> making code and doing such obfuscating comments that people go
> "WTF is this?" when they see it. Then again, you're probably about
> 12 classes past me in C programming, so maybe it's just my logic that's
> flawed. :)
I didn't look that far yet. What you could do, is read through
/usr/src/linux/Documentation/CodingStyle. It is just so much nicer
(and faster) to read and sticking to it usually produces better code.
Jörn
--
Beware of bugs in the above code; I have only proved it correct, but
not tried it.
-- Donald Knuth
* Re: Re: Swap Compression
2003-04-27 9:04 ` Jörn Engel
@ 2003-04-27 17:24 ` rmoser
2003-04-27 17:51 ` Jörn Engel
0 siblings, 1 reply; 16+ messages in thread
From: rmoser @ 2003-04-27 17:24 UTC (permalink / raw)
To: Jörn Engel; +Cc: linux-kernel
*********** REPLY SEPARATOR ***********
On 4/27/2003 at 11:04 AM Jörn Engel wrote:
>On Sat, 26 April 2003 22:24:04 -0400, rmoser wrote:
>>
>> So what's the best way to do this? I was originally thinking like this:
>>
>> Grab some swap data
>> Stuff it into fcomp_push()
>> When you have 100k of data, seal it up
>> Write that 100k block
>
>I would like something like this:
>
>/* fox_compress
> * @input: Pointer to uncompressed data
> * @output: Pointer to buffer
> * @inlen: Size of uncompressed data
> * @outlen: Size of the buffer
> *
> * Return:
> * 0 on successful compression
> * -Esomething on error
> *
> * Side effects:
> * Output buffer is filled with random data after an error
> * condition or the compressed input data on success.
> * outlen remains unchanged on error and holds the compressed
> * data size on success
> *
> * If the output buffer is too small, this is an error.
> */
>int fox_compress(unsigned char *input, unsigned char *output,
> uint32_t inlen, uint32_t *outlen);
>
>/* fox_decompress
> * see above, basically
> */
>int fox_decompress(unsigned char *input, unsigned char *output,
> uint32_t inlen, uint32_t *outlen);
>
Ey? uint32_t*? I assume that's a mistake.... Anyhow, this wasn't what
I was asking. What I was asking was about how to determine how much
data to send to compress it. Read the message again, the whole thing
this time.
>Then the mm code can pick any useful size for compression.
>
>Jörn
Eh? I'd rather the code alloc space itself and do all its own handling. That
way you don't have to second-guess the buffer size for decompressed
data if you don't do all-at-once decompression (i.e. decompressing x86
segments, all-at-once would be to decompress the whole compressed
block of N size to 64k, where partial would be to pull in N/n at a time and
decompress in n sets of N/n, which would give varying sized output).
Another thing is that the code isn't made to strictly stick to compressing
or decompressing a whole input all at once; you may break down the
input into smaller pieces. Therefore it does need to think about how much
it's gonna actually tell you to pull off when you inquire about the size to
remove from the stream (for compression at least), because you might
break it if you pull too much data off midway through compression. The
advantage of this method is in when you're A) Compressing files, and
B) trying to do this kind of thing on EXTREMELY low RAM systems,
where you can't afford to pass whole buffers back and forth. (Think 4 meg)
You do actually understand the code, right? I have this weird habit of
making code and doing such obfuscating comments that people go
"WTF is this?" when they see it. Then again, you're probably about
12 classes past me in C programming, so maybe it's just my logic that's
flawed. :)
--Bluefox Icy
(John Moser in case something winds up with my name on it)
>
>--
>My second remark is that our intellectual powers are rather geared to
>master static relations and that our powers to visualize processes
>evolving in time are relatively poorly developed.
>-- Edsger W. Dijkstra
* Re: Re: Swap Compression
2003-04-27 2:24 ` rmoser
@ 2003-04-27 9:04 ` Jörn Engel
2003-04-27 17:24 ` rmoser
0 siblings, 1 reply; 16+ messages in thread
From: Jörn Engel @ 2003-04-27 9:04 UTC (permalink / raw)
To: rmoser; +Cc: linux-kernel
On Sat, 26 April 2003 22:24:04 -0400, rmoser wrote:
>
> So what's the best way to do this? I was originally thinking like this:
>
> Grab some swap data
> Stuff it into fcomp_push()
> When you have 100k of data, seal it up
> Write that 100k block
I would like something like this:
/* fox_compress
* @input: Pointer to uncompressed data
* @output: Pointer to buffer
* @inlen: Size of uncompressed data
* @outlen: Size of the buffer
*
* Return:
* 0 on successful compression
* -Esomething on error
*
* Side effects:
* Output buffer is filled with random data after an error
* condition or the compressed input data on success.
* outlen remains unchanged on error and holds the compressed
* data size on success
*
* If the output buffer is too small, this is an error.
*/
int fox_compress(unsigned char *input, unsigned char *output,
uint32_t inlen, uint32_t *outlen);
/* fox_decompress
* see above, basically
*/
int fox_decompress(unsigned char *input, unsigned char *output,
uint32_t inlen, uint32_t *outlen);
Then the mm code can pick any useful size for compression.
Jörn
--
My second remark is that our intellectual powers are rather geared to
master static relations and that our powers to visualize processes
evolving in time are relatively poorly developed.
-- Edsger W. Dijkstra
* Re: Re: Swap Compression
[not found] ` <20030426160920.GC21015@wohnheim.fh-wedel.de>
@ 2003-04-27 2:24 ` rmoser
2003-04-27 9:04 ` Jörn Engel
0 siblings, 1 reply; 16+ messages in thread
From: rmoser @ 2003-04-27 2:24 UTC (permalink / raw)
To: Jörn Engel; +Cc: linux-kernel
So what's the best way to do this? I was originally thinking like this:
Grab some swap data
Stuff it into fcomp_push()
When you have 100k of data, seal it up
Write that 100k block
But does swap compress in 64k blocks? 80x86 has 64k segments.
The calculation for compression performance loss, 256/size_of_input,
gives a loss of 0.003906 (0.3906%) for a dataset 65536 bytes long.
So would it be better to just compress segments, whatever size they
may be, and index those? This would, of course, be much more efficient
in terms of finding the data to uncompress. (And system dependent.)
The algo is flexible; it doesn't care about the size of the input. If you
pass full segments at once, you could gain a little more speed. Best
part: if you rewrite my decompression code to do the forward calculations
for straight data copy (I'll likely do this before the night is up myself),
you will avoid a lot of realloc()s in the data copy between pointers. This
could also optimize out the decrements of the distance-to-next-pointer
counter, and all of this together gives a lot of speed increase over my
original code. Since this is a logic change, the compiler can't do this
optimization for you.
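To illustrate just the realloc point, here is a toy back-pointer decoder over
a made-up token layout (not the actual fcomp format): the destination buffer
is preallocated at the known size, so the copy loop never has to grow
anything.

/* Toy decoder, made-up format: a token byte of 0 introduces a match
 * (one distance byte, one length byte); any other token is a literal
 * run of that many bytes.  The output buffer is preallocated at the
 * known uncompressed size, so no realloc() is ever needed. */
#include <stddef.h>
#include <stdio.h>

static size_t toy_decompress(const unsigned char *in, size_t inlen,
                             unsigned char *out, size_t outmax)
{
        size_t ip = 0, op = 0;

        while (ip < inlen && op < outmax) {
                unsigned char tok = in[ip++];

                if (tok == 0) {                         /* back pointer */
                        size_t dist, len;

                        if (ip + 2 > inlen)
                                break;
                        dist = in[ip++];
                        len  = in[ip++];
                        if (dist == 0 || dist > op)
                                break;                  /* corrupt input */
                        while (len-- && op < outmax) {
                                out[op] = out[op - dist];
                                op++;
                        }
                } else {                                /* literal run */
                        while (tok-- && ip < inlen && op < outmax)
                                out[op++] = in[ip++];
                }
        }
        return op;                                      /* bytes produced */
}

int main(void)
{
        const unsigned char in[] = { 3, 'a', 'b', 'c', 0, 3, 5 };
        unsigned char out[16];
        size_t n = toy_decompress(in, sizeof(in), out, sizeof(out));

        printf("%.*s\n", (int)n, (const char *)out);    /* prints "abcabcab" */
        return 0;
}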
Another thing, I did state the additional overhead (which now is going to be
64K + code + 256 byte analysis section + 64k uncompressed data)
before, but you can pull in less than the full block, decompress it, put it
where it goes, and pull in more. So on REALLY small systems, you
can still do pagefile buffering and not blow out RAM with the extra 128k
you may need. (heck, all the work could be done in one 64k segment if
you're that determined. Then you could compress the swap on little 6 MB
boxen).
--Bluefox Icy
* Re: Re: Swap Compression
2003-04-25 22:48 rmoser
@ 2003-04-26 9:17 ` Jörn Engel
[not found] ` <200304261148590300.00CE9372@smtp.comcast.net>
0 siblings, 1 reply; 16+ messages in thread
From: Jörn Engel @ 2003-04-26 9:17 UTC (permalink / raw)
To: rmoser; +Cc: linux-kernel
On Fri, 25 April 2003 18:48:41 -0400, rmoser wrote:
>
> Yeah, I had to mail it 3 times. Last time I figured it out.
<more formal junk>
- Your mailer doesn't generate In-Reply-To: or References: Headers.
This breaks threading.
- It is usually more readable if you reply below the quoted text you
refer to.
- Most people prefer it if you reply to all. Not everyone here is
actually subscribed to the list.
- Feel free to ignore any of this, as long as the mail contains
interesting information. :)
</more formal junk>
> As for the performance hit, the original idea of that very tiny format was
> to pack 6502 programs into 4k of code. The expansion phase is very
> tight and very efficient, and on a ... anything... it will provide no problem.
> The swap-on-ram as long as it's not like 200 MB uncompressed SOR and
> 1 MB RAM will I think work great in the decompression phase.
>
> Compression will take a little overhead. I think if you use a boyer-moore
> fast string search algo for binary strings (yes you can do this), you can
> quickly compress the data. It may be like.. just a guess... 10-30 times
> more overhead than the decompression phase. So use it on at least a
> 10-30 mhz processor. If I ever write the code, it won't be kernel; just the
> compression/decompression program (userspace). Take the code and
> stuff it into the kernel if I do. I'll at the point of the algo coming in to
> existence make another estimate.
A userspace program would be just fine. Send it to me and I'll convert
it to the kernel, putting it somewhere under lib/.
Do you have any problems with a GPL license to your code (necessary
for kernel port)?
> The real power in this is Swap on RAM, but setting that as having precedence
> over swap on disk (normal swaps) would decrease disk swap usage by
> supplying more RAM in RAM. And of course swapping RAM to RAM is
> a lot faster. I'm looking at this for PDA's but yes I will be running this on
> my desktop the day we see it.
Swapping RAM to RAM sounds interesting, but also quite complicated. As
a first step, I would try to compress the swap data before going to
disk; that should be relatively simple to do.
("I would" means, I will if I find the time for it.)
> Well, I could work on the compression code, mebbe I can put the tarball up
> here. If I do I'd expect someone to add the code to swap to work with it--in
> kernel 2.4 at the very least (port it to dev kernel later!). As a separate module.
> We don't want code that could be mean in the real swap driver. :)
Right. But for 2.4, there is no swap driver that you can simply
enable or disable. I hacked up a patch, but so far, disabling swap
eats ~100k of memory every second, so that clearly needs more work.
Jörn
--
Do not stop an army on its way home.
-- Sun Tzu
* Re: Re: Swap Compression
@ 2003-04-25 22:48 rmoser
2003-04-26 9:17 ` Jörn Engel
0 siblings, 1 reply; 16+ messages in thread
From: rmoser @ 2003-04-25 22:48 UTC (permalink / raw)
To: linux-kernel
Yeah, I had to mail it 3 times. Last time I figured it out.
As for the performance hit, the original idea of that very tiny format was
to pack 6502 programs into 4k of code. The expansion phase is very
tight and very efficient, and on... anything... it will pose no problem.
The swap-on-ram, as long as it's not something like 200 MB of uncompressed
SOR and 1 MB of RAM, will I think work great in the decompression phase.
Compression will take a little overhead. I think if you use a Boyer-Moore
fast string search algo for binary strings (yes, you can do this), you can
quickly compress the data. It may be like... just a guess... 10-30 times
more overhead than the decompression phase. So use it on at least a
10-30 MHz processor. If I ever write the code, it won't be kernel; just the
compression/decompression program (userspace). Take the code and
stuff it into the kernel if I do. I'll make another estimate at the point
the algo comes into existence.
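For reference, a plain textbook Boyer-Moore-Horspool search over raw bytes
looks like this; it is ordinary reference code using Horspool's simplified
shift table, not anything from the proposed swap code.

/* Textbook Boyer-Moore-Horspool search over binary data: the shift
 * table has one entry per possible byte value, so it works on raw
 * bytes just as well as on text.  Returns the match offset or -1. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static long bmh_search(const unsigned char *hay, size_t haylen,
                       const unsigned char *needle, size_t nlen)
{
        size_t skip[256];
        size_t i, pos = 0;

        if (nlen == 0 || haylen < nlen)
                return -1;
        for (i = 0; i < 256; i++)
                skip[i] = nlen;                 /* default shift: whole pattern */
        for (i = 0; i + 1 < nlen; i++)
                skip[needle[i]] = nlen - 1 - i; /* shift from last occurrence */

        while (pos + nlen <= haylen) {
                if (memcmp(hay + pos, needle, nlen) == 0)
                        return (long)pos;
                pos += skip[hay[pos + nlen - 1]];
        }
        return -1;
}

int main(void)
{
        const unsigned char hay[] = "the quick brown fox";

        printf("%ld\n", bmh_search(hay, sizeof(hay) - 1,
                                   (const unsigned char *)"brown", 5)); /* 10 */
        return 0;
}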
The real power in this is Swap on RAM, but giving that precedence
over swap on disk (normal swaps) would decrease disk swap usage by
supplying more RAM in RAM. And of course swapping RAM to RAM is
a lot faster. I'm looking at this for PDAs, but yes, I will be running this on
my desktop the day we see it.
Well, I could work on the compression code, mebbe I can put the tarball up
here. If I do I'd expect someone to add the code to swap to work with it--in
kernel 2.4 at the very least (port it to dev kernel later!). As a separate module.
We don't want code that could be mean in the real swap driver. :)
--Bluefox Icy
---- ORIGINAL MESSAGE ----
List: linux-kernel
Subject: Re: Swap Compression
From: Jörn Engel <joern () wohnheim ! fh-wedel ! de>
Date: 2003-04-25 21:14:05
On Fri, 25 April 2003 16:48:15 -0400, rmoser wrote:
>
> Sorry if this is HTML mailed. I don't know how to control those settings
It is not, but if you could limit lines to <80 characters, that would
be nice.
> COMPRESSED SWAP
>
> This is mainly for things like linux on iPaq (handhelds.org) and
> people who like to play (me :), but how about compressed swap and
> RAM as swap? To be plausable, we need a very fast compression
> algorithm. I'd say use the following back pointer algorithm (this
> is headerless and I coded a decompressor in 6502 assembly in about
> 315 bytes) and 100k block sizes (compress streams of data until they
> are 100k in size, then stop. Include the cap at the end in the
> 100k).
>
> [...]
>
> CONCLUSION
>
> I think compressed swap and swap-on-ram with compression would be a
> great idea, especially for embedded systems. High-performance
> systems that can handle the compression/decompression without
> blinking would especially benefit, as the swap-on-ram feature would
> give an almost seamless RAM increase. Low-performance systems would
> take a performance hit, but embedded devices would still benefit
> from the swap-on-ram with compression RAM boost, considering they
> can probably handle the algorithm. I am looking forward to seeing
> this implemented in 2.4 and 2.5/2.6 if it is adopted.
Nice idea. This might even benefit normal PC-style boxes, if the
performance loss through compression is more than compensated by I/O
gains (less data transferred).
The tiny problem I see is that most people here have tons of whacky
ideas themselves but lack the time to actually implement them. If you
really want to get it done, do it yourself. It doesn't have to be
perfect, if it works in principle and appears to be promising, you
will likely receive enough help to finish it. But you have to get
there first.
At least, that is how it usually works. Feel free to prove me wrong.
Jörn
--
When in doubt, use brute force.
-- Ken Thompson
Thread overview: 16+ messages
2003-04-25 22:32 Re: Swap Compression rmoser
2003-04-28 21:35 ` Timothy Miller
2003-04-29 0:43 ` Con Kolivas
2003-04-25 22:48 rmoser
2003-04-26 9:17 ` Jörn Engel
[not found] ` <200304261148590300.00CE9372@smtp.comcast.net>
[not found] ` <20030426160920.GC21015@wohnheim.fh-wedel.de>
2003-04-27 2:24 ` rmoser
2003-04-27 9:04 ` Jörn Engel
2003-04-27 17:24 ` rmoser
2003-04-27 17:51 ` Jörn Engel
2003-04-27 18:22 ` William Lee Irwin III
2003-04-27 18:31 ` rmoser
2003-04-27 19:04 ` Jörn Engel
2003-04-27 19:57 ` Livio Baldini Soares
2003-04-27 20:24 ` rmoser
[not found] ` <200304271609460030.01FC8C2B@smtp.comcast.net>
2003-04-27 20:10 ` rmoser
2003-04-27 21:52 ` rmoser