From: Tim Schmielau <tim@physik3.uni-rostock.de>
To: Andrew Morton <akpm@osdl.org>
Cc: lkml <linux-kernel@vger.kernel.org>
Subject: swapspace layout improvements advocacy
Date: Fri, 14 Jan 2005 14:55:27 +0100 (CET)
Message-ID: <Pine.LNX.4.53.0501141433000.7044@gockel.physik3.uni-rostock.de>
In-Reply-To: <20050112105315.2ac21173.akpm@osdl.org>
On Wed, 12 Jan 2005, Andrew Morton wrote:
> Our current way of allocating swap can cause us to end up with little
> correlation between adjacent pages on-disk. But this can be improved. The
> old swapspace-layout-improvements patch was designed to fix that up, but
> needs more testing and tuning.
>
> It clusters pages on-disk via their virtual address.
2.6 seems sorely in need of such a patch.
I recently found out that 2.6 kernels degrade horribly once they go into
swap. On my dual PIII-850 with as little as 256 MB of RAM, I can easily
demonstrate this by opening 40-50 instances of Konqueror with large
tables, many images, and the like. Once the machine is 80-120 MB into
the 256 MB swap partition, it becomes almost unusable. Even the desktop
background picture takes ~20 s to redraw, not to mention any window
contents. And you can literally hear the reason for it: the hard disk is
seeking like crazy.
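The effect is easy to model. Here is a toy sketch (the slot numbers are
made up, not measured on this machine) of why on-disk layout matters:
when swap slots follow virtual-address order, faulting a process's pages
back in is one short contiguous walk of the disk head; when slots were
handed out in eviction order, every fault is a long seek.

```python
def total_seek_distance(slots):
    """Sum of head movements when pages are faulted back in
    virtual-address order, given the disk slot of each page."""
    return sum(abs(b - a) for a, b in zip(slots, slots[1:]))

# 8 consecutive virtual pages; slots = disk blocks holding their swap copies.
clustered = [100, 101, 102, 103, 104, 105, 106, 107]  # VA-ordered layout
scattered = [100, 517, 23, 880, 4, 761, 302, 650]     # eviction-order layout

print(total_seek_distance(clustered))  # -> 7: one short contiguous walk
print(total_seek_distance(scattered))  # -> 4208: head bounces across the disk
```

The ratio only gets worse with more pages, which is what makes the disk
audibly thrash under the workload described above.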
I've applied Ingo Molnar's swapspace-layout-improvements-2.6.9-rc1-bk12-A1
port of the patch to a 2.6.11-rc1 kernel, and it handles the same workload
much more smoothly. It's still slow, but you can work with it.
I just wonder why no one else has complained yet. Are systems with tight
memory constraints so uncommon these days?
Tim