From: Rik van Riel <firstname.lastname@example.org>
To: Daniel Phillips <email@example.com>
Cc: Andrew Morton <firstname.lastname@example.org>,
Lorenzo Allegrucci <email@example.com>,
Linux Kernel <firstname.lastname@example.org>
Subject: Re: qsbench, interesting results
Date: Tue, 1 Oct 2002 14:13:29 -0300 (BRT)
Message-ID: <Pine.LNX.4.44L.email@example.com>
On Tue, 1 Oct 2002, Daniel Phillips wrote:
> On Tuesday 01 October 2002 18:52, Rik van Riel wrote:
> > > The operative phrase here is "if that affects the usual case".
> > > Actually, the quicksort bench is not that bad a model of a usual case,
> > > i.e., a working set 50% bigger than RAM.
> > Having the working set of one process larger than RAM is
> > a highly unusual case ...
> No it's not, it's very similar to having several processes active whose
> working sets add up to more than RAM.
It's similar, but not the same. If you simply have too many
processes running at the same time, we could fix the problem
with load control: reducing the number of running processes
until their working sets fit in RAM.
With one process that needs 150% of RAM as its working set,
there simply is no way to win.
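Rik's "no way to win" point can be illustrated with a toy strict-LRU simulation (an assumption for illustration only; the actual kernel VM is not strict LRU): a process that cycles through a working set 150% the size of RAM faults on every single access, because each page is always evicted just before it is needed again.

```python
from collections import OrderedDict

def lru_hits(capacity, working_set, passes):
    """Count cache hits for a process cycling through `working_set`
    pages, with `capacity` page frames managed by strict LRU."""
    frames = OrderedDict()  # page -> None, ordered oldest-first
    hits = 0
    for _ in range(passes):
        for page in range(working_set):
            if page in frames:
                hits += 1
                frames.move_to_end(page)      # mark most recently used
            else:
                if len(frames) >= capacity:
                    frames.popitem(last=False)  # evict least recently used
                frames[page] = None
    return hits

# RAM of 100 frames, working set of 150 pages (150% of RAM):
# cyclic access under strict LRU never gets a single hit.
print(lru_hits(capacity=100, working_set=150, passes=10))  # 0
```

With the working set only slightly over RAM the hit rate is not merely reduced, it collapses to zero, which is why no replacement tuning can make this one-process case fast.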
> > > The page replacement algorithm ought to do something sane with it,
> > ... page replacement ought to give this process less RAM
> > because it isn't going to get enough to run anyway. No need
> > to have a process like qsbench make other processes run
> > slow, too.
> It should run the process as efficiently as possible, given that there
> isn't any competition.
If there is no competition I agree. However, if the system has
something else running at the same time as qsbench I think the
system should make an effort to have _only_ qsbench thrashing
and not every other process in the system as well.
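The distinction Rik is drawing here is essentially global versus local (per-process) page replacement. A toy sketch of the two policies, again under a strict-LRU model with made-up frame counts (neither is the kernel's actual algorithm):

```python
from collections import OrderedDict

class LRUPool:
    """A pool of page frames managed with strict LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = OrderedDict()

    def access(self, page):
        """Touch `page`; return True on a page fault (miss)."""
        if page in self.frames:
            self.frames.move_to_end(page)
            return False
        if len(self.frames) >= self.capacity:
            self.frames.popitem(last=False)  # evict least recently used
        self.frames[page] = None
        return True

def run(pools, passes=20, ws_big=150, ws_small=50):
    """Interleave a qsbench-like 150-page process with a well-behaved
    50-page process; `pools` maps process -> its LRUPool (possibly a
    single shared pool).  Returns page-fault counts per process."""
    misses = {"big": 0, "small": 0}
    for i in range(passes * ws_big):
        misses["big"] += pools["big"].access(("big", i % ws_big))
        misses["small"] += pools["small"].access(("small", i % ws_small))
    return misses

RAM = 90  # frames; too small for the combined working sets (150 + 50)

shared = LRUPool(RAM)  # global replacement: one pool for everyone
global_misses = run({"big": shared, "small": shared})

local = {"big": LRUPool(RAM - 50),   # local replacement: the small
         "small": LRUPool(50)}       # process keeps its working set
local_misses = run(local)
```

Under the shared pool, the big process's cyclic faulting evicts the small process's pages as well, so both thrash; with per-process pools the small process stops faulting after its 50 cold misses, and only the process that cannot fit anyway pays the cost.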
> > The difference there is that desktops don't have a working
> > set larger than RAM. They've got a few (dozen?) of processes,
> > each of which has a working set that easily fits in ram and
> > a bunch of pages, or whole processes, which aren't currently
> > in use.
> Try loading a high res photo in gimp and running any kind of interesting
> script-fu on it. If it doesn't thrash, boot with half the memory and
But should just the gimp thrash, or should every process on the
machine thrash?
> > There may be other modifications needed, too...
> No doubt, and for the first time, we've got a solid base to build on.
This will help a lot in fine-tuning the VM. I should do some more
procps work and extend vmstat to understand all the new VM statistics
being exported in /proc...
A: No.
Q: Should I include quotations after my reply?
Thread overview: 18+ messages
2002-09-29 14:15 qsbench, interesting results Lorenzo Allegrucci
2002-09-29 16:26 ` bert hubert
2002-09-29 19:56 ` Lorenzo Allegrucci
2002-09-29 20:00 ` bert hubert
2002-09-29 21:05 ` Lorenzo Allegrucci
2002-09-30 5:57 ` Andrew Morton
2002-10-01 14:05 ` Daniel Phillips
2002-10-01 16:52 ` Rik van Riel
2002-10-01 17:03 ` Daniel Phillips
2002-10-01 17:13 ` Rik van Riel [this message]
2002-10-01 17:20 ` Daniel Phillips
2002-10-01 17:29 ` Rik van Riel
2002-10-01 17:38 ` Daniel Phillips
2002-10-01 18:18 ` Lorenzo Allegrucci
2002-10-01 17:15 ` Larry McVoy
2002-10-01 18:04 ` Andrew Morton
2002-10-01 18:20 ` Rik van Riel
2002-10-01 18:35 ` Daniel Phillips