From: Pozsar Balazs <pozsy@sch.bme.hu>
To: John Stoffel <stoffel@casc.com>
Cc: Rik van Riel <riel@conectiva.com.br>,
	Jason McMullan <jmcmullan@linuxcare.com>,
	<linux-kernel@vger.kernel.org>
Subject: Re: VM Requirement Document - v0.0
Date: Wed, 27 Jun 2001 16:09:23 +0200 (MEST)
Message-ID: <Pine.GSO.4.30.0106271550360.29611-100000@balu>
In-Reply-To: <15160.65442.682067.38776@gargle.gargle.HOWL>


> Rik> ... but I fail to see this one. If we get a low cache hit rate,
> Rik> couldn't that just mean we allocated too little memory for the
> Rik> cache ?
> Or that we're doing big sequential reads of file(s) which are larger
> than memory, in which case expanding the cache size buys us nothing,
> and can actually hurt us a lot.

I've got an idea about how to handle this situation generally (without
sending 'tips' to the kernel via madvise() or anything similar).
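
For reference, such a 'tip' would look roughly like this from userspace
(a minimal sketch; madvise(2) and MADV_SEQUENTIAL are real, the file
path is made up):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;
        char *p;
        int fd = open("/data/bigfile", O_RDONLY);  /* hypothetical path */

        if (fd < 0 || fstat(fd, &st) < 0)
            return 1;

        p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Hint: we will read once, front to back, so the kernel can
         * read ahead aggressively and reclaim pages behind us early. */
        madvise(p, st.st_size, MADV_SEQUENTIAL);

        /* ... stream through p here ... */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }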

Instead of sorting cached pages (I mean blocks of files) by last-touch
time and dropping the oldest page(s) when we're short on memory, I would
propose this nicer algorithm (I think this is relevant only to the read cache):

Suppose that files f1, f2, ..., fN are cached, that their sizes are
s1, s2, ..., sN, and that they were last touched t1, t2, ..., tN seconds
ago (t1 < t2 < ... < tN). Now we shouldn't automatically choose pages of
fN to drop; instead, a probability (chance) of being dropped could be
assigned to each file, for example:
 pI = sI*tI/SUM, where I is one of 1, 2, ..., N, and SUM is the sum of
 sJ*tJ over J = 1, ..., N.

With this, mostly newer files would stay in the cache, but older files
would still have a chance.
This could also be tuned; for example, to weight 't' more heavily,
sI*tI*tI could be used... and so on, the possibilities are endless.
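
To make the selection concrete, here is a userspace sketch of the
weighted random pick (the struct, the names, and the simple linear scan
are mine for illustration; this is not kernel code):

    #include <stdlib.h>

    struct cached_file {
        unsigned long size;  /* sI: cached size of file I           */
        unsigned long age;   /* tI: seconds since file I was touched */
    };

    /* Return the index of the file to drop pages from, chosen with
     * probability sI*tI / SUM(sJ*tJ). */
    static int pick_victim(const struct cached_file *f, int n)
    {
        unsigned long long sum = 0, acc = 0, r;
        int i;

        for (i = 0; i < n; i++)
            sum += (unsigned long long)f[i].size * f[i].age;

        /* drand48() is uniform in [0,1), so r is uniform in [0,sum) */
        r = (unsigned long long)(drand48() * (double)sum);

        for (i = 0; i < n; i++) {
            acc += (unsigned long long)f[i].size * f[i].age;
            if (r < acc)
                return i;
        }
        return n - 1;  /* guard against rounding at the top end */
    }

Replacing f[i].age with f[i].age * f[i].age in both loops gives the
sI*tI*tI variant.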


have a nice day,
Balazs Pozsar.

PS: If 'my' idea is the one already used in the kernel, then tell me :)
and give me some pointers on where to read more, before I say more stupid things.

> I personally don't feel that the cache should be allowed to grow over
> 50% of the system's memory at all, we've got so much in the cache at
> that point, that we're probably not hitting it all that much.
>
> This is why the discussion on the other cache scanning algorithm
> (2Q+?) was so interesting, since it looked to handle both the LRU
> vs. FIFO tradeoffs very nicely.
