From: Con Kolivas <kernel@kolivas.org>
To: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: ck@vds.kolivas.org, Andrew Morton <akpm@osdl.org>,
linux list <linux-kernel@vger.kernel.org>
Subject: lowmem_reserve question
Date: Mon, 3 Apr 2006 12:48:13 +1000 [thread overview]
Message-ID: <200604031248.13532.kernel@kolivas.org> (raw)
In-Reply-To: <442F9E91.1020306@yahoo.com.au>
On Sunday 02 April 2006 19:51, Nick Piggin wrote:
> That zone->lowmem_reserve[zone_idx(zone)] == 0 ?
I haven't figured out how to tackle the swap prefetch issue with
lowmem_reserve just yet. While trying to digest just what lowmem_reserve
does and how it's utilised, I looked at some of the numbers:
int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES-1] = { 256, 256, 32 };

	lower_zone->lowmem_reserve[j] = present_pages /
		sysctl_lowmem_reserve_ratio[idx];
This is interesting because there are no bounds on this value, and it seems
possible to set the sysctl so that lowmem_reserve ends up larger than the
zone size. OK, it's a sysctl, so if a user sets it wrongly that's their
fault... or should there be some upper bound?
Furthermore, now that we have the option of up to a 3GB lowmem split on
32 bit, we can have a default lowmem_reserve of almost 12MB (if I'm reading
it right), which seems very tight with only 16MB of ZONE_DMA.
On a basically idle 1GB lowmem box that I have it looks like this:

Node 0, zone      DMA
  pages free     1025
        min      15
        low      18
        high     22
        active   2185
        inactive 0
        scanned  555 (a: 21 i: 6)
        spanned  4096
        present  4096
        protection: (0, 0, 1007, 1007)
With 3GB lowmem the default settings seem too tight to me. The way I see it,
there should be some upper bounds on the lowmem reserves. Or perhaps I'm just
missing something again... I'm feeling even thicker than usual.
Cheers,
Con
Thread overview: 33+ messages
2006-04-02 4:01 2.6.16-ck3 Con Kolivas
2006-04-02 4:46 ` 2.6.16-ck3 Nick Piggin
2006-04-02 8:51 ` 2.6.16-ck3 Con Kolivas
2006-04-02 9:37 ` 2.6.16-ck3 Nick Piggin
2006-04-02 9:39 ` [ck] 2.6.16-ck3 Con Kolivas
2006-04-02 9:51 ` Nick Piggin
2006-04-03 2:48 ` Con Kolivas [this message]
2006-04-03 4:42 ` lowmem_reserve question Mike Galbraith
2006-04-03 4:48 ` Con Kolivas
2006-04-03 4:50 ` [ck] " Con Kolivas
2006-04-03 5:14 ` Mike Galbraith
2006-04-03 5:18 ` Con Kolivas
2006-04-03 5:31 ` Mike Galbraith
2006-04-04 2:35 ` [ck] " Con Kolivas
2006-04-06 1:10 ` [PATCH] mm: limit lowmem_reserve Con Kolivas
2006-04-06 1:29 ` Respin: " Con Kolivas
2006-04-06 2:43 ` Andrew Morton
2006-04-06 2:55 ` Con Kolivas
2006-04-06 2:58 ` Con Kolivas
2006-04-06 3:40 ` Andrew Morton
2006-04-06 4:36 ` Con Kolivas
2006-04-06 4:52 ` Con Kolivas
2006-04-07 6:25 ` Nick Piggin
2006-04-07 9:02 ` Con Kolivas
2006-04-07 12:40 ` Nick Piggin
2006-04-08 0:15 ` Con Kolivas
2006-04-08 0:55 ` Nick Piggin
2006-04-08 1:01 ` Con Kolivas
2006-04-08 1:25 ` Nick Piggin
2006-05-17 14:11 ` Con Kolivas
2006-05-18 7:11 ` Nick Piggin
2006-05-18 7:21 ` Con Kolivas
2006-05-18 7:26 ` Nick Piggin