Subject: mem=16MB laptop testing
From: William Lee Irwin III
Date: 2003-10-14 10:55 UTC
To: linux-kernel

So I tried mem=16m on my laptop (stinkpad T21). I made the following
potentially useless observations:

MemTotal:        12424 kB
MemFree:           352 kB
Buffers:           180 kB
Cached:           1328 kB
SwapCached:       3548 kB
Active:           4576 kB
Inactive:          664 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:        12424 kB
LowFree:           352 kB
SwapTotal:      997880 kB
SwapFree:       969112 kB
Dirty:               0 kB
Writeback:           0 kB
Mapped:           4320 kB
Slab:             4884 kB
Committed_AS:    45776 kB
PageTables:        656 kB
VmallocTotal:  1015752 kB
VmallocUsed:       732 kB
VmallocChunk:  1014368 kB

(a) The profile buffer requires about a 5MB bootmem allocation;
	this nearly halves MemTotal when used. I refrained from using
	it, since with it enabled this would be a test of mem=8m
	instead of mem=16m.
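
For reference, the buffer is sized off the kernel text; if I recall the
2.6-era init/main.c logic correctly (treat this as a sketch, not a quote),
it amounts to:

if (prof_shift) {
	/* one counter per 2^prof_shift bytes of text, taken straight
	 * out of bootmem, so it comes off MemTotal before userspace
	 * ever runs */
	prof_len = ((unsigned long) &_etext -
		    (unsigned long) &_stext) >> prof_shift;
	prof_buffer = alloc_bootmem(prof_len * sizeof(unsigned int));
}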

(b) bootmem allocations aren't adding up; after accounting for kernel
	text and data and tracing __alloc_bootmem_core(), about 0.5MB
	is still missing from MemTotal, and I haven't found where it's
	gone. mem_map's bootmem allocation also didn't show up in the
	logs, but it should only be 160KB for 16MB of RAM, not 512KB.
	Matt Mackall spotted this, too.
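
The 160KB estimate is just page count times sizeof(struct page), assuming
struct page weighs in at roughly 40 bytes on i386 here:

	16MB / 4KB per page          = 4096 struct pages
	4096 pages * ~40 bytes each  = ~160KB of mem_map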

(c) mem= no longer bounds the highest physical address, but rather
	the sum of memory in the e820 entries post-sanitization. This
	means a ZONE_NORMAL of about 384KB showed up, with duly
	perverse heuristic consequences for page_alloc.c.
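
To make the 384KB concrete: assuming the usual 640KB usable below 1MB and
the remainder in a single e820 entry starting at 1MB, summing to 16MB
pushes the top of memory past the 16MB ZONE_DMA boundary:

	640KB + 15744KB                  = 16MB total
	top of RAM = 1024KB + 15744KB    = 16768KB
	ZONE_NORMAL = 16768KB - 16384KB  = 384KB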

(d) The system thrashed heavily on boot, allowing the largest mm
	to acquire an RSS no larger than about 100KB. Getting the
	system to behave more normally required turning
	/proc/sys/vm/min_free_kbytes down to 128. Matt Mackall
	spotted this.
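
For anyone reproducing this, that amounts to

	# echo 128 > /proc/sys/vm/min_free_kbytes

(assuming a kernel recent enough to have the min_free_kbytes sysctl).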

(e) About 4.8MB are consumed by slab allocations at runtime. The
	top 10 slab abusers (KB in use, KB allocated, % in use) are:

inode_cache               840K           840K     100.00%   
dentry_cache              746K           753K      99.07%   
ext3_inode_cache          591K           592K      99.84%   
size-4096                 504K           504K     100.00%   
size-512                  203K           204K      99.75%   
size-2048                 182K           204K      89.22%   
pgd                       188K           188K     100.00%   
task_struct               100K           108K      92.86%   
vm_area_struct             93K           101K      92.28%   
blkdev_requests           101K           101K     100.00%   

The inode_cache culprit is the obvious butt of many complaints:
# find /sys | wc -l
2656

... which accounts for 100% of the 840KB. TANSTAAFL. OTOH, maybe we
need to learn to do better than pinning dentries and inodes in-core...
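
Incidentally, the slab table above is easy to regenerate without slabtop.
A minimal sketch, assuming the 2.6 "slabinfo - version: 2.x" field order
(name, active_objs, num_objs, objsize) and ignoring per-slab overhead:

/* slabkb.c: print per-cache slab memory in KB from /proc/slabinfo;
 * pipe through "sort -rn -k2 | head" for the top offenders. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[512], name[64];
	unsigned long active, total, objsize;
	FILE *f = fopen("/proc/slabinfo", "r");

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* skip the version banner and the column header */
		if (line[0] == '#' || !strncmp(line, "slabinfo", 8))
			continue;
		if (sscanf(line, "%63s %lu %lu %lu",
			   name, &active, &total, &objsize) != 4)
			continue;
		printf("%-24s %8luK %8luK\n", name,
		       active * objsize / 1024,
		       total * objsize / 1024);
	}
	fclose(f);
	return 0;
}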

(f) the VM appeared to favor processes that burn cpu and take many faults:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  nFLT COMMAND      
  486 wli       16   0  4196 2072  456 S  1.7 16.7   0:19.41 312k slabtop      
  413 wli       15   0  4360 1064  188 S  0.0  8.6   0:20.33 757k VMTop        
  420 wli       15   0  2004  456  320 R  0.3  3.7   0:15.41 229k top          
  416 wli       16   0  5964  184  116 S  0.0  1.5   0:01.09  13k sshd         
  435 root      15   0 22304  184   88 S  0.0  1.5   0:06.60  85k XFree86      
  409 wli       15   0  5964  180  112 S  0.0  1.4   0:00.21 1646 sshd         
  466 wli       16   0  5964  180  112 S  0.0  1.4   0:00.34 4598 sshd         
  373 root      15   0  1724  152  108 S  0.0  1.2   0:00.07 2126 cron         
  207 root      16   0  1520   96   48 S  0.0  0.8   0:00.14 4342 syslogd      
  417 wli       16   0  3088   88   68 S  0.0  0.7   0:00.08 2289 zsh          

The top 3 RSS consumers were statistics-reporting programs that (of
course) burn immense amounts of cpu and, in what is probably no
coincidence, also dominate the nFLT category. There are also a bunch
of mostly useless processes holding bits of RAM. Load control, anyone?
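
(For the record, top's nFLT column is, if memory serves, the cumulative
major fault count from /proc/<pid>/stat. A rough sketch of pulling it
out, assuming the usual 2.6 field order where majflt is field 12 and
comm contains no spaces:)

/* majflt.c: print the major fault count of a pid, like top's nFLT */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], buf[1024];
	unsigned long majflt;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path), "/proc/%s/stat", argv[1]);
	f = fopen(path, "r");
	if (!f || !fgets(buf, sizeof(buf), f)) {
		perror(path);
		return 1;
	}
	/* fields 1-11: pid comm state ppid pgrp session tty tpgid
	 * flags minflt cminflt; field 12 is majflt */
	if (sscanf(buf, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %lu",
		   &majflt) == 1)
		printf("pid %s: %lu major faults\n", argv[1], majflt);
	fclose(f);
	return 0;
}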

(g) X isn't terribly swift; it's slower than I remember old Sun IPCs
	being, though they had 24MB RAM. OTOH luserspace is much more
	bloated these days. zsh alone is at least 3 times the size of
	the ksh I used back then. fvwm2 is a lot bigger than fvwm1.
	And so on and so forth. I guess the upshot is that "unbloating"
	the kernel wouldn't do much good anyway, since luserspace isn't
	in any shape to run in this kind of environment anymore either.


-- wli
