From: John Weekes <lists.xen@nuclearfallout.net>
To: Ian Pratt <Ian.Pratt@eu.citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: OOM problems
Date: Sat, 13 Nov 2010 00:27:52 -0800
Message-ID: <4CDE4C08.70309@nuclearfallout.net>
In-Reply-To: <4FA716B1526C7C4DB0375C6DADBC4EA38D80702C25@LONPMAILBOX01.citrite.net>
> What do the guests use for storage? (e.g. "blktap2 for VHD files on
> an iscsi mounted ext3 volume")
Simple sparse .img files on a local ext4 RAID volume, using "file:".
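For reference, the relevant pieces of such a setup look roughly like this
(an illustrative sketch only; the path, image name, and size are made up,
not taken from this thread):

    # create a 10 GB sparse backing file (illustrative path)
    dd if=/dev/zero of=/var/xen/images/guest1.img bs=1M count=0 seek=10240

    # matching disk line in the guest's xm config, using the loopback
    # "file:" backend
    disk = [ 'file:/var/xen/images/guest1.img,xvda,w' ]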
> It might be worth looking at /proc/slabinfo to see if there's
> anything suspicious.
I didn't see anything suspicious in there, but then I'm not sure exactly
what I should be looking for.

Here is the first page of slabtop output as it currently stands, in case
that helps; it's a bit easier to read than raw slabinfo.
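(The snapshot below came from something like the following; the exact
invocation is approximate. slabtop is part of procps and sorts by object
count by default:)

    # one-shot snapshot of the busiest slab caches
    slabtop -o | head -45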
Active / Total Objects (% used) : 274753 / 507903 (54.1%)
Active / Total Slabs (% used) : 27573 / 27582 (100.0%)
Active / Total Caches (% used) : 85 / 160 (53.1%)
Active / Total Size (% used) : 75385.52K / 107127.41K (70.4%)
Minimum / Average / Maximum Object : 0.02K / 0.21K / 4096.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
306397 110621 36% 0.10K 8281 37 33124K buffer_head
37324 26606 71% 0.54K 5332 7 21328K radix_tree_node
25640 25517 99% 0.19K 1282 20 5128K size-192
23472 23155 98% 0.08K 489 48 1956K sysfs_dir_cache
19964 19186 96% 0.95K 4991 4 19964K ext4_inode_cache
17860 13026 72% 0.19K 893 20 3572K dentry
14896 13057 87% 0.03K 133 112 532K size-32
8316 6171 74% 0.17K 378 22 1512K vm_area_struct
8142 5053 62% 0.06K 138 59 552K size-64
4320 3389 78% 0.12K 144 30 576K size-128
3760 2226 59% 0.19K 188 20 752K filp
3456 1875 54% 0.02K 24 144 96K anon_vma
3380 3001 88% 1.00K 845 4 3380K size-1024
3380 3365 99% 0.76K 676 5 2704K shmem_inode_cache
2736 2484 90% 0.50K 342 8 1368K size-512
2597 2507 96% 0.07K 49 53 196K Acpi-Operand
2100 1095 52% 0.25K 140 15 560K skbuff_head_cache
1920 819 42% 0.12K 64 30 256K cred_jar
1361 1356 99% 4.00K 1361 1 5444K size-4096
1230 628 51% 0.12K 41 30 164K pid
1008 907 89% 0.03K 9 112 36K Acpi-Namespace
959 496 51% 0.57K 137 7 548K inode_cache
891 554 62% 0.81K 99 9 792K signal_cache
888 115 12% 0.10K 24 37 96K ext4_prealloc_space
885 122 13% 0.06K 15 59 60K fs_cache
850 642 75% 1.45K 170 5 1360K task_struct
820 769 93% 0.19K 41 20 164K bio-0
666 550 82% 2.06K 222 3 1776K sighand_cache
576 211 36% 0.50K 72 8 288K task_xstate
529 379 71% 0.16K 23 23 92K cfq_queue
518 472 91% 2.00K 259 2 1036K size-2048
506 375 74% 0.16K 22 23 88K cfq_io_context
495 353 71% 0.33K 45 11 180K blkdev_requests
465 422 90% 0.25K 31 15 124K size-256
418 123 29% 0.69K 38 11 304K files_cache
360 207 57% 0.69K 72 5 288K sock_inode_cache
360 251 69% 0.12K 12 30 48K scsi_sense_cache
336 115 34% 0.08K 7 48 28K blkdev_ioc
285 236 82% 0.25K 19 15 76K scsi_cmd_cache
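buffer_head and ext4_inode_cache dominate the list, and both are normally
reclaimable cache metadata rather than leaked memory. One standard way to
sanity-check that (my own aside, not something suggested elsewhere in this
thread) is to drop the caches and watch whether those entries shrink:

    # flush the page cache plus dentries and inodes, then re-check;
    # reclaimable slabs should drop sharply, a leak will not
    sync
    echo 3 > /proc/sys/vm/drop_caches
    slabtop -o | head -10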
> BTW: 24 vCPUs in dom0 seems excessive, especially if you're using
> stubdoms. You may get better performance by dropping that to e.g. 2 or 3.
I will test that. Do you think it will make a difference in this case?
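For what it's worth, both ways of doing that are sketched below (assuming
the xm toolstack of this era; the runtime change takes effect immediately,
while the boot parameter persists across reboots):

    # at runtime, via the toolstack
    xm vcpu-set Domain-0 2

    # or permanently, by appending this to the Xen hypervisor line in the
    # bootloader config
    dom0_max_vcpus=2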
-John