From: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
To: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: David Rientjes <rientjes@google.com>,
Han Pingtian <hanpt@linux.vnet.ibm.com>,
penberg@kernel.org, linux-mm@kvack.org, paulus@samba.org,
Anton Blanchard <anton@samba.org>,
mpm@selenic.com, Christoph Lameter <cl@linux.com>,
linuxppc-dev@lists.ozlabs.org,
Wanpeng Li <liwanp@linux.vnet.ibm.com>
Subject: Re: [PATCH] slub: Don't throw away partial remote slabs if there is no local memory
Date: Thu, 6 Feb 2014 11:28:12 -0800 [thread overview]
Message-ID: <20140206192812.GC7845@linux.vnet.ibm.com> (raw)
In-Reply-To: <20140206185955.GA7845@linux.vnet.ibm.com>
[-- Attachment #1: Type: text/plain, Size: 8967 bytes --]
On 06.02.2014 [10:59:55 -0800], Nishanth Aravamudan wrote:
> On 06.02.2014 [17:04:18 +0900], Joonsoo Kim wrote:
> > On Wed, Feb 05, 2014 at 06:07:57PM -0800, Nishanth Aravamudan wrote:
> > > On 24.01.2014 [16:25:58 -0800], David Rientjes wrote:
> > > > On Fri, 24 Jan 2014, Nishanth Aravamudan wrote:
> > > >
> > > > > Thank you for clarifying and providing a test patch. I ran with this on
> > > > > the system showing the original problem, configured to have 15GB of
> > > > > memory.
> > > > >
> > > > > With your patch after boot:
> > > > >
> > > > > MemTotal: 15604736 kB
> > > > > MemFree: 8768192 kB
> > > > > Slab: 3882560 kB
> > > > > SReclaimable: 105408 kB
> > > > > SUnreclaim: 3777152 kB
> > > > >
> > > > > With Anton's patch after boot:
> > > > >
> > > > > MemTotal: 15604736 kB
> > > > > MemFree: 11195008 kB
> > > > > Slab: 1427968 kB
> > > > > SReclaimable: 109184 kB
> > > > > SUnreclaim: 1318784 kB
> > > > >
> > > > >
> > > > > I know that's fairly unscientific, but the numbers are reproducible.
> > > > >
> > > >
> > > > I don't think the goal of the discussion is to reduce the amount of slab
> > > > allocated, but rather get the most local slab memory possible by use of
> > > > kmalloc_node(). When a memoryless node is being passed to kmalloc_node(),
> > > > which is probably cpu_to_node() for a cpu bound to a node without memory,
> > > > my patch is allocating it on the most local node; Anton's patch is
> > > > allocating it on whatever happened to be the cpu slab.
> > > >
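(A minimal, purely illustrative sketch of the call pattern described above, not code
from this thread: cpu_to_node(), raw_smp_processor_id() and kmalloc_node() are the
existing kernel interfaces, and the allocation size is arbitrary.)

	/* The CPU is bound to node 0, which has no memory on this LPAR,
	 * so a memoryless node id gets passed down to the allocator. */
	int nid = cpu_to_node(raw_smp_processor_id());
	void *buf = kmalloc_node(4096, GFP_KERNEL, nid);
	/* SLUB must then decide whether to deactivate the current cpu slab
	 * or fall back to the nearest node that actually has memory. */

The quoted patch below handles exactly this case in the allocator fast path.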
> > > > > > diff --git a/mm/slub.c b/mm/slub.c
> > > > > > --- a/mm/slub.c
> > > > > > +++ b/mm/slub.c
> > > > > > @@ -2278,10 +2278,14 @@ redo:
> > > > > >
> > > > > > if (unlikely(!node_match(page, node))) {
> > > > > > stat(s, ALLOC_NODE_MISMATCH);
> > > > > > - deactivate_slab(s, page, c->freelist);
> > > > > > - c->page = NULL;
> > > > > > - c->freelist = NULL;
> > > > > > - goto new_slab;
> > > > > > + if (unlikely(!node_present_pages(node)))
> > > > > > + node = numa_mem_id();
> > > > > > + if (!node_match(page, node)) {
> > > > > > + deactivate_slab(s, page, c->freelist);
> > > > > > + c->page = NULL;
> > > > > > + c->freelist = NULL;
> > > > > > + goto new_slab;
> > > > > > + }
> > > > >
> > > > > Semantically, and please correct me if I'm wrong, this patch is saying
> > > > > if we have a memoryless node, we expect the page's locality to be that
> > > > > of numa_mem_id(), and we still deactivate the slab if that isn't true.
> > > > > Just wanting to make sure I understand the intent.
> > > > >
> > > >
> > > > Yeah, the default policy should be to fallback to local memory if the node
> > > > passed is memoryless.
> > > >
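(Again purely illustrative, not the actual patch: a minimal sketch of that fallback
policy. node_present_pages(), numa_mem_id() and NUMA_NO_NODE are existing kernel
definitions; the helper name here is made up.)

	/* Hypothetical helper: resolve a memoryless target node to the
	 * nearest node that has memory before doing the node_match()
	 * check, instead of deactivating the cpu slab outright. */
	static inline int effective_alloc_node(int node)
	{
		if (node != NUMA_NO_NODE && unlikely(!node_present_pages(node)))
			return numa_mem_id();	/* nearest node with memory */
		return node;
	}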
> > > > > What I find odd is that there are only 2 nodes on this system, node 0
> > > > > (empty) and node 1. So won't numa_mem_id() always be 1? And every page
> > > > > should be coming from node 1 (thus node_match() should always be true?)
> > > > >
> > > >
> > > > The nice thing about slub is its debugging ability, what is
> > > > /sys/kernel/slab/cache/objects showing in comparison between the two
> > > > patches?
> > >
> > > Ok, I finally got around to writing a script that compares the objects
> > > output from both kernels.
> > >
> > > log1 is with CONFIG_HAVE_MEMORYLESS_NODES on, my kthread locality patch
> > > and Joonsoo's patch.
> > >
> > > log2 is with CONFIG_HAVE_MEMORYLESS_NODES on, my kthread locality patch
> > > and Anton's patch.
> > >
> > > slab                     objects (log1)   objects (log2)   percent change
> > > -----------------------------------------------------------
> > > :t-0000104 71190 85680 20.353982 %
> > > UDP 4352 3392 22.058824 %
> > > inode_cache 54302 41923 22.796582 %
> > > fscache_cookie_jar 3276 2457 25.000000 %
> > > :t-0000896 438 292 33.333333 %
> > > :t-0000080 310401 195323 37.073978 %
> > > ext4_inode_cache 335 201 40.000000 %
> > > :t-0000192 89408 128898 44.168307 %
> > > :t-0000184 151300 81880 45.882353 %
> > > :t-0000512 49698 73648 48.191074 %
> > > :at-0000192 242867 120948 50.199904 %
> > > xfs_inode 34350 15221 55.688501 %
> > > :t-0016384 11005 17257 56.810541 %
> > > proc_inode_cache 103868 34717 66.575846 %
> > > tw_sock_TCP 768 256 66.666667 %
> > > :t-0004096 15240 25672 68.451444 %
> > > nfs_inode_cache 1008 315 68.750000 %
> > > :t-0001024 14528 24720 70.154185 %
> > > :t-0032768 655 1312 100.305344%
> > > :t-0002048 14242 30720 115.700042%
> > > :t-0000640 1020 2550 150.000000%
> > > :t-0008192 10005 27905 178.910545%
> > >
> > > FWIW, the configuration of this LPAR has changed slightly. It is now configured
> > > for a maximum of 400 CPUs, of which 200 are present. The result is that even with
> > > Joonsoo's patch (log1 above), we OOM pretty easily and Anton's slab usage
> > > script reports:
> > >
> > > slab                          mem used    objs active    slabs active
> > > ------------------------------------------------------------
> > > kmalloc-512 1182 MB 2.03% 100.00%
> > > kmalloc-192 1182 MB 1.38% 100.00%
> > > kmalloc-16384 966 MB 17.66% 100.00%
> > > kmalloc-4096 353 MB 15.92% 100.00%
> > > kmalloc-8192 259 MB 27.28% 100.00%
> > > kmalloc-32768 207 MB 9.86% 100.00%
> > >
> > > In comparison (log2 above):
> > >
> > > slab                          mem used    objs active    slabs active
> > > ------------------------------------------------------------
> > > kmalloc-16384 273 MB 98.76% 100.00%
> > > kmalloc-8192 225 MB 98.67% 100.00%
> > > pgtable-2^11 114 MB 100.00% 100.00%
> > > pgtable-2^12 109 MB 100.00% 100.00%
> > > kmalloc-4096 104 MB 98.59% 100.00%
> > >
> > > I appreciate all the help so far, if anyone has any ideas how best to
> > > proceed further, or what they'd like debugged more, I'm happy to get
> > > this fixed. We're hitting this on a couple of different systems and I'd
> > > like to find a good resolution to the problem.
> >
> > Hello,
> >
> > I don't have a memoryless system, so I need your help to debug this. :)
> > First, please let me know the node information for your system.
>
> [ 0.000000] Node 0 Memory:
> [ 0.000000] Node 1 Memory: 0x0-0x200000000
>
> [ 0.000000] On node 0 totalpages: 0
> [ 0.000000] On node 1 totalpages: 131072
> [ 0.000000] DMA zone: 112 pages used for memmap
> [ 0.000000] DMA zone: 0 pages reserved
> [ 0.000000] DMA zone: 131072 pages, LIFO batch:1
>
> [ 0.638391] Node 0 CPUs: 0-199
> [ 0.638394] Node 1 CPUs:
>
> Do you need anything else?
>
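For reference, a hedged sketch of how that topology looks to the kernel's node
predicates (node_state(), cpu_to_node() and numa_mem_id() are existing interfaces;
the commented values are what this LPAR should report given the dmesg output above):

	bool node0_has_mem = node_state(0, N_MEMORY);	/* false: node 0 is memoryless */
	bool node1_has_mem = node_state(1, N_MEMORY);	/* true: all memory is on node 1 */
	int home = cpu_to_node(0);	/* 0 for CPUs 0-199 */
	int near = numa_mem_id();	/* 1 on every CPU, when
					 * CONFIG_HAVE_MEMORYLESS_NODES is set */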
> > I'm preparing 3 more patches which are nearly the same as the previous patch,
> > but take a slightly different approach. Could you test them on your system?
> > I will send them soon.
>
> Test results are in the attached tarball [1].
>
> > And I think the same problem exists if CONFIG_SLAB is enabled. Could you
> > confirm that?
>
> I will test and let you know.
Ok, with your patches applied and CONFIG_SLAB enabled:

MemTotal:     8264640 kB
MemFree:      7119680 kB
Slab:          207232 kB
SReclaimable:   32896 kB
SUnreclaim:    174336 kB

For reference, the same kernel with CONFIG_SLUB:

MemTotal:     8264640 kB
MemFree:      4264000 kB
Slab:         3065408 kB
SReclaimable:  104704 kB
SUnreclaim:   2960704 kB

So CONFIG_SLAB is much better in this case.

Without your patches (but still with CONFIG_HAVE_MEMORYLESS_NODES, the kthread
locality patch and two other unrelated bugfix patches):

3.13.0-slub:

MemTotal:     8264704 kB
MemFree:      4404288 kB
Slab:         2963648 kB
SReclaimable:  106816 kB
SUnreclaim:   2856832 kB

3.13.0-slab:

MemTotal:     8264640 kB
MemFree:      7263168 kB
Slab:          206144 kB
SReclaimable:   32576 kB
SUnreclaim:    173568 kB

In case it's helpful, I've attached the slab usage summaries from both kernels.

Thanks,
Nish
[-- Attachment #2: slabusage.3.13.SLAB --]
[-- Type: text/plain, Size: 13115 bytes --]
slab                          mem used    objs active    slabs active
------------------------------------------------------------
thread_info 34 MB 96.33% 100.00%
kmalloc-1024 22 MB 97.44% 100.00%
task_struct 19 MB 95.15% 100.00%
kmalloc-16384 9 MB 98.05% 100.00%
inode_cache 8 MB 97.74% 100.00%
kmalloc-512 7 MB 89.56% 100.00%
dentry 7 MB 98.89% 100.00%
kmalloc-8192 6 MB 98.64% 100.00%
proc_inode_cache 6 MB 90.20% 100.00%
idr_layer_cache 4 MB 94.76% 100.00%
sighand_cache 4 MB 94.69% 100.00%
pgtable-2^12 3 MB 72.58% 100.00%
xfs_inode 3 MB 98.89% 100.00%
sysfs_dir_cache 3 MB 98.29% 100.00%
radix_tree_node 2 MB 97.19% 100.00%
kmalloc-32768 2 MB 97.96% 100.00%
kmalloc-4096 2 MB 97.68% 100.00%
filp 2 MB 20.71% 100.00%
signal_cache 2 MB 72.35% 100.00%
pgtable-2^10 2 MB 52.81% 100.00%
kmalloc-256 2 MB 85.56% 100.00%
kmalloc-2048 1 MB 84.95% 100.00%
shmem_inode_cache 1 MB 89.59% 100.00%
dtl 1 MB 98.77% 100.00%
kmalloc-192 1 MB 77.89% 100.00%
vm_area_struct 1 MB 76.80% 100.00%
cred_jar 1 MB 36.80% 100.00%
kmem_cache 1 MB 97.69% 100.00%
kmalloc-65536 0 MB 100.00% 100.00%
kmalloc-128 0 MB 87.07% 100.00%
buffer_head 0 MB 92.52% 100.00%
kmalloc-32 0 MB 92.89% 100.00%
anon_vma_chain 0 MB 47.46% 100.00%
sock_inode_cache 0 MB 65.45% 100.00%
kmalloc-64 0 MB 94.98% 100.00%
files_cache 0 MB 60.85% 100.00%
names_cache 0 MB 85.83% 100.00%
mm_struct 0 MB 22.06% 100.00%
xfs_buf 0 MB 91.50% 100.00%
UNIX 0 MB 37.90% 100.00%
task_delay_info 0 MB 66.76% 100.00%
skbuff_head_cache 0 MB 50.33% 100.00%
pid 0 MB 62.63% 100.00%
RAW 0 MB 92.59% 100.00%
kmalloc-96 0 MB 63.71% 100.00%
anon_vma 0 MB 52.25% 100.00%
xfs_ifork 0 MB 88.60% 100.00%
biovec-256 0 MB 75.56% 100.00%
TCP 0 MB 19.66% 100.00%
ftrace_event_field 0 MB 63.17% 100.00%
fs_cache 0 MB 24.30% 100.00%
file_lock_cache 0 MB 5.24% 100.00%
eventpoll_epi 0 MB 13.21% 100.00%
cifs_request 0 MB 71.43% 100.00%
cfq_queue 0 MB 26.90% 100.00%
blkdev_queue 0 MB 48.39% 100.00%
UDP 0 MB 12.50% 100.00%
xfs_trans 0 MB 4.33% 100.00%
xfs_log_ticket 0 MB 3.45% 100.00%
xfs_log_item_desc 0 MB 2.42% 100.00%
xfs_ioend 0 MB 84.65% 100.00%
xfs_ili 0 MB 66.20% 100.00%
xfs_buf_item 0 MB 7.94% 100.00%
xfs_btree_cur 0 MB 1.94% 100.00%
uid_cache 0 MB 1.61% 100.00%
tcp_bind_bucket 0 MB 2.18% 100.00%
taskstats 0 MB 3.55% 100.00%
sigqueue 0 MB 0.75% 100.00%
sgpool-8 0 MB 1.59% 100.00%
sgpool-64 0 MB 6.45% 100.00%
sgpool-32 0 MB 3.17% 100.00%
sgpool-16 0 MB 1.57% 100.00%
sgpool-128 0 MB 13.33% 100.00%
sd_ext_cdb 0 MB 0.11% 100.00%
scsi_sense_cache 0 MB 0.60% 100.00%
scsi_cmd_cache 0 MB 1.19% 100.00%
rpc_tasks 0 MB 3.17% 100.00%
rpc_inode_cache 0 MB 31.68% 100.00%
rpc_buffers 0 MB 25.81% 100.00%
revoke_table 0 MB 0.12% 100.00%
pool_workqueue 0 MB 4.37% 100.00%
numa_policy 0 MB 46.75% 100.00%
nsproxy 0 MB 0.16% 100.00%
nfs_write_data 0 MB 50.79% 100.00%
nfs_inode_cache 0 MB 27.69% 100.00%
nfs_commit_data 0 MB 4.76% 100.00%
nf_conntrack_c000000000cc9900 0 MB 45.22% 100.00%
mqueue_inode_cache 0 MB 1.39% 100.00%
mnt_cache 0 MB 53.57% 100.00%
key_jar 0 MB 5.56% 100.00%
jbd2_revoke_table_s 0 MB 0.06% 100.00%
ip_fib_trie 0 MB 0.73% 100.00%
ip_fib_alias 0 MB 0.71% 100.00%
ip_dst_cache 0 MB 30.16% 100.00%
inotify_inode_mark 0 MB 17.23% 100.00%
inet_peer_cache 0 MB 3.97% 100.00%
hugetlbfs_inode_cache 0 MB 2.59% 100.00%
ftrace_event_file 0 MB 92.58% 100.00%
fsnotify_event 0 MB 0.18% 100.00%
ext4_inode_cache 0 MB 4.35% 100.00%
ext4_groupinfo_4k 0 MB 8.55% 100.00%
ext4_extent_status 0 MB 0.07% 100.00%
ext3_inode_cache 0 MB 4.76% 100.00%
eventpoll_pwq 0 MB 15.20% 100.00%
dnotify_struct 0 MB 0.60% 100.00%
dnotify_mark 0 MB 2.08% 100.00%
dm_io 0 MB 2.28% 100.00%
cifs_small_rq 0 MB 23.62% 100.00%
cifs_mpx_ids 0 MB 0.60% 100.00%
cfq_io_cq 0 MB 26.42% 100.00%
blkdev_requests 0 MB 10.56% 100.00%
blkdev_ioc 0 MB 19.31% 100.00%
biovec-16 0 MB 1.19% 100.00%
bio-1 0 MB 13.49% 100.00%
bio-0 0 MB 1.59% 100.00%
bdev_cache 0 MB 52.78% 100.00%
xfs_mru_cache_elem 0 MB 0.00% 0.00%
xfs_icr 0 MB 0.00% 0.00%
xfs_efi_item 0 MB 0.00% 0.00%
xfs_efd_item 0 MB 0.00% 0.00%
xfs_da_state 0 MB 0.00% 0.00%
xfs_bmap_free_item 0 MB 0.00% 0.00%
xfrm_dst_cache 0 MB 0.00% 0.00%
tw_sock_TCP 0 MB 0.00% 0.00%
skbuff_fclone_cache 0 MB 0.00% 0.00%
shared_policy_node 0 MB 0.00% 0.00%
secpath_cache 0 MB 0.00% 0.00%
scsi_data_buffer 0 MB 0.00% 0.00%
revoke_record 0 MB 0.00% 0.00%
request_sock_TCP 0 MB 0.00% 0.00%
reiser_inode_cache 0 MB 0.00% 0.00%
posix_timers_cache 0 MB 0.00% 0.00%
pid_namespace 0 MB 0.00% 0.00%
nfsd_drc 0 MB 0.00% 0.00%
nfsd4_stateids 0 MB 0.00% 0.00%
nfsd4_openowners 0 MB 0.00% 0.00%
nfsd4_lockowners 0 MB 0.00% 0.00%
nfsd4_files 0 MB 0.00% 0.00%
nfsd4_delegations 0 MB 0.00% 0.00%
nfs_read_data 0 MB 0.00% 0.00%
nfs_page 0 MB 0.00% 0.00%
nfs_direct_cache 0 MB 0.00% 0.00%
nf_conntrack_expect 0 MB 0.00% 0.00%
net_namespace 0 MB 0.00% 0.00%
kmalloc-8388608 0 MB 0.00% 0.00%
kmalloc-524288 0 MB 0.00% 0.00%
kmalloc-4194304 0 MB 0.00% 0.00%
kmalloc-262144 0 MB 0.00% 0.00%
kmalloc-2097152 0 MB 0.00% 0.00%
kmalloc-16777216 0 MB 0.00% 0.00%
kmalloc-131072 0 MB 0.00% 0.00%
kmalloc-1048576 0 MB 0.00% 0.00%
kioctx 0 MB 0.00% 0.00%
kiocb 0 MB 0.00% 0.00%
kcopyd_job 0 MB 0.00% 0.00%
journal_head 0 MB 0.00% 0.00%
journal_handle 0 MB 0.00% 0.00%
jbd2_transaction_s 0 MB 0.00% 0.00%
jbd2_revoke_record_s 0 MB 0.00% 0.00%
jbd2_journal_head 0 MB 0.00% 0.00%
jbd2_journal_handle 0 MB 0.00% 0.00%
jbd2_inode 0 MB 0.00% 0.00%
jbd2_4k 0 MB 0.00% 0.00%
isofs_inode_cache 0 MB 0.00% 0.00%
io 0 MB 0.00% 0.00%
inotify_event_private_data 0 MB 0.00% 0.00%
fstrm_item 0 MB 0.00% 0.00%
fsnotify_event_holder 0 MB 0.00% 0.00%
flow_cache 0 MB 0.00% 0.00%
fat_inode_cache 0 MB 0.00% 0.00%
fat_cache 0 MB 0.00% 0.00%
fasync_cache 0 MB 0.00% 0.00%
ext4_xattr 0 MB 0.00% 0.00%
ext4_system_zone 0 MB 0.00% 0.00%
ext4_prealloc_space 0 MB 0.00% 0.00%
ext4_io_end 0 MB 0.00% 0.00%
ext4_free_data 0 MB 0.00% 0.00%
ext4_allocation_context 0 MB 0.00% 0.00%
ext3_xattr 0 MB 0.00% 0.00%
ext2_xattr 0 MB 0.00% 0.00%
ext2_inode_cache 0 MB 0.00% 0.00%
dma-kmalloc-96 0 MB 0.00% 0.00%
dma-kmalloc-8388608 0 MB 0.00% 0.00%
dma-kmalloc-8192 0 MB 0.00% 0.00%
dma-kmalloc-65536 0 MB 0.00% 0.00%
dma-kmalloc-64 0 MB 0.00% 0.00%
dma-kmalloc-524288 0 MB 0.00% 0.00%
dma-kmalloc-512 0 MB 0.00% 0.00%
dma-kmalloc-4194304 0 MB 0.00% 0.00%
dma-kmalloc-4096 0 MB 0.00% 0.00%
dma-kmalloc-32768 0 MB 0.00% 0.00%
dma-kmalloc-32 0 MB 0.00% 0.00%
dma-kmalloc-262144 0 MB 0.00% 0.00%
dma-kmalloc-256 0 MB 0.00% 0.00%
dma-kmalloc-2097152 0 MB 0.00% 0.00%
dma-kmalloc-2048 0 MB 0.00% 0.00%
dma-kmalloc-192 0 MB 0.00% 0.00%
dma-kmalloc-16777216 0 MB 0.00% 0.00%
dma-kmalloc-16384 0 MB 0.00% 0.00%
dma-kmalloc-131072 0 MB 0.00% 0.00%
dma-kmalloc-128 0 MB 0.00% 0.00%
dma-kmalloc-1048576 0 MB 0.00% 0.00%
dma-kmalloc-1024 0 MB 0.00% 0.00%
dm_uevent 0 MB 0.00% 0.00%
dm_rq_target_io 0 MB 0.00% 0.00%
dio 0 MB 0.00% 0.00%
cifs_inode_cache 0 MB 0.00% 0.00%
bsg_cmd 0 MB 0.00% 0.00%
biovec-64 0 MB 0.00% 0.00%
biovec-128 0 MB 0.00% 0.00%
UDP-Lite 0 MB 0.00% 0.00%
PING 0 MB 0.00% 0.00%
[-- Attachment #3: slabusage.3.13.SLUB --]
[-- Type: text/plain, Size: 7076 bytes --]
slab                          mem used    objs active    slabs active
------------------------------------------------------------
kmalloc-16384 1018 MB 14.09% 100.00%
task_struct 704 MB 17.20% 100.00%
pgtable-2^12 110 MB 100.00% 100.00%
kmalloc-8192 109 MB 49.21% 100.00%
pgtable-2^10 105 MB 100.00% 100.00%
kmalloc-65536 92 MB 100.00% 100.00%
kmalloc-512 83 MB 16.68% 100.00%
kmalloc-128 75 MB 17.55% 100.00%
kmalloc-4096 52 MB 97.30% 100.00%
kmalloc-16 38 MB 24.78% 100.00%
kmalloc-256 33 MB 99.09% 100.00%
kmalloc-1024 27 MB 60.45% 100.00%
sighand_cache 27 MB 100.00% 100.00%
idr_layer_cache 25 MB 100.00% 100.00%
kmalloc-2048 25 MB 97.59% 100.00%
dentry 23 MB 100.00% 100.00%
inode_cache 20 MB 100.00% 100.00%
proc_inode_cache 19 MB 100.00% 100.00%
sysfs_dir_cache 16 MB 100.00% 100.00%
vm_area_struct 14 MB 100.00% 100.00%
kmalloc-64 14 MB 97.79% 100.00%
kmalloc-192 13 MB 97.60% 100.00%
kmalloc-32 12 MB 97.56% 100.00%
anon_vma 12 MB 100.00% 100.00%
mm_struct 12 MB 100.00% 100.00%
sigqueue 12 MB 100.00% 100.00%
files_cache 12 MB 100.00% 100.00%
cfq_queue 11 MB 100.00% 100.00%
radix_tree_node 11 MB 100.00% 100.00%
kmalloc-96 10 MB 97.06% 100.00%
blkdev_requests 10 MB 100.00% 100.00%
xfs_inode 9 MB 100.00% 100.00%
shmem_inode_cache 9 MB 100.00% 100.00%
ext4_system_zone 9 MB 100.00% 100.00%
sock_inode_cache 9 MB 100.00% 100.00%
RAW 8 MB 100.00% 100.00%
kmalloc-8 8 MB 100.00% 100.00%
kmalloc-32768 8 MB 100.00% 100.00%
blkdev_ioc 7 MB 100.00% 100.00%
buffer_head 6 MB 100.00% 100.00%
xfs_da_state 6 MB 100.00% 100.00%
mnt_cache 6 MB 100.00% 100.00%
numa_policy 6 MB 100.00% 100.00%
dnotify_mark 4 MB 100.00% 100.00%
TCP 3 MB 100.00% 100.00%
cifs_request 3 MB 100.00% 100.00%
UDP 3 MB 100.00% 100.00%
xfs_ili 3 MB 100.00% 100.00%
xfs_btree_cur 3 MB 100.00% 100.00%
nf_conntrack_c000000000cb5480 2 MB 100.00% 100.00%
fsnotify_event_holder 1 MB 100.00% 100.00%
dm_rq_target_io 1 MB 100.00% 100.00%
bdev_cache 1 MB 100.00% 100.00%
kmem_cache 1 MB 89.09% 100.00%
blkdev_queue 0 MB 100.00% 100.00%
dio 0 MB 100.00% 100.00%
taskstats 0 MB 100.00% 100.00%
kmem_cache_node 0 MB 100.00% 100.00%
shared_policy_node 0 MB 100.00% 100.00%
rpc_inode_cache 0 MB 100.00% 100.00%
nfs_inode_cache 0 MB 100.00% 100.00%
revoke_table 0 MB 100.00% 100.00%
ip_fib_trie 0 MB 100.00% 100.00%
ext4_inode_cache 0 MB 100.00% 100.00%
hugetlbfs_inode_cache 0 MB 100.00% 100.00%
ext3_inode_cache 0 MB 100.00% 100.00%
tw_sock_TCP 0 MB 100.00% 100.00%
mqueue_inode_cache 0 MB 100.00% 100.00%
ext4_extent_status 0 MB 100.00% 100.00%
ext4_allocation_context 0 MB 100.00% 100.00%
xfs_icr 0 MB 0.00% 0.00%
revoke_record 0 MB 0.00% 0.00%
reiser_inode_cache 0 MB 0.00% 0.00%
posix_timers_cache 0 MB 0.00% 0.00%
pid_namespace 0 MB 0.00% 0.00%
nfsd4_openowners 0 MB 0.00% 0.00%
nfsd4_delegations 0 MB 0.00% 0.00%
nfs_direct_cache 0 MB 0.00% 0.00%
net_namespace 0 MB 0.00% 0.00%
kmalloc-131072 0 MB 0.00% 0.00%
kcopyd_job 0 MB 0.00% 0.00%
journal_head 0 MB 0.00% 0.00%
journal_handle 0 MB 0.00% 0.00%
jbd2_transaction_s 0 MB 0.00% 0.00%
jbd2_journal_handle 0 MB 0.00% 0.00%
isofs_inode_cache 0 MB 0.00% 0.00%
fat_inode_cache 0 MB 0.00% 0.00%
fat_cache 0 MB 0.00% 0.00%
ext4_io_end 0 MB 0.00% 0.00%
ext4_free_data 0 MB 0.00% 0.00%
ext3_xattr 0 MB 0.00% 0.00%
ext2_inode_cache 0 MB 0.00% 0.00%
dma-kmalloc-96 0 MB 0.00% 0.00%
dma-kmalloc-8192 0 MB 0.00% 0.00%
dma-kmalloc-8 0 MB 0.00% 0.00%
dma-kmalloc-65536 0 MB 0.00% 0.00%
dma-kmalloc-64 0 MB 0.00% 0.00%
dma-kmalloc-512 0 MB 0.00% 0.00%
dma-kmalloc-4096 0 MB 0.00% 0.00%
dma-kmalloc-32768 0 MB 0.00% 0.00%
dma-kmalloc-32 0 MB 0.00% 0.00%
dma-kmalloc-256 0 MB 0.00% 0.00%
dma-kmalloc-2048 0 MB 0.00% 0.00%
dma-kmalloc-192 0 MB 0.00% 0.00%
dma-kmalloc-16384 0 MB 0.00% 0.00%
dma-kmalloc-16 0 MB 0.00% 0.00%
dma-kmalloc-131072 0 MB 0.00% 0.00%
dma-kmalloc-128 0 MB 0.00% 0.00%
dma-kmalloc-1024 0 MB 0.00% 0.00%
dm_uevent 0 MB 0.00% 0.00%
cifs_inode_cache 0 MB 0.00% 0.00%
bsg_cmd 0 MB 0.00% 0.00%
UDP-Lite 0 MB 0.00% 0.00%