* iommu_iova slab eats too much memory
@ 2020-04-23  9:11 Bin
  2020-04-23  9:14 ` Bin
  0 siblings, 1 reply; 16+ messages in thread
From: Bin @ 2020-04-23  9:11 UTC (permalink / raw)
  To: iommu


Hey, guys:

I'm running a batch of CoreOS boxes; here's their lsb-release info:

```
# cat /etc/lsb-release
DISTRIB_ID="Container Linux by CoreOS"
DISTRIB_RELEASE=2303.3.0
DISTRIB_CODENAME="Rhyolite"
DISTRIB_DESCRIPTION="Container Linux by CoreOS 2303.3.0 (Rhyolite)"
```

```
# uname -a
Linux cloud-worker-25 4.19.86-coreos #1 SMP Mon Dec 2 20:13:38 -00 2019
x86_64 Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz GenuineIntel GNU/Linux
```
Recently, I found my VMs constantly being killed due to OOM, and after
digging into the problem, I realized that the kernel is leaking memory.
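
To confirm it was a leak rather than normal caching, it helps to watch the
cache grow over time. A minimal sketch (the 60-second interval is arbitrary,
and reading /proc/slabinfo needs root):

```
# Sample the iommu_iova cache once a minute; an object count that only
# ever grows, even while the guests are idle, points at a leak rather
# than a cache that would be reclaimed under memory pressure.
while true; do
    date
    grep iommu_iova /proc/slabinfo
    sleep 60
done
```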

Here's my slabtop output:

```
 Active / Total Objects (% used)    : 83818306 / 84191607 (99.6%)
 Active / Total Slabs (% used)      : 1336293 / 1336293 (100.0%)
 Active / Total Caches (% used)     : 152 / 217 (70.0%)
 Active / Total Size (% used)       : 5828768.08K / 5996848.72K (97.2%)
 Minimum / Average / Maximum Object : 0.01K / 0.07K / 23.25K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
80253888 80253888 100%    0.06K 1253967       64   5015868K iommu_iova
489472 489123  99%    0.03K   3824      128     15296K kmalloc-32
297444 271112  91%    0.19K   7082       42     56656K dentry
254400 252784  99%    0.06K   3975       64     15900K anon_vma_chain
222528  39255  17%    0.50K   6954       32    111264K kmalloc-512
202482 201814  99%    0.19K   4821       42     38568K vm_area_struct
200192 200192 100%    0.01K    391      512      1564K kmalloc-8
170528 169359  99%    0.25K   5329       32     42632K filp
158144 153508  97%    0.06K   2471       64      9884K kmalloc-64
149914 149365  99%    0.09K   3259       46     13036K anon_vma
146640 143123  97%    0.10K   3760       39     15040K buffer_head
130368  32791  25%    0.09K   3104       42     12416K kmalloc-96
129752 129752 100%    0.07K   2317       56      9268K Acpi-Operand
105468 105106  99%    0.04K   1034      102      4136K selinux_inode_security
 73080  73080 100%    0.13K   2436       30      9744K kernfs_node_cache
 72360  70261  97%    0.59K   1340       54     42880K inode_cache
 71040  71040 100%    0.12K   2220       32      8880K eventpoll_epi
 68096  59262  87%    0.02K    266      256      1064K kmalloc-16
 53652  53652 100%    0.04K    526      102      2104K pde_opener
 50496  31654  62%    2.00K   3156       16    100992K kmalloc-2048
 46242  46242 100%    0.19K   1101       42      8808K cred_jar
 44496  43013  96%    0.66K    927       48     29664K proc_inode_cache
 44352  44352 100%    0.06K    693       64      2772K task_delay_info
 43516  43471  99%    0.69K    946       46     30272K sock_inode_cache
 37856  27626  72%    1.00K   1183       32     37856K kmalloc-1024
 36736  36736 100%    0.07K    656       56      2624K eventpoll_pwq
 34076  31282  91%    0.57K   1217       28     19472K radix_tree_node
 33660  30528  90%    1.05K   1122       30     35904K ext4_inode_cache
 32760  30959  94%    0.19K    780       42      6240K kmalloc-192
 32028  32028 100%    0.04K    314      102      1256K ext4_extent_status
 30048  30048 100%    0.25K    939       32      7512K skbuff_head_cache
 28736  28736 100%    0.06K    449       64      1796K fs_cache
 24702  24702 100%    0.69K    537       46     17184K files_cache
 23808  23808 100%    0.66K    496       48     15872K ovl_inode
 23104  22945  99%    0.12K    722       32      2888K kmalloc-128
 22724  21307  93%    0.69K    494       46     15808K shmem_inode_cache
 21472  21472 100%    0.12K    671       32      2684K seq_file
 19904  19904 100%    1.00K    622       32     19904K UNIX
 17340  17340 100%    1.06K    578       30     18496K mm_struct
 15980  15980 100%    0.02K     94      170       376K avtab_node
 14070  14070 100%    1.06K    469       30     15008K signal_cache
 13248  13248 100%    0.12K    414       32      1656K pid
 12128  11777  97%    0.25K    379       32      3032K kmalloc-256
 11008  11008 100%    0.02K     43      256       172K selinux_file_security
 10812  10812 100%    0.04K    106      102       424K Acpi-Namespace
```

This information shows that 'iommu_iova' is the top memory consumer.
To optimize the network performance of our OpenStack virtual machines, I
enabled the VT-d feature in the BIOS and the SR-IOV feature of the Intel
82599 10G NIC; I'm assuming this is the root cause of the issue.
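
For reference, here's how the IOMMU state can be double-checked on such a
box (a generic sketch, nothing specific to my setup):

```
# Confirm VT-d is enabled and translating (DMAR is Intel's IOMMU):
dmesg | grep -i -e DMAR -e IOMMU

# List the PCI devices sitting behind the IOMMU; each symlink under
# /sys/kernel/iommu_groups is one device in an IOMMU group.
find /sys/kernel/iommu_groups/ -type l
```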

Is there anything I can do to fix it?
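
One knob I'm aware of but have not verified myself is IOMMU passthrough
mode (an assumption on my part, not a confirmed fix for this leak): with
it, host-driver DMA is identity-mapped and skips the IOVA allocator, while
devices assigned to guests are still translated.

```
# Check the currently active boot parameters:
cat /proc/cmdline

# Candidate parameters for the next boot (untested here):
#   intel_iommu=on iommu=pt
```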


Thread overview: 16+ messages
2020-04-23  9:11 iommu_iova slab eats too much memory Bin
2020-04-23  9:14 ` Bin
2020-04-24  0:40   ` Bin
2020-04-24 11:20     ` Robin Murphy
2020-04-24 12:00       ` Bin
2020-04-24 12:06         ` Bin
2020-04-24 12:15           ` Robin Murphy
2020-04-24 13:20             ` Bin
2020-04-24 16:30               ` Robin Murphy
2020-04-24 17:49                 ` John Garry
2020-04-25 13:38                   ` Bin
2020-04-28  9:17                 ` Salil Mehta
2020-04-29  4:13                   ` Bin
2020-04-29 13:37                   ` Salil Mehta
2020-04-29 13:59                     ` Robin Murphy
2020-04-29 15:00                       ` Salil Mehta
