selinux.vger.kernel.org archive mirror
* ebitmap_node ate over 40GB of memory
@ 2020-04-15 12:31 郭彬
  2020-04-15 13:44 ` Ondrej Mosnacek
  0 siblings, 1 reply; 3+ messages in thread
From: 郭彬 @ 2020-04-15 12:31 UTC (permalink / raw)
  To: selinux

I'm running a batch of CoreOS boxes; the lsb-release info is:

```
# cat /etc/lsb-release
DISTRIB_ID="Container Linux by CoreOS"
DISTRIB_RELEASE=2303.3.0
DISTRIB_CODENAME="Rhyolite"
DISTRIB_DESCRIPTION="Container Linux by CoreOS 2303.3.0 (Rhyolite)"
```

```
# uname -a
Linux cloud-worker-25 4.19.86-coreos #1 SMP Mon Dec 2 20:13:38 -00 2019 
x86_64 Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz GenuineIntel GNU/Linux
```
Recently, I found my VMs constantly being killed due to OOM, and after
digging into the problem, I realized that the kernel is leaking memory.

Here's my slabinfo:

```
# slabtop --sort c -o
  Active / Total Objects (% used)    : 739390584 / 740008326 (99.9%)
  Active / Total Slabs (% used)      : 11594275 / 11594275 (100.0%)
  Active / Total Caches (% used)     : 105 / 129 (81.4%)
  Active / Total Size (% used)       : 47121380.33K / 47376581.93K (99.5%)
  Minimum / Average / Maximum Object : 0.01K / 0.06K / 8.00K

   OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
734506368 734506368 100%    0.06K 11476662       64 45906648K ebitmap_node
328160  80830  24%    0.50K  10255       32    164080K kmalloc-512
  69442  36292  52%    2.00K   4341       16    138912K kmalloc-2048
  13148  12571  95%    7.50K   3287        4    105184K task_struct
  85359  75134  88%    1.05K   2857       30     91424K ext4_inode_cache
462336 459563  99%    0.19K  11008       42     88064K cred_jar
382641 323093  84%    0.19K   9125       42     73000K dentry
251968 249625  99%    0.25K   7874       32     62992K filp
  51488  41972  81%    1.00K   1609       32     51488K kmalloc-1024
  78846  77530  98%    0.59K   1461       54     46752K inode_cache
  69342  68580  98%    0.66K   1449       48     46368K proc_inode_cache
  70000  63132  90%    0.57K   2500       28     40000K radix_tree_node
  56902  56673  99%    0.69K   1237       46     39584K sock_inode_cache
  57504  54982  95%    0.66K   1198       48     38336K ovl_inode
   9056   9007  99%    4.00K   1132        8     36224K kmalloc-4096
  31756  31756 100%    1.00K    995       32     31840K UNIX
504192 501166  99%    0.06K   7878       64     31512K anon_vma_chain
  27660  27660 100%    1.06K    922       30     29504K signal_cache
336950 335179  99%    0.09K   7325       46     29300K anon_vma
  26400  26400 100%    1.06K    880       30     28160K mm_struct
  38042  38042 100%    0.69K    827       46     26464K files_cache
  12315  12298  99%    2.06K    821       15     26272K sighand_cache
   3001   3001 100%    8.00K    763        4     24416K kmalloc-8192
  74336  73888  99%    0.25K   2323       32     18584K skbuff_head_cache
186102  63644  34%    0.09K   4431       42     17724K kmalloc-96
  24104  22322  92%    0.69K    524       46     16768K shmem_inode_cache
527360 479425  90%    0.03K   4120      128     16480K kmalloc-32
140439 137220  97%    0.10K   3601       39     14404K buffer_head
  10075  10075 100%    1.25K    403       25     12896K UDPv6
183808 158004  85%    0.06K   2872       64     11488K kmalloc-64
  60102  47918  79%    0.19K   1431       42     11448K kmalloc-192
  84704  84704 100%    0.12K   2647       32     10588K pid
  72450  72243  99%    0.13K   2415       30      9660K kernfs_node_cache
131152 131152 100%    0.07K   2342       56      9368K Acpi-Operand
   4020   4020 100%    2.12K    268       15      8576K TCP
   6936   6936 100%    0.94K    204       34      6528K RAW
118320 107640  90%    0.04K   1160      102      4640K numa_policy
  11340  11191  98%    0.38K    270       42      4320K mnt_cache
   1750   1750 100%    2.25K    125       14      4000K TCPv6
   3472   3360  96%    1.12K    124       28      3968K RAWv6
  14976  14893  99%    0.25K    468       32      3744K kmalloc-256
  29728  25895  87%    0.12K    929       32      3716K kmalloc-128
  86190  86190 100%    0.04K    845      102      3380K pde_opener
  44240  44240 100%    0.07K    790       56      3160K eventpoll_pwq
   7392   7222  97%    0.38K    176       42      2816K kmem_cache
329728 329728 100%    0.01K    644      512      2576K kmalloc-8
    320    300  93%    8.00K     80        4      2560K biovec-max
142080 136981  96%    0.02K    555      256      2220K kmalloc-16
  13248  13248 100%    0.12K    414       32      1656K secpath_cache
   6432   5952  92%    0.25K    201       32      1608K pool_workqueue
  10540  10540 100%    0.12K    310       34      1240K jbd2_journal_head
   2400   2400 100%    0.50K     75       32      1200K skbuff_fclone_cache
  29886  29886 100%    0.04K    293      102      1172K ext4_extent_status
   3672   3431  93%    0.31K     72       51      1152K nf_conntrack
  17664  17472  98%    0.06K    276       64      1104K ext4_io_end
   1518   1518 100%    0.69K     33       46      1056K bio-2
   1344   1344 100%    0.75K     32       42      1024K task_group
    256    256 100%    4.00K     32        8      1024K names_cache
   1024   1024 100%    1.00K     32       32      1024K biovec-64
   1632   1632 100%    0.62K     32       51      1024K dio
   4352   4352 100%    0.23K    128       34      1024K tw_sock_TCPv6
   1120   1120 100%    0.91K     32       35      1024K sw_flow
   5292   5292 100%    0.19K    126       42      1008K proc_dir_entry
   6748   6748 100%    0.14K    241       28       964K ext4_groupinfo_4k
   3536   3536 100%    0.23K    104       34       832K tw_sock_TCP
    286    286 100%    2.75K     26       11       832K iommu_domain
    816    816 100%    0.94K     24       34       768K mqueue_inode_cache
  14016  14016 100%    0.05K    192       73       768K mbcache
    819    377  46%    0.81K     21       39       672K bdev_cache
  24480  24480 100%    0.02K    144      170       576K avtab_node
   2940   2940 100%    0.19K     70       42       560K dmaengine-unmap-16
     85     85 100%    5.50K     17        5       544K net_namespace
   1568   1568 100%    0.32K     32       49       512K taskstats
   1696   1696 100%    0.30K     32       53       512K request_sock_TCP
   8128   8128 100%    0.06K    127       64       508K kmem_cache_node
   1073   1073 100%    0.43K     29       37       464K uts_namespace
   3420   3360  98%    0.13K    114       30       456K dm_bufio_buffer-4
   1404   1404 100%    0.30K     27       52       432K blkdev_requests
    169     98  57%    2.40K     13       13       416K request_queue
   3328   3328 100%    0.12K    104       32       416K scsi_sense_cache
   5049   5049 100%    0.08K     99       51       396K inotify_inode_mark
  23040  23040 100%    0.02K     90      256       360K selinux_file_security
  10240  10240 100%    0.03K     80      128       320K fscrypt_info
   1120   1120 100%    0.25K     35       32       280K dquot
   1734   1734 100%    0.16K     34       51       272K sigqueue
   5525   5525 100%    0.05K     65       85       260K ftrace_event_field
   1280   1280 100%    0.20K     32       40       256K file_lock_cache
   1088   1088 100%    0.23K     32       34       256K posix_timers_cache
   1092   1092 100%    0.20K     28       39       224K ip4-frags
   1564   1564 100%    0.09K     34       46       136K trace_event_file
   1287   1287 100%    0.10K     33       39       132K blkdev_ioc
   2336   2336 100%    0.05K     32       73       128K Acpi-Parse
   4096   4096 100%    0.03K     32      128       128K avc_xperms_data
    208    157  75%    0.61K      4       52       128K hugetlbfs_inode_cache
   2720   2720 100%    0.05K     32       85       128K fscrypt_ctx
   1024   1024 100%    0.12K     32       32       128K ext4_allocation_context
     88     88 100%    0.72K      2       44        64K fat_inode_cache
     30     30 100%    1.06K      1       30        32K dmaengine-unmap-128
     15     15 100%    2.06K      1       15        32K dmaengine-unmap-256
     16     16 100%    2.00K      1       16        32K biovec-128
   1024   1024 100%    0.03K      8      128        32K dnotify_struct
      8      8 100%    4.00K      1        8        32K sgpool-128
    306    306 100%    0.08K      6       51        24K Acpi-State
     32     32 100%    0.50K      1       32        16K dma-kmalloc-512
    512    512 100%    0.02K      2      256         8K jbd2_revoke_table_s
      0      0   0%    0.09K      0       42         0K dma-kmalloc-96
      0      0   0%    0.19K      0       42         0K dma-kmalloc-192
      0      0   0%    0.01K      0      512         0K dma-kmalloc-8
      0      0   0%    0.02K      0      256         0K dma-kmalloc-16
      0      0   0%    0.03K      0      128         0K dma-kmalloc-32
      0      0   0%    0.06K      0       64         0K dma-kmalloc-64
      0      0   0%    0.12K      0       32         0K dma-kmalloc-128
      0      0   0%    0.25K      0       32         0K dma-kmalloc-256
      0      0   0%    1.00K      0       32         0K dma-kmalloc-1024
      0      0   0%    2.00K      0       16         0K dma-kmalloc-2048
      0      0   0%    4.00K      0        8         0K dma-kmalloc-4096
      0      0   0%    8.00K      0        4         0K dma-kmalloc-8192
      0      0   0%    0.12K      0       34         0K iint_cache
      0      0   0%    0.45K      0       35         0K user_namespace
      0      0   0%    0.94K      0       34         0K PING
      0      0   0%    0.31K      0       51         0K xfrm_dst_cache
      0      0   0%    0.12K      0       34         0K cfq_io_cq
      0      0   0%    0.30K      0       53         0K request_sock_TCPv6
      0      0   0%    1.12K      0       28         0K PINGv6
      0      0   0%    2.57K      0       12         0K dm_uevent
      0      0   0%    3.23K      0        9         0K kcopyd_job
      0      0   0%    0.04K      0      102         0K fat_cache
      0      0   0%    0.21K      0       37         0K nf_conntrack_expect
      0      0   0%    0.09K      0       42         0K nf_conncount_rb
```
You can see that `ebitmap_node` is over 40GB and still growing. The
only thing I can do is reboot the OS, but there are tens of these boxes
with lots of workloads running on them, so I can't just reboot whenever
I want. I've run out of options. Any help?
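
A rough loop like the following, which just reads /proc/slabinfo
periodically, is enough to watch the cache grow (a sketch only, not a
fix; the cache name must match what slabtop shows):

```
# log the ebitmap_node line from /proc/slabinfo once a minute,
# with a timestamp, to estimate the growth rate
while true; do
    echo -n "$(date -Is) "
    grep '^ebitmap_node ' /proc/slabinfo
    sleep 60
done
```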


* Re: ebitmap_node ate over 40GB of memory
  2020-04-15 12:31 ebitmap_node ate over 40GB of memory 郭彬
@ 2020-04-15 13:44 ` Ondrej Mosnacek
       [not found]   ` <CAEbUFv6bED87bk-bAyUoHkvRPxpP+vS0kzNUa88qZmeB=2O7YA@mail.gmail.com>
  0 siblings, 1 reply; 3+ messages in thread
From: Ondrej Mosnacek @ 2020-04-15 13:44 UTC (permalink / raw)
  To: 郭彬; +Cc: SElinux list

On Wed, Apr 15, 2020 at 2:31 PM 郭彬 <anole1949@gmail.com> wrote:
> I'm running a batch of CoreOS boxes; the lsb-release info is:
>
> ```
> # cat /etc/lsb-release
> DISTRIB_ID="Container Linux by CoreOS"
> DISTRIB_RELEASE=2303.3.0
> DISTRIB_CODENAME="Rhyolite"
> DISTRIB_DESCRIPTION="Container Linux by CoreOS 2303.3.0 (Rhyolite)"
> ```
>
> ```
> # uname -a
> Linux cloud-worker-25 4.19.86-coreos #1 SMP Mon Dec 2 20:13:38 -00 2019
> x86_64 Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz GenuineIntel GNU/Linux
> ```
> Recently, I found my VMs constantly being killed due to OOM, and after
> digging into the problem, I realized that the kernel is leaking memory.
>
> Here's my slabinfo:
>
> ```
> # slabtop --sort c -o
>   Active / Total Objects (% used)    : 739390584 / 740008326 (99.9%)
>   Active / Total Slabs (% used)      : 11594275 / 11594275 (100.0%)
>   Active / Total Caches (% used)     : 105 / 129 (81.4%)
>   Active / Total Size (% used)       : 47121380.33K / 47376581.93K (99.5%)
>   Minimum / Average / Maximum Object : 0.01K / 0.06K / 8.00K
>
>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> 734506368 734506368 100%    0.06K 11476662       64 45906648K ebitmap_node
[...]
> You can see that `ebitmap_node` is over 40GB and still growing. The
> only thing I can do is reboot the OS, but there are tens of these boxes
> with lots of workloads running on them, so I can't just reboot whenever
> I want. I've run out of options. Any help?

Pasting in relevant comments/questions from [1]:

2. Your kernel seems to be quite behind the current upstream and is
probably maintained by your distribution (seems to be derived from the
4.19 stable branch). Can you reproduce the issue on a more recent
kernel (at least 5.5+)? If you can't or the recent kernel doesn't
exhibit the issue, then you should report this to your distribution.
3. Was this working fine with some earlier kernel? If you can
determine the last working version, then it could help us identify the
cause and/or the fix.

On top of that, I realized one more thing - the kernel merges the
caches for objects of the same size - so any cache with object size 64
bytes will be accounted under 'ebitmap_node' here. For example, on my
system there are several caches that all alias to the common 64-byte
cache:
# ls -l /sys/kernel/slab/ | grep -- '-> :0000064'
lrwxrwxrwx. 1 root root 0 apr 15 15:26 dmaengine-unmap-2 -> :0000064
lrwxrwxrwx. 1 root root 0 apr 15 15:26 ebitmap_node -> :0000064
lrwxrwxrwx. 1 root root 0 apr 15 15:26 fanotify_event -> :0000064
lrwxrwxrwx. 1 root root 0 apr 15 15:26 io -> :0000064
lrwxrwxrwx. 1 root root 0 apr 15 15:26 iommu_iova -> :0000064
lrwxrwxrwx. 1 root root 0 apr 15 15:26 jbd2_inode -> :0000064
lrwxrwxrwx. 1 root root 0 apr 15 15:26 ksm_rmap_item -> :0000064
lrwxrwxrwx. 1 root root 0 apr 15 15:26 ksm_stable_node -> :0000064
lrwxrwxrwx. 1 root root 0 apr 15 15:26 vmap_area -> :0000064

On your kernel you might get a different list, but any of the caches
you get could be the culprit, ebitmap_node is just one of the
possibilities. You can disable this merging by adding "slab_nomerge"
to your kernel boot command-line. That will allow you to identify
which cache is really the source of the leak.
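
For illustration, on a typical GRUB-based distro that would mean
appending the parameter in /etc/default/grub (just a sketch; Container
Linux manages the kernel command line differently, so adapt it to your
setup):
GRUB_CMDLINE_LINUX="... slab_nomerge"
Then regenerate the config and reboot:
# grub2-mkconfig -o /boot/grub2/grub.cfg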

[1] https://github.com/SELinuxProject/selinux/issues/220#issuecomment-613944748

-- 
Ondrej Mosnacek <omosnace at redhat dot com>
Software Engineer, Security Technologies
Red Hat, Inc.



* Re: ebitmap_node ate over 40GB of memory
       [not found]   ` <CAEbUFv6bED87bk-bAyUoHkvRPxpP+vS0kzNUa88qZmeB=2O7YA@mail.gmail.com>
@ 2020-04-23  8:25     ` Ondrej Mosnacek
  0 siblings, 0 replies; 3+ messages in thread
From: Ondrej Mosnacek @ 2020-04-23  8:25 UTC (permalink / raw)
  To: Bin; +Cc: SElinux list

On Thu, Apr 23, 2020 at 9:50 AM Bin <anole1949@gmail.com> wrote:
> Dear Ondrej:
>
> I've added "slab_nomerge" to the kernel parameters, and after observing for a couple of days, I got this:
>
>
>  Active / Total Objects (% used)    : 83818306 / 84191607 (99.6%)
>  Active / Total Slabs (% used)      : 1336293 / 1336293 (100.0%)
>  Active / Total Caches (% used)     : 152 / 217 (70.0%)
>  Active / Total Size (% used)       : 5828768.08K / 5996848.72K (97.2%)
>  Minimum / Average / Maximum Object : 0.01K / 0.07K / 23.25K
>
>   OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> 80253888 80253888 100%    0.06K 1253967       64   5015868K iommu_iova

Well, that means the leak is caused by the "iommu_iova" kmem cache and
has nothing to do with SELinux. I'd try your luck on the iommu mailing
list: iommu@lists.linux-foundation.org
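
As a quick sanity check (illustrative only, not a fix), you can confirm
that an IOMMU is actually active on those machines by looking for
DMAR/IOMMU messages in the kernel log:
# dmesg | grep -i -e dmar -e iommu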

> 489472 489123  99%    0.03K   3824      128     15296K kmalloc-32
> 297444 271112  91%    0.19K   7082       42     56656K dentry
> 254400 252784  99%    0.06K   3975       64     15900K anon_vma_chain
> 222528  39255  17%    0.50K   6954       32    111264K kmalloc-512
> 202482 201814  99%    0.19K   4821       42     38568K vm_area_struct
> 200192 200192 100%    0.01K    391      512      1564K kmalloc-8
> 170528 169359  99%    0.25K   5329       32     42632K filp
> 158144 153508  97%    0.06K   2471       64      9884K kmalloc-64
> 149914 149365  99%    0.09K   3259       46     13036K anon_vma
> 146640 143123  97%    0.10K   3760       39     15040K buffer_head
> 130368  32791  25%    0.09K   3104       42     12416K kmalloc-96
> 129752 129752 100%    0.07K   2317       56      9268K Acpi-Operand
> 105468 105106  99%    0.04K   1034      102      4136K selinux_inode_security
>  73080  73080 100%    0.13K   2436       30      9744K kernfs_node_cache
>  72360  70261  97%    0.59K   1340       54     42880K inode_cache
>  71040  71040 100%    0.12K   2220       32      8880K eventpoll_epi
>  68096  59262  87%    0.02K    266      256      1064K kmalloc-16
>  53652  53652 100%    0.04K    526      102      2104K pde_opener
>  50496  31654  62%    2.00K   3156       16    100992K kmalloc-2048
>  46242  46242 100%    0.19K   1101       42      8808K cred_jar
>  44496  43013  96%    0.66K    927       48     29664K proc_inode_cache
>  44352  44352 100%    0.06K    693       64      2772K task_delay_info
>  43516  43471  99%    0.69K    946       46     30272K sock_inode_cache
>  37856  27626  72%    1.00K   1183       32     37856K kmalloc-1024
>  36736  36736 100%    0.07K    656       56      2624K eventpoll_pwq
>  34076  31282  91%    0.57K   1217       28     19472K radix_tree_node
>  33660  30528  90%    1.05K   1122       30     35904K ext4_inode_cache
>  32760  30959  94%    0.19K    780       42      6240K kmalloc-192
>  32028  32028 100%    0.04K    314      102      1256K ext4_extent_status
>  30048  30048 100%    0.25K    939       32      7512K skbuff_head_cache
>  28736  28736 100%    0.06K    449       64      1796K fs_cache
>  24702  24702 100%    0.69K    537       46     17184K files_cache
>  23808  23808 100%    0.66K    496       48     15872K ovl_inode
>  23104  22945  99%    0.12K    722       32      2888K kmalloc-128
>  22724  21307  93%    0.69K    494       46     15808K shmem_inode_cache
>  21472  21472 100%    0.12K    671       32      2684K seq_file
>  19904  19904 100%    1.00K    622       32     19904K UNIX
>  17340  17340 100%    1.06K    578       30     18496K mm_struct
>  15980  15980 100%    0.02K     94      170       376K avtab_node
>  14070  14070 100%    1.06K    469       30     15008K signal_cache
>  13248  13248 100%    0.12K    414       32      1656K pid
>  12128  11777  97%    0.25K    379       32      3032K kmalloc-256
>  11008  11008 100%    0.02K     43      256       172K selinux_file_security
>  10812  10812 100%    0.04K    106      102       424K Acpi-Namespace
>
> Does this info ring any bells for you?
>
> Ondrej Mosnacek <omosnace@redhat.com> 于2020年4月15日周三 下午9:44写道:
>>
>> On Wed, Apr 15, 2020 at 2:31 PM 郭彬 <anole1949@gmail.com> wrote:
>> > I'm running a batch of CoreOS boxes; the lsb-release info is:
>> >
>> > ```
>> > # cat /etc/lsb-release
>> > DISTRIB_ID="Container Linux by CoreOS"
>> > DISTRIB_RELEASE=2303.3.0
>> > DISTRIB_CODENAME="Rhyolite"
>> > DISTRIB_DESCRIPTION="Container Linux by CoreOS 2303.3.0 (Rhyolite)"
>> > ```
>> >
>> > ```
>> > # uname -a
>> > Linux cloud-worker-25 4.19.86-coreos #1 SMP Mon Dec 2 20:13:38 -00 2019
>> > x86_64 Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz GenuineIntel GNU/Linux
>> > ```
>> > Recently, I found my VMs constantly being killed due to OOM, and after
>> > digging into the problem, I realized that the kernel is leaking memory.
>> >
>> > Here's my slabinfo:
>> >
>> > ```
>> > # slabtop --sort c -o
>> >   Active / Total Objects (% used)    : 739390584 / 740008326 (99.9%)
>> >   Active / Total Slabs (% used)      : 11594275 / 11594275 (100.0%)
>> >   Active / Total Caches (% used)     : 105 / 129 (81.4%)
>> >   Active / Total Size (% used)       : 47121380.33K / 47376581.93K (99.5%)
>> >   Minimum / Average / Maximum Object : 0.01K / 0.06K / 8.00K
>> >
>> >    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>> > 734506368 734506368 100%    0.06K 11476662       64 45906648K ebitmap_node
>> [...]
>> > You can see that `ebitmap_node` is over 40GB and still growing. The
>> > only thing I can do is reboot the OS, but there are tens of these boxes
>> > with lots of workloads running on them, so I can't just reboot whenever
>> > I want. I've run out of options. Any help?
>>
>> Pasting in relevant comments/questions from [1]:
>>
>> 2. Your kernel seems to be quite behind the current upstream and is
>> probably maintained by your distribution (seems to be derived from the
>> 4.19 stable branch). Can you reproduce the issue on a more recent
>> kernel (at least 5.5+)? If you can't or the recent kernel doesn't
>> exhibit the issue, then you should report this to your distribution.
>> 3. Was this working fine with some earlier kernel? If you can
>> determine the last working version, then it could help us identify the
>> cause and/or the fix.
>>
>> On top of that, I realized one more thing - the kernel merges the
>> caches for objects of the same size - so any cache with object size 64
>> bytes will be accounted under 'ebitmap_node' here. For example, on my
>> system there are several caches that all alias to the common 64-byte
>> cache:
>> # ls -l /sys/kernel/slab/ | grep -- '-> :0000064'
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 dmaengine-unmap-2 -> :0000064
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 ebitmap_node -> :0000064
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 fanotify_event -> :0000064
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 io -> :0000064
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 iommu_iova -> :0000064
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 jbd2_inode -> :0000064
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 ksm_rmap_item -> :0000064
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 ksm_stable_node -> :0000064
>> lrwxrwxrwx. 1 root root 0 apr 15 15:26 vmap_area -> :0000064
>>
>> On your kernel you might get a different list, but any of the caches
>> you get could be the culprit, ebitmap_node is just one of the
>> possibilities. You can disable this merging by adding "slab_nomerge"
>> to your kernel boot command-line. That will allow you to identify
>> which cache is really the source of the leak.
>>
>> [1] https://github.com/SELinuxProject/selinux/issues/220#issuecomment-613944748
>>
>> --
>> Ondrej Mosnacek <omosnace at redhat dot com>
>> Software Engineer, Security Technologies
>> Red Hat, Inc.
>>


-- 
Ondrej Mosnacek <omosnace at redhat dot com>
Software Engineer, Security Technologies
Red Hat, Inc.



end of thread

Thread overview: 3+ messages
2020-04-15 12:31 ebitmap_node ate over 40GB of memory 郭彬
2020-04-15 13:44 ` Ondrej Mosnacek
     [not found]   ` <CAEbUFv6bED87bk-bAyUoHkvRPxpP+vS0kzNUa88qZmeB=2O7YA@mail.gmail.com>
2020-04-23  8:25     ` Ondrej Mosnacek
