linux-kernel.vger.kernel.org archive mirror
From: Nikolay Borisov <kernel@kyup.com>
To: Christoph Lameter <cl@linux.com>
Cc: "Linux-Kernel@Vger. Kernel. Org" <linux-kernel@vger.kernel.org>
Subject: Re: Unbounded growth of slab caches and how to shrink them
Date: Wed, 29 Jun 2016 17:17:51 +0300	[thread overview]
Message-ID: <5773D88F.1030005@kyup.com> (raw)
In-Reply-To: <alpine.DEB.2.20.1606290854500.14924@east.gentwo.org>



On 06/29/2016 05:00 PM, Christoph Lameter wrote:
> On Wed, 29 Jun 2016, Nikolay Borisov wrote:
> 
>> I've observed a rather strange unbounded growth of the kmalloc-192
>> slab cache:
>>
>> OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>> 711124869 411527215   3%    0.19K 16934908       42 135479264K kmalloc-192
>>
>> Essentially the cache is around 130 GB, yet only 3 percent of it is
>> being used. I'd like to shrink the overall size of the cache. How can
>> that be achieved? I tried echoing '1' to
>> /sys/kernel/slab/kmalloc-192/shrink but nothing changed.
> 
> Ok, this probably means that most slabs hold just one or a few objects?
> Some workloads can result in situations like that. Can you enable
> debugging and get a list of functions where these objects are allocated?

Right, so what debugging do you have in mind, concretely? So far what I
did was reboot the machine with SLUB merging disabled, since quite a
lot of slabs were being merged into that particular one:

:t-0000192   <- cred_jar pid_3 inet_peer_cache request_sock_TCPv6
kmalloc-192 file_lock_cache bio-0 ip_dst_cache key_jar

I strongly suspect it is one of the networking caches or bio-0, since
the others seem generally not heavily used.
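Concretely, here is a sketch of what I plan to try next, assuming SLUB
and that the sysfs layout on 3.12 matches what current kernels expose
(the cache path and the 20-line cutoff are just illustrative):

```shell
# Sketch: locate the call sites allocating into kmalloc-192.
# Requires booting with object tracking enabled, e.g.:
#   slub_nomerge slub_debug=U,kmalloc-192
# (slub_nomerge prevents the cache aliasing shown above, so the stats
# are attributed to the real cache; U records alloc/free call sites.)
cache=/sys/kernel/slab/kmalloc-192
if [ -r "$cache/alloc_calls" ]; then
    # Most frequent allocation call sites first.
    sort -rn "$cache/alloc_calls" | head -n 20
else
    echo "slub_debug=U not active for $cache" >&2
fi
```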

> 
>> This is on 3.12, which is a rather old kernel, but I still believe it
>> is entirely possible for someone to flood a machine with network
>> requests, causing a lot of objects to be allocated and a particular
>> slab cache to grow. Later, when the request flood stops, the cache
>> would be almost empty, yet the memory wouldn't be usable for anything
>> other than satisfying allocations from that same cache.
> 
> True. Long known problem and all my attempts to facilitate a solution here
> did not go anywhere. The essential solution would require objects being
> movable or removable from the sparsely allocated page frames. And this
> goes way beyond my subsystem.
> 
> If you can figure out which subsystem allocates or frees these objects
> (through the call traces) then we may find a knob in the subsystem to
> clear those out once in a while.
> 
> 
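For reference, the numbers in the slabinfo line quoted earlier are
internally consistent, assuming kmalloc-192 uses order-1 (8 KiB) slabs,
which is the typical SLUB choice for 42 x 192-byte objects. A quick
sanity check:

```shell
# Back-of-the-envelope check of the slabinfo line quoted earlier.
# Assumption: order-1 (8 KiB) slabs, since 42 objects * 192 bytes
# = 8064 bytes, which fits in 8192.
slabs=16934908
slab_kib=8
obj_per_slab=42
cache_kib=$((slabs * slab_kib))
echo "cache size: ${cache_kib} KiB"   # 135479264 KiB, matching the CACHE SIZE column
echo "object capacity: $((slabs * obj_per_slab)) objects"
```

This is also consistent with the shrink attempt having no visible
effect: as far as I understand, shrinking can only release slabs that
are completely empty, and with live objects scattered one or two per
slab almost none of the ~17 million slabs qualify.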


Thread overview: 4+ messages
2016-06-29 10:34 Unbounded growth of slab caches and how to shrink them Nikolay Borisov
2016-06-29 14:00 ` Christoph Lameter
2016-06-29 14:17   ` Nikolay Borisov [this message]
2016-06-29 14:34     ` Christoph Lameter
