linux-fsdevel.vger.kernel.org archive mirror
From: Shakeel Butt <shakeelb@google.com>
To: Michal Hocko <mhocko@suse.com>
Cc: Bharata B Rao <bharata@linux.ibm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: Re: High kmalloc-32 slab cache consumption with 10k containers
Date: Wed, 7 Apr 2021 08:42:31 -0700
Message-ID: <CALvZod79PDmPLYAXm=EQDrn8mQfE9aQL+Mgaai6zu=uqucbMAQ@mail.gmail.com>
In-Reply-To: <YG2diKMPNSK2cMyG@dhcp22.suse.cz>

On Wed, Apr 7, 2021 at 4:55 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 05-04-21 11:18:48, Bharata B Rao wrote:
> > Hi,
> >
> > When running 10000 (more-or-less-empty) containers on a bare-metal Power9
> > server (160 CPUs, 2 NUMA nodes, 256G memory), memory consumption is seen
> > to increase quite a lot (around 172G) while the containers are running.
> > Most of it comes from slab (149G), and within slab, the majority comes
> > from the kmalloc-32 cache (102G).
>
> Is this 10k-cgroup setup a testing environment, or does anybody really
> use it in production? I would be really curious to hear how it behaves
> when those containers are not idle. E.g. global memory reclaim iterating
> over 10k memcgs will likely be very visible. I do remember playing with
> similar setups a few years back, and the overhead was very high.
> --

I can tell you about our environment. A couple of thousand memcgs (~2k)
is very normal on our machines, since a machine can be running 100+ jobs
(and each job can manage its own sub-memcgs). However, each job can also
have a high number of mounts: there is no local disk, and each package
of a job is remotely mounted (it is a bit more complicated than that).
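
To make the multiplication concrete, here is a rough back-of-the-envelope
(illustrative numbers, not from our fleet, and assuming the kmalloc-32
growth is dominated by the per-memcg, per-node list_lru_one objects that
memcg-aware superblock LRUs allocate, which is my reading of the report):

  # mounts * lrus-per-sb (dentry+inode) * memcgs * NUMA nodes * bytes each
  $ echo $(( 1000 * 2 * 2000 * 2 * 32 ))
  256000000

So even a modest 1000 mounts crossed with 2000 memcgs cost a quarter
gigabyte of kmalloc-32 for empty list heads alone, before a single dentry
or inode is actually cached.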

We do have issues with global memory reclaim, but proactive reclaim
mostly turns global reclaim into a tail issue (and at the tail it often
does create havoc).
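
For reference, here is one way proactive reclaim can be driven from
userspace on cgroup v2 (a sketch only; the cgroup path is made up and
this is not necessarily what we run in production):

  CG=/sys/fs/cgroup/job0                  # hypothetical job cgroup
  CUR=$(cat "$CG/memory.current")
  # Temporarily lower memory.high below current usage so the kernel
  # reclaims ~128M from this cgroup...
  echo $(( CUR - 128*1024*1024 )) > "$CG/memory.high"
  sleep 1
  # ...then lift the limit again so the job is not throttled afterwards.
  echo max > "$CG/memory.high"

Repeating something like this periodically keeps steady reclaim pressure
on each job, so the global reclaim path rarely has to do the heavy
lifting itself.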


Thread overview: 23+ messages
2021-04-05  5:48 High kmalloc-32 slab cache consumption with 10k containers Bharata B Rao
2021-04-05  8:22 ` kernel test robot
2021-04-05 18:08 ` Yang Shi
2021-04-05 18:38   ` Roman Gushchin
2021-04-06 10:13     ` Bharata B Rao
2021-04-06 10:05   ` Bharata B Rao
2021-04-07  1:39     ` Yang Shi
2021-04-06 22:28 ` Dave Chinner
2021-04-07  5:05   ` Bharata B Rao
2021-04-07 10:07     ` Kirill Tkhai
2021-04-07 11:47       ` Bharata B Rao
2021-04-07 12:49         ` Kirill Tkhai
2021-04-07 13:57   ` Christian Brauner
2021-04-15  5:23   ` Bharata B Rao
2021-04-15  6:54     ` Michal Hocko
2021-04-15  7:21       ` Bharata B Rao
2021-04-16  4:44   ` Bharata B Rao
2021-04-19  1:23     ` Dave Chinner
2021-04-07 11:54 ` Michal Hocko
2021-04-07 13:32   ` Christian Brauner
2021-04-07 13:43   ` Bharata B Rao
2021-04-07 13:57     ` Michal Hocko
2021-04-07 15:42   ` Shakeel Butt [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='CALvZod79PDmPLYAXm=EQDrn8mQfE9aQL+Mgaai6zu=uqucbMAQ@mail.gmail.com' \
    --to=shakeelb@google.com \
    --cc=akpm@linux-foundation.org \
    --cc=aneesh.kumar@linux.ibm.com \
    --cc=bharata@linux.ibm.com \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mhocko@suse.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.