From: Michal Hocko <mhocko@suse.com>
To: Bharata B Rao <bharata@linux.ibm.com>
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	aneesh.kumar@linux.ibm.com
Subject: Re: High kmalloc-32 slab cache consumption with 10k containers
Date: Wed, 7 Apr 2021 15:57:02 +0200
Message-ID: <YG26LlJJTuZ5UrJ5@dhcp22.suse.cz>
In-Reply-To: <20210407134342.GA1386511@in.ibm.com>

On Wed 07-04-21 19:13:42, Bharata B Rao wrote:
> On Wed, Apr 07, 2021 at 01:54:48PM +0200, Michal Hocko wrote:
> > On Mon 05-04-21 11:18:48, Bharata B Rao wrote:
> > > Hi,
> > > 
> > > When running 10000 (more-or-less-empty-)containers on a bare-metal Power9
> > > server (160 CPUs, 2 NUMA nodes, 256G memory), it is seen that memory
> > > consumption increases quite a lot (around 172G) when the containers are
> > > running. Most of it comes from slab (149G), and within slab, the majority
> > > of it comes from the kmalloc-32 cache (102G).
> > 
> > Is this 10k cgroups a testing environment, or does anybody really use
> > that in production? I would be really curious to hear how that behaves
> > when those containers are not idle. E.g. global memory reclaim iterating
> > over 10k memcgs will likely be very visible. I do remember playing with
> > similar setups a few years back, and the overhead was very high.
> 
> This 10k-container setup is only a test scenario that we are looking at.
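
For reference, a per-cache figure like the kmalloc-32 number quoted above
can be cross-checked directly from /proc/slabinfo; a minimal sketch,
assuming the standard slabinfo columns and root access:

  # bytes held by kmalloc-32 = num_objs (column 3) * objsize (column 4)
  awk '$1 == "kmalloc-32" { printf "%.1f GiB\n", $3 * $4 / 2^30 }' /proc/slabinfo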

OK, this is good to know. I would definitely recommend looking at the
runtime aspects of such a large-scale deployment before optimizing for
memory footprint. I do agree that the latter is an interesting topic on
its own, but I do not expect such a deployment on small machines, so the
overhead shouldn't be a showstopper. I would definitely be interested
to hear about the runtime overhead; I do expect some interesting
findings there.
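
The memcg side of such a deployment can be approximated without spawning
full containers, which makes it easier to poke at things like the reclaim
iteration cost in isolation. A minimal sketch, assuming cgroup v2 mounted
at /sys/fs/cgroup with the memory controller enabled for the root's
children (real containers add namespaces, mounts and processes on top):

  # create 10k empty memory cgroups as a rough stand-in for idle containers
  for i in $(seq 1 10000); do
      mkdir "/sys/fs/cgroup/test_$i"
  done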

Btw. I do expect that the memory controller will not be the only one
deployed, right?
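
With cgroup v2 that would typically mean enabling the other controllers
for the subtree as well; a minimal sketch, assuming a standard v2 mount:

  # enable memory, pids and cpu for all child cgroups (cgroup v2)
  echo "+memory +pids +cpu" > /sys/fs/cgroup/cgroup.subtree_control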

-- 
Michal Hocko
SUSE Labs


Thread overview: 27+ messages
2021-04-05  5:48 High kmalloc-32 slab cache consumption with 10k containers Bharata B Rao
2021-04-05  8:22 ` kernel test robot
2021-04-05 18:08 ` Yang Shi
2021-04-05 18:38   ` Roman Gushchin
2021-04-06 10:13     ` Bharata B Rao
2021-04-06 10:05   ` Bharata B Rao
2021-04-07  1:39     ` Yang Shi
2021-04-06 22:28 ` Dave Chinner
2021-04-07  5:05   ` Bharata B Rao
2021-04-07 10:07     ` Kirill Tkhai
2021-04-07 11:47       ` Bharata B Rao
2021-04-07 12:49         ` Kirill Tkhai
2021-04-07 13:57   ` Christian Brauner
2021-04-15  5:23   ` Bharata B Rao
2021-04-15  6:54     ` Michal Hocko
2021-04-15  7:21       ` Bharata B Rao
2021-04-16  4:44   ` Bharata B Rao
2021-04-19  1:23     ` Dave Chinner
2021-04-07 11:54 ` Michal Hocko
2021-04-07 13:32   ` Christian Brauner
2021-04-07 13:43   ` Bharata B Rao
2021-04-07 13:57     ` Michal Hocko [this message]
2021-04-07 15:42   ` Shakeel Butt
