linux-kernel.vger.kernel.org archive mirror
From: Suleiman Souhlal <suleiman@google.com>
To: Roman Gushchin <guro@fb.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	Michal Hocko <mhocko@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Linux Kernel <linux-kernel@vger.kernel.org>,
	Kernel Team <Kernel-team@fb.com>,
	Shakeel Butt <shakeelb@google.com>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Waiman Long <longman@redhat.com>
Subject: Re: [PATCH RFC 00/14] The new slab memory controller
Date: Fri, 20 Sep 2019 06:10:11 +0900	[thread overview]
Message-ID: <CABCjUKB2BFF9s0RsYj4reUDbPrSkwxDo96Rmqk3tOc0_vo3Xag@mail.gmail.com> (raw)
In-Reply-To: <20190919162204.GA20035@castle.dhcp.thefacebook.com>

On Fri, Sep 20, 2019 at 1:22 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Thu, Sep 19, 2019 at 10:39:18PM +0900, Suleiman Souhlal wrote:
> > On Fri, Sep 6, 2019 at 6:57 AM Roman Gushchin <guro@fb.com> wrote:
> > > The patchset has been tested on a number of different workloads in our
> > > production. In all cases, it saved hefty amounts of memory:
> > > 1) web frontend, 650-700 MB, ~42% of slab memory
> > > 2) database cache, 750-800 MB, ~35% of slab memory
> > > 3) dns server, 700 MB, ~36% of slab memory
> >
> > Do these workloads cycle through a lot of different memcgs?
>
> Not really, those are just plain services managed by systemd.
> They aren't restarted too often, maybe several times per day at most.
>
> Also, there is nothing fb-specific. You can take any modern
> distribution (I've tried Fedora 30), boot it up and look at the
> amount of slab memory. The numbers are roughly the same.

Ah, ok.
These numbers are kind of surprising to me.
Do you know if the savings are similar if you use CONFIG_SLAB instead
of CONFIG_SLUB?
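For anyone wanting to reproduce the comparison, both the current slab
footprint and the allocator a kernel was built with can be checked from
userspace. A minimal sketch (the config file path assumes a typical
distro layout and may need adjusting):

```shell
# Overall slab footprint, split into reclaimable and unreclaimable parts.
grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo

# Which slab allocator this kernel was built with (CONFIG_SLAB vs
# CONFIG_SLUB); the config file location varies by distro.
grep -E '^CONFIG_SL[AU]B=y' "/boot/config-$(uname -r)" 2>/dev/null || true
```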

> > For workloads that don't, wouldn't this approach potentially use more
> > memory? For example, a workload where everything is in one or two
> > memcgs, and those memcgs last forever.
> >
>
> Yes, it's true, if you have a very small and fixed number of memory cgroups,
> in theory the new approach can take ~10% more memory.
>
> I don't think it's such a big problem though: it seems that the majority
> of cgroup users have a lot of them, and they are dynamically created and
> destroyed by systemd/kubernetes/whatever else.
>
> And if somebody has a very special setup with only 1-2 cgroups, arguably
> kernel memory accounting isn't such a big deal for them, so it can simply
> be disabled. Am I wrong, or do you have a real-life example?

No, I don't have any specific examples.
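
For reference, the switch Roman alludes to is the documented
`cgroup.memory=nokmem` boot parameter (see
Documentation/admin-guide/kernel-parameters.txt), which disables kernel
memory accounting entirely. A quick way to check whether a running
kernel was booted with it:

```shell
# Disabling kmem accounting requires booting with:
#     cgroup.memory=nokmem
# Report whether the running kernel was booted that way.
grep -o 'cgroup\.memory=[^ ]*' /proc/cmdline || echo "cgroup.memory not set"
```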

-- Suleiman


Thread overview: 36+ messages
2019-09-05 21:45 [PATCH RFC 00/14] The new slab memory controller Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 01/14] mm: memcg: subpage charging API Roman Gushchin
2019-09-16 12:56   ` Johannes Weiner
2019-09-17  2:27     ` Roman Gushchin
2019-09-17  8:50       ` Johannes Weiner
2019-09-17 18:33         ` Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 02/14] mm: memcg: introduce mem_cgroup_ptr Roman Gushchin
2019-09-05 22:34   ` Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 03/14] mm: vmstat: use s32 for vm_node_stat_diff in struct per_cpu_nodestat Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 04/14] mm: vmstat: convert slab vmstat counter to bytes Roman Gushchin
2019-09-16 12:38   ` Johannes Weiner
2019-09-17  2:08     ` Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 05/14] mm: memcg/slab: allocate space for memcg ownership data for non-root slabs Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 06/14] mm: slub: implement SLUB version of obj_to_index() Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 07/14] mm: memcg/slab: save memcg ownership data for non-root slab objects Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 08/14] mm: memcg: move memcg_kmem_bypass() to memcontrol.h Roman Gushchin
2019-09-05 21:45 ` [PATCH RFC 09/14] mm: memcg: introduce __mod_lruvec_memcg_state() Roman Gushchin
2019-09-05 22:37 ` [PATCH RFC 02/14] mm: memcg: introduce mem_cgroup_ptr Roman Gushchin
2019-09-17 19:48 ` [PATCH RFC 00/14] The new slab memory controller Waiman Long
2019-09-17 21:24   ` Roman Gushchin
2019-09-19 13:39 ` Suleiman Souhlal
2019-09-19 16:22   ` Roman Gushchin
2019-09-19 21:10     ` Suleiman Souhlal [this message]
2019-09-19 21:40       ` Roman Gushchin
2019-10-01 15:12 ` Michal Koutný
2019-10-02  2:09   ` Roman Gushchin
2019-10-02 13:00     ` Suleiman Souhlal
2019-10-03 10:47       ` Michal Koutný
2019-10-03 15:52         ` Roman Gushchin
2019-12-09  9:17 ` [PATCH 00/16] " Bharata B Rao
2019-12-09 11:56   ` Bharata B Rao
2019-12-09 18:04     ` Roman Gushchin
2019-12-10  6:23       ` Bharata B Rao
2019-12-10 18:05         ` Roman Gushchin
2020-01-13  8:47           ` Bharata B Rao
2020-01-13 15:31             ` Roman Gushchin
