From: Bharata B Rao <bharata@linux.ibm.com>
To: Roman Gushchin <guro@fb.com>
Cc: Yang Shi <shy828301@gmail.com>,
Kirill Tkhai <ktkhai@virtuozzo.com>,
Shakeel Butt <shakeelb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Andrew Morton <akpm@linux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Linux MM <linux-mm@kvack.org>,
Linux FS-devel Mailing List <linux-fsdevel@vger.kernel.org>,
aneesh.kumar@linux.ibm.com
Subject: Re: High kmalloc-32 slab cache consumption with 10k containers
Date: Tue, 6 Apr 2021 15:43:31 +0530 [thread overview]
Message-ID: <20210406101331.GB1354243@in.ibm.com> (raw)
In-Reply-To: <YGtZNLhXjv8RegTK@carbon.dhcp.thefacebook.com>
On Mon, Apr 05, 2021 at 11:38:44AM -0700, Roman Gushchin wrote:
> > > @@ -534,7 +521,17 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
> > > spin_lock_irq(&nlru->lock);
> > >
> > > src = list_lru_from_memcg_idx(nlru, src_idx);
> > > + if (!src)
> > > + goto out;
> > > +
> > > dst = list_lru_from_memcg_idx(nlru, dst_idx);
> > > + if (!dst) {
> > > + /* TODO: Use __GFP_NOFAIL? */
> > > + dst = kmalloc(sizeof(struct list_lru_one), GFP_ATOMIC);
> > > + init_one_lru(dst);
> > > + memcg_lrus = rcu_dereference_protected(nlru->memcg_lrus, true);
> > > + memcg_lrus->lru[dst_idx] = dst;
> > > + }
>
> Hm, can't we just reuse src as dst in this case?
> We don't need src anymore and we're basically allocating dst to move all data from src.
Yes, we can do that and it would be much simpler.
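Something like the following, perhaps (only a rough sketch of the drain path
under the lazy scheme where memcg_lrus->lru[] slots may be NULL; the names are
taken from the quoted hunk and from the existing mm/list_lru.c code, and it is
untested):

static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
				      int src_idx, struct mem_cgroup *dst_memcg)
{
	struct list_lru_node *nlru = &lru->node[nid];
	int dst_idx = dst_memcg->kmemcg_id;
	struct list_lru_one *src, *dst;
	struct list_lru_memcg *memcg_lrus;

	spin_lock_irq(&nlru->lock);

	memcg_lrus = rcu_dereference_protected(nlru->memcg_lrus, true);

	src = list_lru_from_memcg_idx(nlru, src_idx);
	if (!src)
		goto out;	/* nothing was ever added under src_idx */

	dst = list_lru_from_memcg_idx(nlru, dst_idx);
	if (!dst) {
		/* dst was never allocated: hand the src list over wholesale */
		memcg_lrus->lru[dst_idx] = src;
		memcg_lrus->lru[src_idx] = NULL;
		if (src->nr_items)
			memcg_set_shrinker_bit(dst_memcg, nid,
					       lru_shrinker_id(lru));
		goto out;
	}

	/* both lists exist: splice src into dst as before */
	list_splice_init(&src->list, &dst->list);
	if (src->nr_items) {
		dst->nr_items += src->nr_items;
		memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
		src->nr_items = 0;
	}
out:
	spin_unlock_irq(&nlru->lock);
}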
> If not, we can allocate up to the root memcg every time to avoid having
> !dst case and fiddle with __GFP_NOFAIL.
>
> Otherwise I like the idea and I think it might reduce the memory overhead
> especially on (very) big machines.
Yes. However, I will have to check whether the callers of list_lru_add() can
handle the failure that this approach introduces when the kmalloc fails.
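To make the concern concrete, with lazy allocation the add path would look
roughly like this (an illustrative sketch only, not the actual patch; the lazy
branch and its GFP_ATOMIC allocation are hypothetical, and shrinker-bit
handling is omitted):

bool list_lru_add(struct list_lru *lru, struct list_head *item)
{
	int nid = page_to_nid(virt_to_page(item));
	struct list_lru_node *nlru = &lru->node[nid];
	struct mem_cgroup *memcg;
	struct list_lru_one *l;

	spin_lock(&nlru->lock);
	if (list_empty(item)) {
		l = list_lru_from_kmem(nlru, item, &memcg);
		if (!l) {
			/*
			 * Lazily allocate the per-memcg list on first use.
			 * GFP_ATOMIC since nlru->lock is held, so this
			 * allocation can fail.
			 */
			l = kmalloc(sizeof(*l), GFP_ATOMIC);
			if (!l) {
				spin_unlock(&nlru->lock);
				return false;	/* callers must handle this */
			}
			init_one_lru(l);
			/* install l into memcg_lrus->lru[] for this memcg */
		}
		list_add_tail(item, &l->list);
		l->nr_items++;
		nlru->nr_items++;
		spin_unlock(&nlru->lock);
		return true;
	}
	spin_unlock(&nlru->lock);
	return false;
}

Today a false return from list_lru_add() only means the item was already on a
list; whether every caller can also tolerate a false return caused by an
allocation failure is what needs auditing.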
Regards,
Bharata.