linux-mm.kvack.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shakeel Butt <shakeelb@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux MM <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Kernel Team <kernel-team@fb.com>
Subject: Re: xarray breaks thrashing detection and cgroup isolation
Date: Thu, 23 May 2019 12:41:30 -0700
Message-ID: <20190523194130.GA4598@bombadil.infradead.org>
In-Reply-To: <20190523192117.GA5723@cmpxchg.org>

On Thu, May 23, 2019 at 03:21:17PM -0400, Johannes Weiner wrote:
> On Thu, May 23, 2019 at 12:00:32PM -0700, Matthew Wilcox wrote:
> > On Thu, May 23, 2019 at 11:49:41AM -0700, Shakeel Butt wrote:
> > > On Thu, May 23, 2019 at 11:37 AM Matthew Wilcox <willy@infradead.org> wrote:
> > > > On Thu, May 23, 2019 at 01:43:49PM -0400, Johannes Weiner wrote:
> > > > > I noticed that recent upstream kernels don't account the xarray nodes
> > > > > of the page cache to the allocating cgroup, like we used to do for the
> > > > > radix tree nodes.
> > > > >
> > > > > This results in broken isolation for cgrouped apps, allowing them to
> > > > > escape their containment and harm other cgroups and the system with an
> > > > > excessive build-up of nonresident information.
> > > > >
> > > > > It also breaks thrashing/refault detection because the page cache
> > > > > lives in a different domain than the xarray nodes, and so the shadow
> > > > > shrinker can reclaim nonresident information way too early when there
> > > > > isn't much cache in the root cgroup.
> > > > >
> > > > > I'm not quite sure how to fix this, since the xarray code doesn't seem
> > > > > to have per-tree gfp flags anymore like the radix tree did. We cannot
> > > > > add SLAB_ACCOUNT to the radix_tree_node_cachep slab cache. And the
> > > > > xarray api doesn't seem to really support gfp flags, either (xas_nomem
> > > > > does, but the optimistic internal allocations have fixed gfp flags).
> > > >
> > > > Would it be a problem to always add __GFP_ACCOUNT to the fixed flags?
> > > > I don't really understand cgroups.
> > 
> > > Also some users of xarray may not want __GFP_ACCOUNT. That's the
> > > reason we had __GFP_ACCOUNT for page cache instead of hard coding it
> > > in radix tree.
> > 
> > This is what I don't understand -- why would someone not want
> > __GFP_ACCOUNT?  For a shared resource?  But the page cache is a shared
> > resource.  So what is a good example of a time when an allocation should
> > _not_ be accounted to the cgroup?
> 
> We used to cgroup-account every slab charge to cgroups per default,
> until we changed it to a whitelist behavior:
> 
> commit b2a209ffa605994cbe3c259c8584ba1576d3310c
> Author: Vladimir Davydov <vdavydov@virtuozzo.com>
> Date:   Thu Jan 14 15:18:05 2016 -0800
> 
>     Revert "kernfs: do not account ino_ida allocations to memcg"
>     
>     Currently, all kmem allocations (namely every kmem_cache_alloc, kmalloc,
>     alloc_kmem_pages call) are accounted to memory cgroup automatically.
>     Callers have to explicitly opt out if they don't want/need accounting
>     for some reason.  Such a design decision leads to several problems:
>     
>      - kmalloc users are highly sensitive to failures, many of them
>        implicitly rely on the fact that kmalloc never fails, while memcg
>        makes failures quite plausible.

That doesn't apply here.  The optimistic allocation under the spinlock
is allowed to fail; when it does, we fall back to xas_nomem() with the
caller's specified GFP flags, which may or may not include
__GFP_ACCOUNT.
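
For reference, the store path looks roughly like this -- a minimal
sketch of the documented advanced-API retry pattern; the mapping,
index, page and gfp names just stand in for whatever the caller has:

void store_entry(struct address_space *mapping, pgoff_t index,
		 struct page *page, gfp_t gfp)
{
	XA_STATE(xas, &mapping->i_pages, index);

	do {
		xas_lock_irq(&xas);
		/* Optimistic node allocation under the lock uses the
		 * XArray's fixed internal gfp flags and may fail. */
		xas_store(&xas, page);
		xas_unlock_irq(&xas);
		/* On -ENOMEM, xas_nomem() allocates a node with the
		 * caller's gfp (which may or may not include
		 * __GFP_ACCOUNT) and we retry. */
	} while (xas_nomem(&xas, gfp));

	/* A real caller would check xas_error(&xas) here. */
}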

>      - A lot of objects are shared among different containers by design.
>        Accounting such objects to one of containers is just unfair.
>        Moreover, it might lead to pinning a dead memcg along with its kmem
>        caches, which aren't tiny, which might result in noticeable increase
>        in memory consumption for no apparent reason in the long run.

These objects come from the radix_tree_node slab cache, and we'll
already be accounting the page cache's nodes to the cgroup, so also
accounting the other XArray users' nodes isn't going to make that
problem worse.
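
Concretely, "always add __GFP_ACCOUNT to the fixed flags" would mean
something like the below in the internal node allocation.  This is a
rough sketch only, not the actual lib/xarray.c code, and the exact
fixed flags shown are an assumption:

/* Rough sketch -- illustrative, not the real xas_alloc(). */
static void *xas_alloc_node_sketch(struct xa_state *xas)
{
	/* The optimistic allocation uses fixed flags today
	 * (exact value assumed here) ... */
	gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;

	/* ... so "always account" would simply be: */
	gfp |= __GFP_ACCOUNT;

	return kmem_cache_alloc(radix_tree_node_cachep, gfp);
}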

>      - There are tons of short-lived objects. Accounting them to memcg will
>        only result in slight noise and won't change the overall picture, but
>        we still have to pay accounting overhead.

XArray nodes are generally not short-lived objects.



Thread overview: 10+ messages
2019-05-23 17:43 xarray breaks thrashing detection and cgroup isolation Johannes Weiner
2019-05-23 18:37 ` Matthew Wilcox
2019-05-23 18:49   ` Shakeel Butt
2019-05-23 19:00     ` Matthew Wilcox
2019-05-23 19:21       ` Johannes Weiner
2019-05-23 19:41         ` Matthew Wilcox [this message]
2019-05-23 19:59           ` Johannes Weiner
2019-05-24 16:11             ` Matthew Wilcox
2019-05-24 17:06               ` Johannes Weiner
2019-05-24 17:18                 ` Shakeel Butt
