From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 0/4] mm: memcontrol: memory.stat cost & correctness
Date: Fri, 12 Apr 2019 11:15:03 -0400	[thread overview]
Message-ID: <20190412151507.2769-1-hannes@cmpxchg.org> (raw)

The cgroup memory.stat file holds recursive statistics for the entire
subtree. The current implementation does this tree walk on-demand
whenever the file is read. This is giving us problems in production.
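
To illustrate, here is a minimal userspace sketch with made-up types
(the kernel itself walks struct mem_cgroup via its cgroup iterators):
the on-demand scheme boils down to re-walking the whole subtree and
re-summing every descendant's counters on each read.

/* Hypothetical stand-in for struct mem_cgroup and its counters. */
struct memcg {
	unsigned long	stat[4];	/* local counts: cache, rss, ... */
	struct memcg	*children;	/* first child */
	struct memcg	*sibling;	/* next sibling */
};

/* Recursively aggregate one counter over the entire subtree. */
static unsigned long memcg_stat_recursive(const struct memcg *memcg, int idx)
{
	unsigned long sum = memcg->stat[idx];
	const struct memcg *child;

	for (child = memcg->children; child; child = child->sibling)
		sum += memcg_stat_recursive(child, idx);

	return sum;
}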

1. The cost of aggregating the statistics on-demand is high. A lot of
system service cgroups are mostly idle and their stats don't change
between reads, yet every read still has to visit them. There are also
always some lazily-dying cgroups sitting around that are pinned by a
handful of remaining page cache; their stats don't change either, but
they too have to be walked on every read.

In an application in our fleet that periodically monitors memory.stat,
we have seen the aggregation consume up to 5% of CPU time.

2. When cgroups die and disappear from the cgroup tree, so do their
accumulated vm events. The result is that the event counters at
higher-level cgroups can go backwards and confuse some of our
automation, not to mention the people looking at the graphs over time.
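
A purely hypothetical illustration, with made-up numbers:

  parent's own events:                1000
  + dying child A (pinned cache):       50
  + child B:                           200
  ----------------------------------------
  events shown at the parent:         1250

  child A is finally freed, its 50 events vanish from the sum:

  events shown at the parent:         1200   (went backwards)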

To address both issues, this patch series changes the stat
implementation to spill counts upwards when the counters change.

The upward spilling is batched using the existing per-cpu cache. In a
sparse file stress test with 5-level cgroup nesting, the additional
cost of the flushing was negligible (a little under 1% of CPU at 100%
CPU utilization, compared to the 5% spent on reading memory.stat
during regular operation).
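
A minimal, single-threaded sketch of the idea, using hypothetical names
and an arbitrary batch size (the real code keeps the cache in per-cpu
counters of struct mem_cgroup and uses atomics): small deltas accumulate
in a local cache, and once they exceed the batch threshold the whole
delta is added to the cgroup and every one of its ancestors.

#define STAT_BATCH	32	/* arbitrary threshold for this sketch */

/* Hypothetical stand-in for struct mem_cgroup. */
struct memcg {
	long		stat[4];	/* hierarchical (recursive) counts */
	long		stat_cache[4];	/* stand-in for the per-cpu cache */
	struct memcg	*parent;
};

static void mod_memcg_stat(struct memcg *memcg, int idx, int nr)
{
	long x = memcg->stat_cache[idx] + nr;

	if (x > STAT_BATCH || x < -STAT_BATCH) {
		struct memcg *mi;

		/* Spill the batched delta up the ancestor chain. */
		for (mi = memcg; mi; mi = mi->parent)
			mi->stat[idx] += x;
		x = 0;
	}
	memcg->stat_cache[idx] = x;
}

/* Reading a counter is now a plain lookup, no subtree walk. */
static long memcg_stat_read(const struct memcg *memcg, int idx)
{
	return memcg->stat[idx];
}

Reads no longer scale with the size of the subtree, and once a delta
has been folded into the ancestors it stays there, so a dying cgroup
can no longer take its contribution with it.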

 include/linux/memcontrol.h |  96 +++++++-------
 mm/memcontrol.c            | 290 +++++++++++++++++++++++++++----------------
 mm/vmscan.c                |   4 +-
 mm/workingset.c            |   7 +-
 4 files changed, 234 insertions(+), 163 deletions(-)




Thread overview: 12+ messages
2019-04-12 15:15 Johannes Weiner [this message]
2019-04-12 15:15 ` [PATCH 1/4] mm: memcontrol: make cgroup stats and events query API explicitly local Johannes Weiner
2019-04-12 15:15 ` [PATCH 2/4] mm: memcontrol: move stat/event counting functions out-of-line Johannes Weiner
2019-04-12 15:15 ` [PATCH 3/4] mm: memcontrol: fix recursive statistics correctness & scalability Johannes Weiner
2019-04-12 19:55   ` Shakeel Butt
2019-04-12 20:10     ` Johannes Weiner
2019-04-12 20:38       ` Shakeel Butt
2019-04-12 20:15     ` Roman Gushchin
2019-04-12 20:50       ` Shakeel Butt
2019-04-12 15:15 ` [PATCH 4/4] mm: memcontrol: fix NUMA round-robin reclaim at intermediate level Johannes Weiner
2019-04-12 20:07 ` [PATCH 0/4] mm: memcontrol: memory.stat cost & correctness Shakeel Butt
2019-04-12 22:04 ` Roman Gushchin

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20190412151507.2769-1-hannes@cmpxchg.org \
    --to=hannes@cmpxchg.org \
    --cc=akpm@linux-foundation.org \
    --cc=cgroups@vger.kernel.org \
    --cc=kernel-team@fb.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link.

  Be sure your reply has a Subject: header at the top and a
  blank line before the message body.