linux-mm.kvack.org archive mirror
* [LSF/MM TOPIC] Memory cgroups, whether you like it or not
@ 2020-02-05 18:34 Tim Chen
  2020-02-14 10:45 ` [Lsf-pc] " Michal Hocko
  0 siblings, 1 reply; 7+ messages in thread
From: Tim Chen @ 2020-02-05 18:34 UTC (permalink / raw)
  To: lsf-pc, linux-mm; +Cc: Dave Hansen, Dan Williams, Huang Ying

Topic: Memory cgroups, whether you like it or not

1. Memory cgroup counters scalability

Recently, benchmark teams at Intel were running some bare-metal
benchmarks.  To our great surprise, we saw lots of memcg activity in
the profiles.  When we asked the benchmark team, they did not even
realize they were using memory cgroups.  They were fond of running all
their benchmarks in containers that just happened to use memory cgroups
by default.  What was previously a problem only for memory cgroup users
is quickly becoming a problem for everyone.

There are memory cgroup counters, read in page management paths, that
scale poorly.  These counters are per-CPU based and must be summed over
all CPUs to get the overall value for the memory cgroup in the
lruvec_page_state_local() function.  This leads to scalability problems
on systems with large numbers of CPUs.  For example, we've seen 14+%
of kernel CPU time consumed in snapshot_refaults().  We also
encountered a similar issue recently when computing the lru_size [1].

We would like to do some brainstorming to see if there are ways to make
such accounting more scalable.  For example, not all uses of these
counters need precise counts; approximate counts that are updated
lazily could suffice.
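
To make the trade-off concrete, here is a minimal user-space sketch of
one possible scheme, in the spirit of the kernel's batched percpu
counters.  The structure, the BATCH threshold, and all function names
below are illustrative assumptions, not existing kernel code: per-CPU
deltas are folded into a shared total only when they exceed a batch
threshold, so an approximate read touches one value instead of
iterating over every CPU.

```c
#include <assert.h>
#include <stdatomic.h>

#define NR_CPUS 4
#define BATCH   32  /* illustrative flush threshold, not a kernel constant */

/* Hypothetical counter: per-CPU deltas plus a lazily updated total. */
struct approx_counter {
	atomic_long total;      /* approximate global value, cheap to read */
	long percpu[NR_CPUS];   /* per-CPU deltas, updated locally */
};

static void counter_add(struct approx_counter *c, int cpu, long delta)
{
	c->percpu[cpu] += delta;
	/* Fold into the shared total only once the local delta is large,
	 * keeping updates mostly CPU-local. */
	if (c->percpu[cpu] >= BATCH || c->percpu[cpu] <= -BATCH) {
		atomic_fetch_add(&c->total, c->percpu[cpu]);
		c->percpu[cpu] = 0;
	}
}

/* Fast approximate read: off by at most NR_CPUS * (BATCH - 1). */
static long counter_read_approx(struct approx_counter *c)
{
	return atomic_load(&c->total);
}

/* Slow precise read: sums every per-CPU delta, analogous to what
 * lruvec_page_state_local() does today. */
static long counter_read_precise(struct approx_counter *c)
{
	long sum = atomic_load(&c->total);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		sum += c->percpu[cpu];
	return sum;
}
```

Callers that tolerate staleness (e.g. reclaim heuristics) would use the
approximate read; only the few paths that need exact values would pay
for the full per-CPU summation.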

[1] https://lore.kernel.org/linux-mm/a64eecf1-81d4-371f-ff6d-1cb057bd091c@linux.intel.com/ 

2. Tiered memory accounting and management

Traditionally, all RAM is DRAM.  Some DRAM might be closer/faster
than others, but a byte of media has about the same cost whether it
is close or far.  But, with new memory tiers such as High-Bandwidth
Memory or Persistent Memory, there is a choice between fast/expensive
and slow/cheap.  But, the current memory cgroups still live in the
old model. There is only one set of limits, and it implies that all
memory has the same cost.  We would like to extend memory cgroups to
comprehend different memory tiers to give users a way to choose a mix
between fast/expensive and slow/cheap.

We would like to propose that, for systems with multiple memory tiers,
we add per-memory-cgroup accounting of top tier memory usage.  Top tier
memory is a precious resource, where it makes sense to impose soft
limits.  We can then start to actively demote the top tier memory used
by cgroups that exceed their allowance when the system experiences
memory pressure in the top tier.

There is existing per-cgroup memory soft limit infrastructure we can
leverage to implement such a scheme.  We would like to find out whether
this approach makes sense to people working on systems with multiple
memory tiers.
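
As a rough illustration of the proposed policy (a user-space sketch
only; the structure fields and function name below are hypothetical and
do not exist in the kernel): when the top tier is under pressure, a
cgroup over its top-tier soft limit becomes a demotion candidate, and
the excess over the limit is how much the reclaimer would try to
migrate down to the slower tier.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-cgroup top-tier accounting, mirroring the proposal:
 * one usage counter and one soft limit for the fast tier. */
struct memcg_tier {
	unsigned long toptier_usage;      /* pages of top tier (e.g. DRAM) in use */
	unsigned long toptier_soft_limit; /* allowance before demotion kicks in */
};

/* Returns how many pages of this cgroup's top-tier memory are
 * candidates for demotion to the slower tier.  Demotion only happens
 * under top-tier memory pressure, and only for cgroups over their
 * soft limit; otherwise the cgroup is left alone. */
static unsigned long demotion_candidate_pages(struct memcg_tier *cg,
					      bool toptier_pressure)
{
	if (!toptier_pressure || cg->toptier_usage <= cg->toptier_soft_limit)
		return 0;
	return cg->toptier_usage - cg->toptier_soft_limit;
}
```

The soft limit (rather than a hard limit) means a cgroup may overshoot
its allowance while the top tier has free capacity, and is only pushed
back down when the tier comes under pressure.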

Thread overview: 7+ messages
2020-02-05 18:34 [LSF/MM TOPIC] Memory cgroups, whether you like it or not Tim Chen
2020-02-14 10:45 ` [Lsf-pc] " Michal Hocko
2020-02-20 16:06   ` Kirill A. Shutemov
2020-02-20 16:19     ` Michal Hocko
2020-02-20 22:16       ` Tim Chen
2020-02-21  8:42         ` Michal Hocko
2020-03-04 20:52   ` Tim Chen
