* [LSF/MM TOPIC] dying memory cgroups and slab reclaim issues
From: Roman Gushchin @ 2019-02-19  7:13 UTC
  To: lsf-pc
  Cc: linux-fsdevel, linux-mm, riel, dchinner, guroan, Kernel Team, hannes

Sorry, once more, now with fsdevel@ in cc, as requested by Dave.
--

Recent reverts of the memcg leak fixes [1, 2] reintroduced the problem of
accumulating dying memory cgroups. This is a serious problem: on most of our
machines we've seen thousands of dying cgroups, and the corresponding memory
footprint was measured in hundreds of megabytes. The problem has also been
independently discovered by other companies.
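
For reference, the scale of the problem is easy to check on a cgroup v2
system: the root cgroup.stat file exposes an nr_dying_descendants counter.
A minimal userspace sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup:

#include <stdio.h>

/*
 * Prints the counters from the root cgroup.stat file (cgroup v2).
 * nr_dying_descendants is the number of removed cgroups that are still
 * pinned in memory, e.g. by charged slab objects.
 */
int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/cgroup.stat", "r");
	char key[64];
	unsigned long val;

	if (!f) {
		perror("cgroup.stat");
		return 1;
	}
	while (fscanf(f, "%63s %lu", key, &val) == 2)
		printf("%s: %lu\n", key, val);
	fclose(f);
	return 0;
}

On the affected machines mentioned above, this counter was in the thousands.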

The fixes were reverted due to an xfs regression investigated by Dave Chinner.
At the same time we've seen a very small (0.18%) cpu regression on some hosts,
which prompted Rik van Riel to propose a patch [3] aimed at fixing it. The idea
is to accumulate small amounts of memory pressure and apply them periodically,
so that small shrinker lists aren't overscanned. According to Jan Kara's
data [4], Rik's patch reduced the regression, but didn't eliminate it entirely.
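
A very rough sketch of that idea is below. It is illustrative only, not
Rik's actual patch [3]: sketch_shrink_one() and its scaling are made up,
and only the real kernel types (struct shrinker, struct shrink_control)
are assumed:

#include <linux/atomic.h>
#include <linux/shrinker.h>

/*
 * Illustrative sketch only: defer scan work that is smaller than a
 * shrinker batch into nr_deferred, and only invoke the shrinker once
 * enough work has accumulated, so tiny per-memcg lists aren't rescanned
 * on every reclaim pass.
 */
static unsigned long sketch_shrink_one(struct shrinker *shrinker,
				       struct shrink_control *sc,
				       atomic_long_t *nr_deferred,
				       int priority)
{
	unsigned long freeable = shrinker->count_objects(shrinker, sc);
	unsigned long batch = shrinker->batch ? shrinker->batch : 128;
	unsigned long total;

	if (!freeable)
		return 0;

	/* Scale the scan target by reclaim priority and add deferred work. */
	total = (freeable >> priority) + atomic_long_xchg(nr_deferred, 0);

	if (total < batch) {
		/* Too little work: remember it for a later pass. */
		atomic_long_add(total, nr_deferred);
		return 0;
	}

	sc->nr_to_scan = total;
	return shrinker->scan_objects(shrinker, sc);
}

The point is that work below one batch isn't dropped, it's carried over,
so small lists still get scanned eventually.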

The path forward isn't entirely clear now, and the status quo isn't acceptable
due to the memcg leak bug. Dave and Michal's position is to focus on the dying
memory cgroup case and apply artificial memory pressure to the corresponding
slabs (probably during the cgroup deletion process). This approach could
theoretically be less harmful to the subtle scanning balance and avoid causing
any regressions.
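
If I understand the proposal correctly, it would look roughly like the
sketch below: a hook called from the cgroup deletion path that applies
reclaim pressure only to the dying memcg's slabs. This is only a sketch
under heavy assumptions; shrink_slab_memcg_node() is a made-up name
standing in for a memcg-aware shrink_slab() call:

#include <linux/gfp.h>
#include <linux/memcontrol.h>
#include <linux/nodemask.h>
#include <linux/shrinker.h>

/*
 * Hypothetical hook called from the cgroup deletion path.
 * shrink_slab_memcg_node() does not exist; it stands in for a
 * memcg-aware shrink_slab() invocation limited to one node.
 */
static void shrink_dying_memcg_slabs(struct mem_cgroup *memcg)
{
	struct shrink_control sc = {
		.gfp_mask = GFP_KERNEL,
		.memcg = memcg,
	};
	int nid;

	for_each_online_node(nid) {
		sc.nid = nid;
		/* Scan only this memcg's shrinker lists, aggressively. */
		shrink_slab_memcg_node(&sc, 0 /* priority */);
	}
}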

In my opinion, that's not necessarily true. Slab objects can be shared between
cgroups (a dentry charged to one cgroup, for example, may still be in active
use by processes in other cgroups), and often can't be reclaimed on cgroup
removal without an impact on the rest of the system. Applying constant
artificial memory pressure only to objects accounted to dying cgroups is
challenging and will likely cause quite significant overhead. Also, by
"forgetting" some slab objects under light or even moderate memory pressure,
we're wasting memory that could be used for something useful. Dying cgroups
just make this problem more obvious because of their size.

So, using "natural" memory pressure in such a way that all slab objects are
scanned periodically seems to me to be the best solution. The devil is in the
details, and how to do it without causing any regressions is currently an open
question.

Completely re-parenting slabs to the parent cgroup (not only the shrinker
lists) is also a potential option to consider.
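
To illustrate, a purely hypothetical sketch of what "complete" re-parenting
could look like; for_each_memcg_slab_page() and set_slab_page_memcg() are
invented helpers that don't exist in the kernel today:

#include <linux/memcontrol.h>
#include <linux/mm_types.h>

/*
 * Hypothetical sketch of complete slab reparenting: on offlining, hand
 * ownership of the cgroup's slab pages over to its parent, so the dying
 * cgroup no longer pins its kmem charges. Both helpers used below are
 * invented for illustration and do not exist in the kernel.
 */
static void reparent_slab_pages(struct mem_cgroup *memcg)
{
	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
	struct page *page;

	if (!parent)
		parent = root_mem_cgroup;

	for_each_memcg_slab_page(memcg, page) {
		/* Move both the charge and the ownership pointer. */
		set_slab_page_memcg(page, parent);
	}
}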

It would be nice to discuss the problem at LSF/MM, agree on a general path
forward, and put together a list of benchmarks that can be used to validate
a solution.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a9a238e83fbb0df31c3b9b67003f8f9d1d1b6c96
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=69056ee6a8a3d576ed31e38b3b14c70d6c74edcc
[3] https://lkml.org/lkml/2019/1/28/1865
[4] https://lkml.org/lkml/2019/2/8/336
