From: Waiman Long <longman@redhat.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Roman Gushchin <guro@fb.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Shakeel Butt <shakeelb@google.com>,
	Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: Re: [PATCH v2 1/2] mm, slab: Extend slab/shrink to shrink all memcg caches
Date: Fri, 19 Jul 2019 10:09:06 -0400
Message-ID: <caa120bb-bfcc-45ef-08e1-af40e52b43df@redhat.com>
In-Reply-To: <20190719062052.GK30461@dhcp22.suse.cz>

On 7/19/19 2:20 AM, Michal Hocko wrote:
> On Wed 17-07-19 16:24:12, Waiman Long wrote:
>> Currently, a value of "1" is written to the /sys/kernel/slab/<slab>/shrink
>> file to shrink the slab by flushing out all the per-cpu slabs and the
>> free slabs in the partial lists. This can be useful to squeeze out a bit
>> more memory under extreme conditions, as well as to make the active
>> object counts in /proc/slabinfo more accurate.
>>
>> This usually applies only to the root caches, as the SLUB_MEMCG_SYSFS_ON
>> option is usually not enabled and "slub_memcg_sysfs=1" is not set. Even
>> if memcg sysfs is turned on, it is too cumbersome and impractical to
>> manage all those per-memcg sysfs files in a real production system.
>>
>> So there is currently no practical way to shrink memcg caches. Fix
>> this by making a write to the shrink sysfs file of the root cache
>> scan all the available memcg caches and shrink them as well. For a
>> non-root memcg cache (when SLUB_MEMCG_SYSFS_ON or slub_memcg_sysfs
>> is on), only that cache will be shrunk when its shrink file is
>> written.
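A minimal sketch of what that write handler could do (the function name is
hypothetical; it assumes the existing for_each_memcg_cache() iterator and
slab_mutex from mm/slab.h, and the actual patch may order the locking and
hotplug protection differently):

  /* Sketch: shrink a root cache and all of its per-memcg children. */
  static void shrink_all_memcg_caches(struct kmem_cache *s)
  {
          struct kmem_cache *c;

          /* Flush the root cache's per-cpu and free partial slabs first. */
          kmem_cache_shrink(s);

          /*
           * Walk the per-memcg child caches under slab_mutex so the
           * children list cannot be modified underneath us.
           */
          mutex_lock(&slab_mutex);
          for_each_memcg_cache(c, s)
                  kmem_cache_shrink(c);
          mutex_unlock(&slab_mutex);
  }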
> I would mention that memcg unawareness was an oversight more than
> anything else. The interface is intended to shrink all pcp data of the
> cache. The fact that we are using per-memcg internal caches is an
> implementation detail.
>
>> On a 2-socket, 64-core, 256-thread arm64 system with 64k pages, after
>> a parallel kernel build, the amount of memory occupied by slabs
>> before shrinking was:
>>
>>  # grep task_struct /proc/slabinfo
>>  task_struct        53137  53192   4288   61    4 : tunables    0    0    0 : slabdata    872    872      0
>>  # grep "^S[lRU]" /proc/meminfo
>>  Slab:            3936832 kB
>>  SReclaimable:     399104 kB
>>  SUnreclaim:      3537728 kB
>>
>> After shrinking slabs:
>>
>>  # grep "^S[lRU]" /proc/meminfo
>>  Slab:            1356288 kB
>>  SReclaimable:     263296 kB
>>  SUnreclaim:      1092992 kB
>>  # grep task_struct /proc/slabinfo
>>  task_struct         2764   6832   4288   61    4 : tunables    0    0    0 : slabdata    112    112      0
> Now that you are touching the documentation, I would just add a note
> that shrinking might be expensive and can block other slab operations,
> so it should be used with some care.
>
Good point. I will update the patch to include such a note in the
documentation.

Thanks,
Longman



Thread overview: 13+ messages
2019-07-17 20:24 [PATCH v2 0/2] mm, slab: Extend slab/shrink to shrink all memcg caches Waiman Long
2019-07-17 20:24 ` [PATCH v2 1/2] " Waiman Long
2019-07-18 11:38   ` Christopher Lameter
2019-07-18 17:05     ` Roman Gushchin
2019-07-19  6:20   ` Michal Hocko
2019-07-19 14:09     ` Waiman Long [this message]
2019-07-17 20:24 ` [PATCH v2 2/2] mm, slab: Show last shrink time in us when slab/shrink is read Waiman Long
2019-07-18 11:39   ` Christopher Lameter
2019-07-18 14:36     ` Waiman Long
2019-07-18 18:04       ` Waiman Long
2019-07-19  6:14   ` Michal Hocko
2019-07-19 14:07     ` Waiman Long
2019-07-19 14:29       ` Michal Hocko
