From: Michal Hocko <mhocko@suse.com>
To: Neil Sun <neilsun@yunify.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/vmscan.c: drop_slab_node with task's memcg
Date: Tue, 6 Apr 2021 09:21:51 +0200
Message-ID: <YGwMD3DOymOFJ7O5@dhcp22.suse.cz>
In-Reply-To: <1617359934-7812-1-git-send-email-neilsun@yunify.com>
On Fri 02-04-21 18:38:54, Neil Sun wrote:
> This patch makes drop_slab_node() call shrink_slab() with the task's
> memcg, so we can free reclaimable slab objects belonging to the memcg
> /lxc/i-vbe1u8o7 with the following command:
You are changing the semantics of the existing user interface. This knob
has never been memcg aware and it is supposed to have a global impact. I
do not think we can simply change that without surprising some users or
even breaking them.
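
To make the difference concrete, here is a rough sketch of the two
behaviours (based on the hunk quoted below, not the exact upstream code):

	/*
	 * Today: the walk starts at the root memcg and visits every
	 * memcg in the system, so writing to drop_caches has a global
	 * effect no matter who performs the write.
	 */
	memcg = mem_cgroup_iter(NULL, NULL, NULL);
	do {
		freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);

	/*
	 * With the patch: the walk starts at the calling task's memcg,
	 * so the same write behaves differently depending on which
	 * cgroup the writer happens to run in.
	 */
	memcg = mem_cgroup_from_task(current);
	do {
		freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);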
> cgexec -g memory:/lxc/i-vbe1u8o7 sysctl vm.drop_caches=2
>
> Tested with the following steps:
>
> root@i-yl0pwrt8:~# free -h
>                total        used        free      shared  buff/cache   available
> Mem:            62Gi       265Mi        62Gi       1.0Mi       290Mi        61Gi
> Swap:           31Gi          0B        31Gi
> root@i-yl0pwrt8:~# (cd /tmp && /root/generate_slab_cache)
> root@i-yl0pwrt8:~# free -h
>                total        used        free      shared  buff/cache   available
> Mem:            62Gi       266Mi        60Gi       1.0Mi       2.2Gi        61Gi
> Swap:           31Gi          0B        31Gi
> root@i-yl0pwrt8:~# cgcreate -g memory:/lxc/i-vbe1u8o7
> root@i-yl0pwrt8:~# cgexec -g memory:/lxc/i-vbe1u8o7 /root/generate_slab_cache
> root@i-yl0pwrt8:~# free -h
>                total        used        free      shared  buff/cache   available
> Mem:            62Gi       267Mi        58Gi       1.0Mi       4.1Gi        61Gi
> Swap:           31Gi          0B        31Gi
> root@i-yl0pwrt8:~# cgexec -g memory:/lxc/i-vbe1u8o7 sysctl vm.drop_caches=2
> vm.drop_caches = 2
> root@i-yl0pwrt8:~# free -h
>                total        used        free      shared  buff/cache   available
> Mem:            62Gi       268Mi        60Gi       1.0Mi       2.2Gi        61Gi
> Swap:           31Gi          0B        31Gi
> root@i-yl0pwrt8:~# sysctl vm.drop_caches=2
> vm.drop_caches = 2
> root@i-yl0pwrt8:~# free -h
>                total        used        free      shared  buff/cache   available
> Mem:            62Gi       267Mi        62Gi       1.0Mi       290Mi        61Gi
> Swap:           31Gi          0B        31Gi
>
> Signed-off-by: Neil Sun <neilsun@yunify.com>
> ---
> mm/vmscan.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 562e87cb..81d770a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -702,7 +702,7 @@ void drop_slab_node(int nid)
>  			return;
>
>  		freed = 0;
> -		memcg = mem_cgroup_iter(NULL, NULL, NULL);
> +		memcg = mem_cgroup_from_task(current);
>  		do {
>  			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
>  		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
> --
> 2.7.4
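
A side note on the hunk above, in case the intention is to restrict the
reclaim to the caller's cgroup: unless I misread mem_cgroup_iter(), the
continuation call still passes NULL as the root, so after starting at the
task's memcg the walk carries on through the rest of the hierarchy rather
than staying within the task's subtree. A subtree-scoped walk would look
roughly like this (a sketch only; locking/refcounting around
mem_cgroup_from_task() left out):

	struct mem_cgroup *root = mem_cgroup_from_task(current);
	struct mem_cgroup *memcg;

	/* confine the walk to root and its descendants */
	memcg = mem_cgroup_iter(root, NULL, NULL);
	do {
		freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
	} while ((memcg = mem_cgroup_iter(root, memcg, NULL)) != NULL);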
--
Michal Hocko
SUSE Labs
Thread overview: 10+ messages
2021-04-02 10:38 [PATCH] mm/vmscan.c: drop_slab_node with task's memcg Neil Sun
2021-04-02 14:38 ` kernel test robot
2021-04-02 14:50 ` kernel test robot
2021-04-06 7:21 ` Michal Hocko [this message]
2021-04-06 11:30 ` Neil Sun
2021-04-06 11:39 ` Michal Hocko
2021-04-06 14:34 ` Neil Sun
2021-04-06 14:39 ` Michal Hocko
2021-04-06 15:12 ` Neil Sun
2021-04-06 17:38 ` Michal Hocko