From: Yang Yingliang <yangyingliang@huawei.com>
To: Tejun Heo <tj@kernel.org>
Cc: <linux-kernel@vger.kernel.org>, <cgroups@vger.kernel.org>,
	<netdev@vger.kernel.org>,
	"Libin (Huawei)" <huawei.libin@huawei.com>, <guofan5@huawei.com>,
	<wangkefeng.wang@huawei.com>, <lizefan@huawei.com>
Subject: Re: cgroup pointed by sock is leaked on mode switch
Date: Wed, 6 May 2020 09:50:43 +0800
Message-ID: <c9879fd2-cb91-2a08-8293-c6a436b5a539@huawei.com>
In-Reply-To: <20200505160639.GG12217@mtj.thefacebook.com>

+cc lizefan@huawei.com

On 2020/5/6 0:06, Tejun Heo wrote:
> Hello, Yang.
>
> On Sat, May 02, 2020 at 06:27:21PM +0800, Yang Yingliang wrote:
>> I find that the nr_dying_descendants count keeps increasing:
>>
>> linux-dVpNUK:~ # find /sys/fs/cgroup/ -name cgroup.stat -exec grep '^nr_dying_descendants [^0]' {} +
>> /sys/fs/cgroup/unified/cgroup.stat:nr_dying_descendants 80
>> /sys/fs/cgroup/unified/system.slice/cgroup.stat:nr_dying_descendants 1
>> /sys/fs/cgroup/unified/system.slice/system-hostos.slice/cgroup.stat:nr_dying_descendants 1
>> /sys/fs/cgroup/unified/lxc/cgroup.stat:nr_dying_descendants 79
>> /sys/fs/cgroup/unified/lxc/5f1fdb8c54fa40c3e599613dab6e4815058b76ebada8a27bc1fe80c0d4801764/cgroup.stat:nr_dying_descendants 78
>> /sys/fs/cgroup/unified/lxc/5f1fdb8c54fa40c3e599613dab6e4815058b76ebada8a27bc1fe80c0d4801764/system.slice/cgroup.stat:nr_dying_descendants 78
> Those numbers are nowhere close to causing oom issues. There are some
> aspects of page and other cache draining which are being improved, but unless
> you're seeing numbers multiple orders of magnitude higher, this isn't the
> source of your problem.
>
>> The situation is the same as the one described by commit bd1060a1d671 ("sock,
>> cgroup: add sock->sk_cgroup"):
>> "On mode switch, cgroup references which are already being pointed to by
>> socks may be leaked."
> I'm doubtful that you're hitting that issue. Mode switching means memcg
> being switched between cgroup1 and cgroup2 hierarchies, which is unlikely to
> be what's happening when you're launching docker containers.
>
> The first step would be identifying where memory is going and finding out
> whether memcg is actually being switched between cgroup1 and 2 - look at the
> hierarchy number in /proc/cgroups; if that's switching between 0 and
> something non-zero, it is switching.
>
> Thanks.
>
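
For reference, a minimal sketch of the check suggested above (it assumes the standard four-column /proc/cgroups layout: subsys_name, hierarchy, num_cgroups, enabled; the commands are illustrative, not part of the original exchange):

  # Print memcg's hierarchy ID before and after launching a container.
  # 0 means memcg is attached to the cgroup2 (unified) hierarchy; a
  # non-zero ID means a cgroup1 hierarchy, so a change between the two
  # across launches indicates a mode switch.
  awk '$1 == "memory" { print "memory hierarchy id:", $2 }' /proc/cgroups

  # Re-run the earlier scan periodically to see whether nr_dying_descendants
  # keeps growing rather than settling once cache draining catches up.
  find /sys/fs/cgroup/ -name cgroup.stat \
      -exec grep '^nr_dying_descendants [^0]' {} +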



Thread overview: 6+ messages
2020-05-02 10:27 cgroup pointed by sock is leaked on mode switch Yang Yingliang
2020-05-05 16:06 ` Tejun Heo
2020-05-06  1:50   ` Yang Yingliang [this message]
2020-05-06  2:16     ` Zefan Li
2020-05-06  7:51       ` Zefan Li
2020-05-09  2:31         ` Yang Yingliang
