From: Shakeel Butt <shakeelb@google.com>
To: Roman Gushchin <guro@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Linux MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Alexander Egorenkov <egorenar@linux.ibm.com>,
	Waiman Long <longman@redhat.com>, Tejun Heo <tj@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Jeremy Linton <jeremy.linton@arm.com>,
	Cgroups <cgroups@vger.kernel.org>
Subject: Re: [PATCH RESEND] mm: memcg: synchronize objcg lists with a dedicated spinlock
Date: Tue, 1 Feb 2022 15:49:24 -0800
Message-ID: <CALvZod5xXihut1mw1Q9vn9wuB0qOsm8ir63obf3_vv9rGZoacg@mail.gmail.com>
In-Reply-To: <Yfm1IHmoGdyUR81T@carbon.dhcp.thefacebook.com>

On Tue, Feb 1, 2022 at 2:33 PM Roman Gushchin <guro@fb.com> wrote:
>
> Alexander reported a circular lock dependency revealed by the mmap1
> ltp test:
>   LOCKDEP_CIRCULAR (suite: ltp, case: mtest06 (mmap1))
>           WARNING: possible circular locking dependency detected
>           5.17.0-20220113.rc0.git0.f2211f194038.300.fc35.s390x+debug #1 Not tainted
>           ------------------------------------------------------
>           mmap1/202299 is trying to acquire lock:
>           00000001892c0188 (css_set_lock){..-.}-{2:2}, at: obj_cgroup_release+0x4a/0xe0
>           but task is already holding lock:
>           00000000ca3b3818 (&sighand->siglock){-.-.}-{2:2}, at: force_sig_info_to_task+0x38/0x180
>           which lock already depends on the new lock.
>           the existing dependency chain (in reverse order) is:
>           -> #1 (&sighand->siglock){-.-.}-{2:2}:
>                  __lock_acquire+0x604/0xbd8
>                  lock_acquire.part.0+0xe2/0x238
>                  lock_acquire+0xb0/0x200
>                  _raw_spin_lock_irqsave+0x6a/0xd8
>                  __lock_task_sighand+0x90/0x190
>                  cgroup_freeze_task+0x2e/0x90
>                  cgroup_migrate_execute+0x11c/0x608
>                  cgroup_update_dfl_csses+0x246/0x270
>                  cgroup_subtree_control_write+0x238/0x518
>                  kernfs_fop_write_iter+0x13e/0x1e0
>                  new_sync_write+0x100/0x190
>                  vfs_write+0x22c/0x2d8
>                  ksys_write+0x6c/0xf8
>                  __do_syscall+0x1da/0x208
>                  system_call+0x82/0xb0
>           -> #0 (css_set_lock){..-.}-{2:2}:
>                  check_prev_add+0xe0/0xed8
>                  validate_chain+0x736/0xb20
>                  __lock_acquire+0x604/0xbd8
>                  lock_acquire.part.0+0xe2/0x238
>                  lock_acquire+0xb0/0x200
>                  _raw_spin_lock_irqsave+0x6a/0xd8
>                  obj_cgroup_release+0x4a/0xe0
>                  percpu_ref_put_many.constprop.0+0x150/0x168
>                  drain_obj_stock+0x94/0xe8
>                  refill_obj_stock+0x94/0x278
>                  obj_cgroup_charge+0x164/0x1d8
>                  kmem_cache_alloc+0xac/0x528
>                  __sigqueue_alloc+0x150/0x308
>                  __send_signal+0x260/0x550
>                  send_signal+0x7e/0x348
>                  force_sig_info_to_task+0x104/0x180
>                  force_sig_fault+0x48/0x58
>                  __do_pgm_check+0x120/0x1f0
>                  pgm_check_handler+0x11e/0x180
>           other info that might help us debug this:
>            Possible unsafe locking scenario:
>                  CPU0                    CPU1
>                  ----                    ----
>             lock(&sighand->siglock);
>                                          lock(css_set_lock);
>                                          lock(&sighand->siglock);
>             lock(css_set_lock);
>            *** DEADLOCK ***
>           2 locks held by mmap1/202299:
>            #0: 00000000ca3b3818 (&sighand->siglock){-.-.}-{2:2}, at: force_sig_info_to_task+0x38/0x180
>            #1: 00000001892ad560 (rcu_read_lock){....}-{1:2}, at: percpu_ref_put_many.constprop.0+0x0/0x168
>           stack backtrace:
>           CPU: 15 PID: 202299 Comm: mmap1 Not tainted 5.17.0-20220113.rc0.git0.f2211f194038.300.fc35.s390x+debug #1
>           Hardware name: IBM 3906 M04 704 (LPAR)
>           Call Trace:
>            [<00000001888aacfe>] dump_stack_lvl+0x76/0x98
>            [<0000000187c6d7be>] check_noncircular+0x136/0x158
>            [<0000000187c6e888>] check_prev_add+0xe0/0xed8
>            [<0000000187c6fdb6>] validate_chain+0x736/0xb20
>            [<0000000187c71e54>] __lock_acquire+0x604/0xbd8
>            [<0000000187c7301a>] lock_acquire.part.0+0xe2/0x238
>            [<0000000187c73220>] lock_acquire+0xb0/0x200
>            [<00000001888bf9aa>] _raw_spin_lock_irqsave+0x6a/0xd8
>            [<0000000187ef6862>] obj_cgroup_release+0x4a/0xe0
>            [<0000000187ef6498>] percpu_ref_put_many.constprop.0+0x150/0x168
>            [<0000000187ef9674>] drain_obj_stock+0x94/0xe8
>            [<0000000187efa464>] refill_obj_stock+0x94/0x278
>            [<0000000187eff55c>] obj_cgroup_charge+0x164/0x1d8
>            [<0000000187ed8aa4>] kmem_cache_alloc+0xac/0x528
>            [<0000000187bf2eb8>] __sigqueue_alloc+0x150/0x308
>            [<0000000187bf4210>] __send_signal+0x260/0x550
>            [<0000000187bf5f06>] send_signal+0x7e/0x348
>            [<0000000187bf7274>] force_sig_info_to_task+0x104/0x180
>            [<0000000187bf7758>] force_sig_fault+0x48/0x58
>            [<00000001888ae160>] __do_pgm_check+0x120/0x1f0
>            [<00000001888c0cde>] pgm_check_handler+0x11e/0x180
>           INFO: lockdep is turned off.
>
> In this example, a slab allocation from __send_signal() caused a
> percpu objcg stock to be refilled and drained, which resulted in the
> release of another, unrelated objcg. The objcg release path requires
> taking css_set_lock, which is used to synchronize objcg lists.
>
> This can create a circular dependency with sighand->siglock, which
> the freezer code takes while already holding css_set_lock (in order
> to freeze a task).
>
> In general, using css_set_lock to synchronize objcg lists makes any
> slab allocation or deallocation risky while css_set_lock, or any
> lock nested inside it, is held.
>
> To fix the problem and make the code more robust, let's stop using
> css_set_lock to synchronize objcg lists and use a new dedicated
> spinlock instead.
>
> Fixes: bf4f059954dc ("mm: memcg/slab: obj_cgroup API")
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Reported-by: Alexander Egorenkov <egorenar@linux.ibm.com>
> Tested-by: Alexander Egorenkov <egorenar@linux.ibm.com>
> Reviewed-by: Waiman Long <longman@redhat.com>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Jeremy Linton <jeremy.linton@arm.com>
> Cc: cgroups@vger.kernel.org

Reviewed-by: Shakeel Butt <shakeelb@google.com>
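
For context, the fix described in the quoted patch boils down to one
file-local lock: stop taking css_set_lock around objcg list
manipulation and take a dedicated spinlock instead. A minimal sketch
of that pattern follows, assuming a lock named objcg_lock as the
commit message describes; the function body is simplified from
mm/memcontrol.c and is an illustration, not the merged diff.

  /*
   * Sketch: a dedicated spinlock for objcg lists, decoupling them
   * from css_set_lock (illustrative, simplified).
   */
  static DEFINE_SPINLOCK(objcg_lock);

  static void obj_cgroup_release(struct percpu_ref *ref)
  {
  	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup,
  						refcnt);
  	unsigned long flags;

  	/*
  	 * Unlinking the objcg now contends only on objcg_lock, so a
  	 * release triggered by a slab allocation performed under
  	 * sighand->siglock can no longer close a cycle with the
  	 * freezer's "siglock taken under css_set_lock" ordering.
  	 */
  	spin_lock_irqsave(&objcg_lock, flags);
  	list_del(&objcg->list);
  	spin_unlock_irqrestore(&objcg_lock, flags);

  	percpu_ref_exit(ref);
  	kfree_rcu(objcg, rcu);
  }

The reparenting path (memcg_reparent_objcgs()) takes the same lock
when it walks and splices objcg lists, so every list user agrees on
the new lock and css_set_lock drops out of the slab allocation and
free paths entirely.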

Thread overview: 9 messages
2022-02-01 22:33 [PATCH RESEND] mm: memcg: synchronize objcg lists with a dedicated spinlock Roman Gushchin
2022-02-01 22:48 ` Tejun Heo
2022-02-01 23:26   ` Roman Gushchin
2022-02-01 23:49 ` Shakeel Butt [this message]
2022-02-02 15:58 ` Jeremy Linton
2022-02-02 16:19   ` Roman Gushchin
2022-02-03 23:19 ` Andrew Morton
2022-02-05 16:58   ` Roman Gushchin
2022-02-05 12:27 ` Muchun Song