From: Waiman Long <longman@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Waiman Long <longman@redhat.com>
Subject: [PATCH 3/3] mm/slub: Fix potential deadlock problem in slab_attr_store()
Date: Mon, 10 Feb 2020 15:46:51 -0500
Message-ID: <20200210204651.21674-4-longman@redhat.com>
In-Reply-To: <20200210204651.21674-1-longman@redhat.com>

The following lockdep splat was seen:

    [  176.241923] ======================================================
    [  176.241924] WARNING: possible circular locking dependency detected
    [  176.241926] 4.18.0-172.rt13.29.el8.x86_64+debug #1 Not tainted
    [  176.241927] ------------------------------------------------------
    [  176.241929] slub_cpu_partia/5371 is trying to acquire lock:
    [  176.241930] ffffffffa0b83718 (slab_mutex){+.+.}, at: slab_attr_store+0x6b/0xe0
    [  176.241941] but task is already holding lock:
    [  176.241942] ffff88bb6d8b83c8 (kn->count#103){++++}, at: kernfs_fop_write+0x1cc/0x400
    [  176.241947] which lock already depends on the new lock.

    [  176.241949] the existing dependency chain (in reverse order) is:
    [  176.241949] -> #1 (kn->count#103){++++}:
    [  176.241955]        __kernfs_remove+0x616/0x800
    [  176.241957]        kernfs_remove_by_name_ns+0x3e/0x80
    [  176.241959]        sysfs_slab_add+0x1c6/0x330
    [  176.241961]        __kmem_cache_create+0x15f/0x1b0
    [  176.241964]        create_cache+0xe1/0x220
    [  176.241966]        kmem_cache_create_usercopy+0x1a3/0x260
    [  176.241967]        kmem_cache_create+0x12/0x20
    [  176.242076]        mlx5_init_fs+0x18d/0x1a00 [mlx5_core]
    [  176.242100]        mlx5_load_one+0x3b4/0x1730 [mlx5_core]
    [  176.242124]        init_one+0x901/0x11b0 [mlx5_core]
    [  176.242127]        local_pci_probe+0xd4/0x180
    [  176.242131]        work_for_cpu_fn+0x51/0xa0
    [  176.242133]        process_one_work+0x91a/0x1ac0
    [  176.242134]        worker_thread+0x536/0xb40
    [  176.242136]        kthread+0x30c/0x3d0
    [  176.242140]        ret_from_fork+0x27/0x50
    [  176.242140] -> #0 (slab_mutex){+.+.}:
    [  176.242145]        __lock_acquire+0x22cb/0x48c0
    [  176.242146]        lock_acquire+0x134/0x4c0
    [  176.242148]        _mutex_lock+0x28/0x40
    [  176.242150]        slab_attr_store+0x6b/0xe0
    [  176.242151]        kernfs_fop_write+0x251/0x400
    [  176.242154]        vfs_write+0x157/0x460
    [  176.242155]        ksys_write+0xb8/0x170
    [  176.242158]        do_syscall_64+0x13c/0x710
    [  176.242160]        entry_SYSCALL_64_after_hwframe+0x6a/0xdf
    [  176.242161]
                   other info that might help us debug this:

    [  176.242161]  Possible unsafe locking scenario:

    [  176.242162]        CPU0                    CPU1
    [  176.242163]        ----                    ----
    [  176.242163]   lock(kn->count#103);
    [  176.242165]                                lock(slab_mutex);
    [  176.242166]                                lock(kn->count#103);
    [  176.242167]   lock(slab_mutex);
    [  176.242169]
                    *** DEADLOCK ***

    [  176.242170] 3 locks held by slub_cpu_partia/5371:
    [  176.242170]  #0: ffff888705e3a800 (sb_writers#4){.+.+}, at: vfs_write+0x31c/0x460
    [  176.242174]  #1: ffff889aeec4d658 (&of->mutex){+.+.}, at: kernfs_fop_write+0x1a9/0x400
    [  176.242177]  #2: ffff88bb6d8b83c8 (kn->count#103){++++}, at: kernfs_fop_write+0x1cc/0x400
    [  176.242180]
                   stack backtrace:
    [  176.242183] CPU: 36 PID: 5371 Comm: slub_cpu_partia Not tainted 4.18.0-172.rt13.29.el8.x86_64+debug #1
    [  176.242184] Hardware name: AMD Corporation DAYTONA_X/DAYTONA_X, BIOS RDY1005C 11/22/2019
    [  176.242185] Call Trace:
    [  176.242190]  dump_stack+0x9a/0xf0
    [  176.242193]  check_noncircular+0x317/0x3c0
    [  176.242195]  ? print_circular_bug+0x1e0/0x1e0
    [  176.242199]  ? native_sched_clock+0x32/0x1e0
    [  176.242202]  ? sched_clock+0x5/0x10
    [  176.242205]  ? sched_clock_cpu+0x238/0x340
    [  176.242208]  __lock_acquire+0x22cb/0x48c0
    [  176.242213]  ? trace_hardirqs_on+0x10/0x10
    [  176.242215]  ? trace_hardirqs_on+0x10/0x10
    [  176.242218]  lock_acquire+0x134/0x4c0
    [  176.242220]  ? slab_attr_store+0x6b/0xe0
    [  176.242223]  _mutex_lock+0x28/0x40
    [  176.242225]  ? slab_attr_store+0x6b/0xe0
    [  176.242227]  slab_attr_store+0x6b/0xe0
    [  176.242229]  ? sysfs_file_ops+0x160/0x160
    [  176.242230]  kernfs_fop_write+0x251/0x400
    [  176.242232]  ? __sb_start_write+0x26a/0x3f0
    [  176.242234]  vfs_write+0x157/0x460
    [  176.242237]  ksys_write+0xb8/0x170
    [  176.242239]  ? __ia32_sys_read+0xb0/0xb0
    [  176.242242]  ? do_syscall_64+0xb9/0x710
    [  176.242245]  do_syscall_64+0x13c/0x710
    [  176.242247]  entry_SYSCALL_64_after_hwframe+0x6a/0xdf

The chain above shows how the inversion arises. On the cache creation
side, kmem_cache_create() acquires slab_mutex before calling
__kmem_cache_create(), and sysfs_slab_add() can call
kernfs_remove_by_name_ns(), whose __kernfs_remove() drains active
references to the sysfs file (kn->count); this establishes
slab_mutex -> kn->count. On the sysfs side, a write that reaches
slab_attr_store() through kernfs_fop_write() already holds kn->count
and then takes slab_mutex, establishing kn->count -> slab_mutex.

To break this circular lock dependency, slab_attr_store() (CPU0 above)
must not block unconditionally on slab_mutex. A plain
mutex_trylock(&slab_mutex), however, fails easily under contention,
which would surface as spurious errors to user space. The new
mutex_timed_lock() function is therefore used to attempt the lock with
a 100ms timeout, returning -EBUSY only if the mutex cannot be acquired
within that time.
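
For reference, mutex_timed_lock() itself is introduced by patch 1 of
this series and is not reproduced here. As a rough illustration of the
intended semantics only -- the real implementation presumably hooks
into the mutex slowpath rather than spinning on trylock -- something
equivalent could be built from existing primitives:

    #include <linux/delay.h>	/* usleep_range() */
    #include <linux/jiffies.h>	/* msecs_to_jiffies(), time_before() */
    #include <linux/mutex.h>	/* struct mutex, mutex_trylock() */

    /*
     * Illustrative sketch only, not the code from patch 1/3: return 0
     * with the mutex held, or -ETIMEDOUT if it could not be acquired
     * within @ms milliseconds.
     */
    static int mutex_timed_lock_sketch(struct mutex *lock, unsigned int ms)
    {
    	unsigned long timeout = jiffies + msecs_to_jiffies(ms);

    	do {
    		if (mutex_trylock(lock))
    			return 0;	/* lock acquired */
    		usleep_range(50, 100);	/* brief backoff; process context */
    	} while (time_before(jiffies, timeout));

    	return -ETIMEDOUT;	/* caller maps this to -EBUSY */
    }

With a 100ms budget, the sysfs write fails with -EBUSY only if
slab_mutex stays contended for the whole interval, which should be much
rarer than a bare trylock failing.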

Signed-off-by: Waiman Long <longman@redhat.com>
---
 mm/slub.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 17dc00e33115..495bec9b66ab 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5536,7 +5536,12 @@ static ssize_t slab_attr_store(struct kobject *kobj,
 	if (slab_state >= FULL && err >= 0 && is_root_cache(s)) {
 		struct kmem_cache *c;
 
-		mutex_lock(&slab_mutex);
+		/*
+		 * Trylock with a 100ms timeout to avoid the deadlock above
+		 */
+		if (mutex_timed_lock(&slab_mutex, 100) < 0)
+			return -EBUSY;
+
 		if (s->max_attr_size < len)
 			s->max_attr_size = len;
 
-- 
2.18.1

