linux-kernel.vger.kernel.org archive mirror
* [PATCH] mm: avoid slub allocation while holding list_lock
@ 2019-09-09  6:10 Yu Zhao
  2019-09-09 16:00 ` Kirill A. Shutemov
  2019-09-11 14:13 ` Andrew Morton
  0 siblings, 2 replies; 44+ messages in thread
From: Yu Zhao @ 2019-09-09  6:10 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: linux-mm, linux-kernel, Yu Zhao

If we are already holding list_lock, don't call kmalloc(). Otherwise we
will run into a deadlock, because kmalloc() can itself try to grab the
same lock.

Instead, allocate pages directly. Given that page->objects currently has
15 bits, we only need 1 page. We may waste some memory, but we only do
so when slub debugging is on.
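
As an aside, a minimal userspace sketch of the sizing math (not part of
the patch; PAGE_SIZE, div_round_up() and get_order_approx() below are
simplified stand-ins for the kernel's PAGE_SIZE, DIV_ROUND_UP() and
get_order()):

#include <stdio.h>

#define OBJECTS_BITS	15	/* width of the page->objects field */
#define BITS_PER_BYTE	8
#define PAGE_SIZE	4096UL	/* assume 4 KiB pages */

/* Simplified stand-in for the kernel's DIV_ROUND_UP(). */
static unsigned long div_round_up(unsigned long n, unsigned long d)
{
	return (n + d - 1) / d;
}

/* Simplified stand-in for the kernel's get_order(). */
static int get_order_approx(unsigned long size)
{
	unsigned long pages = div_round_up(size, PAGE_SIZE);
	int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	unsigned long max_objects = (1UL << OBJECTS_BITS) - 1;	/* 32767 */
	unsigned long bytes = div_round_up(max_objects, BITS_PER_BYTE);

	/* Prints "4096 bytes, order 0": the worst-case bitmap fits in one page. */
	printf("%lu bytes, order %d\n", bytes, get_order_approx(bytes));
	return 0;
}

With order 0 this degrades to a single-page __get_free_pages() call,
which goes to the page allocator rather than the slab allocator and
therefore cannot re-enter list_lock.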

  WARNING: possible recursive locking detected
  --------------------------------------------
  mount-encrypted/4921 is trying to acquire lock:
  (&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437

  but task is already holding lock:
  (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb

  other info that might help us debug this:
   Possible unsafe locking scenario:

         CPU0
         ----
    lock(&(&n->list_lock)->rlock);
    lock(&(&n->list_lock)->rlock);

   *** DEADLOCK ***
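
The recursion corresponds roughly to the following call path (a
reconstruction from the report above, not an exact trace; free_partial()
may be inlined into __kmem_cache_shutdown()):

  __kmem_cache_shutdown()
    free_partial()
      spin_lock_irq(&n->list_lock)                /* first acquisition */
      list_slab_objects()
        bitmap_zalloc(page->objects, GFP_ATOMIC)  /* kmalloc-backed */
          ___slab_alloc()
            spin_lock(&n->list_lock)              /* same lock: deadlock */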

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/slub.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..574a53ee31e1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3683,7 +3683,11 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 #ifdef CONFIG_SLUB_DEBUG
 	void *addr = page_address(page);
 	void *p;
-	unsigned long *map = bitmap_zalloc(page->objects, GFP_ATOMIC);
+	int order;
+	unsigned long *map;
+
+	order = get_order(DIV_ROUND_UP(page->objects, BITS_PER_BYTE));
+	map = (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
 	if (!map)
 		return;
 	slab_err(s, page, text, s->name);
@@ -3698,7 +3702,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 		}
 	}
 	slab_unlock(page);
-	bitmap_free(map);
+	free_pages((unsigned long)map, order);
 #endif
 }
 
-- 
2.23.0.187.g17f5b7556c-goog




Thread overview: 44+ messages
2019-09-09  6:10 [PATCH] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-09 16:00 ` Kirill A. Shutemov
2019-09-09 20:57   ` Tetsuo Handa
2019-09-09 21:39     ` Yu Zhao
2019-09-10  1:41       ` Tetsuo Handa
2019-09-10  2:16         ` Yu Zhao
2019-09-10  9:16       ` Kirill A. Shutemov
2019-09-11 14:13 ` Andrew Morton
2019-09-12  0:29   ` [PATCH 1/3] mm: correct mask size for slub page->objects Yu Zhao
2019-09-12  0:29     ` [PATCH 2/3] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-12  0:44       ` Kirill A. Shutemov
2019-09-12  1:31         ` Yu Zhao
2019-09-12  2:31         ` [PATCH v2 1/4] mm: correct mask size for slub page->objects Yu Zhao
2019-09-12  2:31           ` [PATCH v2 2/4] mm: clean up validate_slab() Yu Zhao
2019-09-12  9:46             ` Kirill A. Shutemov
2019-09-12  2:31           ` [PATCH v2 3/4] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-12 10:04             ` Kirill A. Shutemov
2019-09-12  2:31           ` [PATCH v2 4/4] mm: lock slub page when listing objects Yu Zhao
2019-09-12 10:06             ` Kirill A. Shutemov
2019-09-12 21:12               ` Yu Zhao
2019-09-13 14:58             ` Christopher Lameter
2019-09-12  9:40           ` [PATCH v2 1/4] mm: correct mask size for slub page->objects Kirill A. Shutemov
2019-09-12 21:11             ` Yu Zhao
2019-09-12 22:03               ` Kirill A. Shutemov
2019-09-14  0:07           ` [PATCH v3 1/2] mm: clean up validate_slab() Yu Zhao
2019-09-14  0:07             ` [PATCH v3 2/2] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-16  8:39             ` [PATCH v3 1/2] mm: clean up validate_slab() Kirill A. Shutemov
2019-11-08 19:39             ` [PATCH v4 " Yu Zhao
2019-11-08 19:39               ` [PATCH v4 2/2] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-11-09 20:52                 ` Christopher Lameter
2019-11-09 23:01                   ` Yu Zhao
2019-11-09 23:16                     ` Christopher Lameter
2019-11-10 18:47                       ` Yu Zhao
2019-11-11 15:47                         ` Christopher Lameter
2019-11-11 15:55                           ` [FIX] slub: Remove kmalloc under list_lock from list_slab_objects() V2 Christopher Lameter
2019-11-30 23:09                             ` Andrew Morton
2019-12-01  1:17                               ` Tetsuo Handa
2019-12-02 15:12                               ` Christopher Lameter
2019-12-07 22:03                                 ` Yu Zhao
2020-01-10 14:11                                   ` Vlastimil Babka
2020-01-12 11:03                                     ` Tetsuo Handa
2020-01-13  1:34                                       ` Christopher Lameter
2019-11-11 18:15                           ` [PATCH v4 2/2] mm: avoid slub allocation while holding list_lock Shakeel Butt
2019-09-12  0:29     ` [PATCH 3/3] mm: lock slub page when listing objects Yu Zhao
