From: Yu Zhao <yuzhao@google.com>
To: Christoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao <yuzhao@google.com>
Subject: [PATCH] mm: avoid slub allocation while holding list_lock
Date: Mon, 9 Sep 2019 00:10:16 -0600
Message-ID: <20190909061016.173927-1-yuzhao@google.com> (raw)

If we are already holding list_lock, don't call kmalloc(). Otherwise
we will run into a deadlock, because kmalloc() also tries to grab the
same lock. Instead, allocate pages directly. Given that page->objects
currently has 15 bits, we need at most one page. We may waste some
memory, but we only do so when slub debug is on.

  WARNING: possible recursive locking detected
  --------------------------------------------
  mount-encrypted/4921 is trying to acquire lock:
  (&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437

  but task is already holding lock:
  (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb

  other info that might help us debug this:
   Possible unsafe locking scenario:

         CPU0
         ----
    lock(&(&n->list_lock)->rlock);
    lock(&(&n->list_lock)->rlock);

   *** DEADLOCK ***

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/slub.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..574a53ee31e1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3683,7 +3683,11 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 #ifdef CONFIG_SLUB_DEBUG
 	void *addr = page_address(page);
 	void *p;
-	unsigned long *map = bitmap_zalloc(page->objects, GFP_ATOMIC);
+	int order;
+	unsigned long *map;
+
+	order = get_order(DIV_ROUND_UP(page->objects, BITS_PER_BYTE));
+	map = (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
 	if (!map)
 		return;
 	slab_err(s, page, text, s->name);
@@ -3698,7 +3702,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 		}
 	}
 	slab_unlock(page);
-	bitmap_free(map);
+	free_pages((unsigned long)map, order);
 #endif
 }

-- 
2.23.0.187.g17f5b7556c-goog
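A side note on the sizing claim above: page->objects is a 15-bit field,
so a slab holds at most 2^15 = 32768 objects; at one bit per object the
bitmap needs 4096 bytes, which is exactly one 4 KiB page, i.e. an
order-0 allocation. The userspace sketch below reproduces that
arithmetic; PAGE_SHIFT and the helper macros here mirror common kernel
definitions but are assumptions of this illustration, not part of the
patch.

	#include <stdio.h>

	#define PAGE_SHIFT	12	/* assuming 4 KiB pages */
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)
	#define BITS_PER_BYTE	8
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	/* Smallest order such that (PAGE_SIZE << order) >= size. */
	static int get_order(unsigned long size)
	{
		int order = 0;

		while ((PAGE_SIZE << order) < size)
			order++;
		return order;
	}

	int main(void)
	{
		/* page->objects has 15 bits: at most 2^15 objects per slab. */
		unsigned long max_objects = 1UL << 15;
		unsigned long bytes = DIV_ROUND_UP(max_objects, BITS_PER_BYTE);

		/* 32768 bits -> 4096 bytes -> one 4 KiB page (order 0). */
		printf("bitmap: %lu bytes, order %d\n", bytes, get_order(bytes));
		return 0;
	}

Compiled and run, this prints "bitmap: 4096 bytes, order 0": even the
largest possible per-slab bitmap fits in a single order-0 page, which is
why the worst case wastes at most one page, and only with slub debug on.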
Thread overview: 40+ messages

2019-09-09  6:10 Yu Zhao [this message]
2019-09-09 16:00 ` Kirill A. Shutemov
[not found] ` <e5e25aa3-651d-92b4-ac82-c5011c66a7cb@I-love.SAKURA.ne.jp>
2019-09-09 21:39 ` Yu Zhao
[not found] ` <201909100141.x8A1fVdu048305@www262.sakura.ne.jp>
2019-09-10  2:16 ` Yu Zhao
2019-09-10  9:16 ` Kirill A. Shutemov
2019-09-11 14:13 ` Andrew Morton
2019-09-12  0:29 ` [PATCH 1/3] mm: correct mask size for slub page->objects Yu Zhao
2019-09-12  0:29 ` [PATCH 2/3] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-12  0:44 ` Kirill A. Shutemov
2019-09-12  1:31 ` Yu Zhao
2019-09-12  2:31 ` [PATCH v2 1/4] mm: correct mask size for slub page->objects Yu Zhao
2019-09-12  2:31 ` [PATCH v2 2/4] mm: clean up validate_slab() Yu Zhao
2019-09-12  9:46 ` Kirill A. Shutemov
2019-09-12  2:31 ` [PATCH v2 3/4] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-12 10:04 ` Kirill A. Shutemov
2019-09-12  2:31 ` [PATCH v2 4/4] mm: lock slub page when listing objects Yu Zhao
2019-09-12 10:06 ` Kirill A. Shutemov
2019-09-12 21:12 ` Yu Zhao
2019-09-13 14:58 ` Christopher Lameter
2019-09-12  9:40 ` [PATCH v2 1/4] mm: correct mask size for slub page->objects Kirill A. Shutemov
2019-09-12 21:11 ` Yu Zhao
2019-09-12 22:03 ` Kirill A. Shutemov
2019-09-14  0:07 ` [PATCH v3 1/2] mm: clean up validate_slab() Yu Zhao
2019-09-14  0:07 ` [PATCH v3 2/2] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-09-16  8:39 ` [PATCH v3 1/2] mm: clean up validate_slab() Kirill A. Shutemov
2019-11-08 19:39 ` [PATCH v4 " Yu Zhao
2019-11-08 19:39 ` [PATCH v4 2/2] mm: avoid slub allocation while holding list_lock Yu Zhao
2019-11-09 20:52 ` Christopher Lameter
2019-11-09 23:01 ` Yu Zhao
2019-11-09 23:16 ` Christopher Lameter
2019-11-10 18:47 ` Yu Zhao
2019-11-11 15:47 ` Christopher Lameter
2019-11-11 15:55 ` [FIX] slub: Remove kmalloc under list_lock from list_slab_objects() V2 Christopher Lameter
2019-11-30 23:09 ` Andrew Morton
2019-12-02 15:12 ` Christopher Lameter
2019-12-07 22:03 ` Yu Zhao
2020-01-10 14:11 ` Vlastimil Babka
[not found] ` <e0ed44ae-8dae-e8db-9d14-2b09b239af8e@i-love.sakura.ne.jp>
2020-01-13  1:34 ` Christopher Lameter
2019-11-11 18:15 ` [PATCH v4 2/2] mm: avoid slub allocation while holding list_lock Shakeel Butt
2019-09-12  0:29 ` [PATCH 3/3] mm: lock slub page when listing objects Yu Zhao