From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter,
    David Rientjes, Pekka Enberg, Joonsoo Kim
Cc: Mike Galbraith, Sebastian Andrzej Siewior, Thomas Gleixner,
    Mel Gorman, Jesper Dangaard Brouer, Jann Horn, Vlastimil Babka
Subject: [PATCH v3 02/35] mm, slub: allocate private object map for debugfs listings
Date: Thu, 29 Jul 2021 15:20:59 +0200
Message-Id: <20210729132132.19691-3-vbabka@suse.cz>
In-Reply-To: <20210729132132.19691-1-vbabka@suse.cz>
References: <20210729132132.19691-1-vbabka@suse.cz>

Slub has a static, spinlock-protected bitmap for marking which objects are
on the freelist when it wants to list them, for situations where dynamically
allocating such a map could lead to recursion or locking issues, and an
on-stack bitmap would be too large.

The handlers of the debugfs files alloc_traces and free_traces also currently
use this shared bitmap, but their syscall context makes it straightforward to
allocate a private map before entering locked sections, so switch these
processing paths to use a private bitmap.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Christoph Lameter
Acked-by: Mel Gorman
---
 mm/slub.c | 44 +++++++++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 422a61d7bf5f..66795aec6e10 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -454,6 +454,18 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
 
+static void __fill_map(unsigned long *obj_map, struct kmem_cache *s,
+		       struct page *page)
+{
+	void *addr = page_address(page);
+	void *p;
+
+	bitmap_zero(obj_map, page->objects);
+
+	for (p = page->freelist; p; p = get_freepointer(s, p))
+		set_bit(__obj_to_index(s, addr, p), obj_map);
+}
+
 #if IS_ENABLED(CONFIG_KUNIT)
 static bool slab_add_kunit_errors(void)
 {
@@ -483,17 +495,11 @@ static inline bool slab_add_kunit_errors(void) { return false; }
 static unsigned long *get_map(struct kmem_cache *s, struct page *page)
 	__acquires(&object_map_lock)
 {
-	void *p;
-	void *addr = page_address(page);
-
 	VM_BUG_ON(!irqs_disabled());
 
 	spin_lock(&object_map_lock);
 
-	bitmap_zero(object_map, page->objects);
-
-	for (p = page->freelist; p; p = get_freepointer(s, p))
-		set_bit(__obj_to_index(s, addr, p), object_map);
+	__fill_map(object_map, s, page);
 
 	return object_map;
 }
@@ -4874,17 +4880,17 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 }
 
 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-		struct page *page, enum track_item alloc)
+		struct page *page, enum track_item alloc,
+		unsigned long *obj_map)
 {
 	void *addr = page_address(page);
 	void *p;
-	unsigned long *map;
 
-	map = get_map(s, page);
+	__fill_map(obj_map, s, page);
+
 	for_each_object(p, s, addr, page->objects)
-		if (!test_bit(__obj_to_index(s, addr, p), map))
+		if (!test_bit(__obj_to_index(s, addr, p), obj_map))
 			add_location(t, s, get_track(s, p, alloc));
-	put_map(map);
 }
 #endif  /* CONFIG_DEBUG_FS   */
 #endif	/* CONFIG_SLUB_DEBUG */
@@ -5811,14 +5817,21 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
 	struct loc_track *t = __seq_open_private(filep, &slab_debugfs_sops,
 						sizeof(struct loc_track));
 	struct kmem_cache *s = file_inode(filep)->i_private;
+	unsigned long *obj_map;
+
+	obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
+	if (!obj_map)
+		return -ENOMEM;
 
 	if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0)
 		alloc = TRACK_ALLOC;
 	else
 		alloc = TRACK_FREE;
 
-	if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL))
+	if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) {
+		bitmap_free(obj_map);
 		return -ENOMEM;
+	}
 
 	for_each_kmem_cache_node(s, node, n) {
 		unsigned long flags;
@@ -5829,12 +5842,13 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
 
 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, slab_list)
-			process_slab(t, s, page, alloc);
+			process_slab(t, s, page, alloc, obj_map);
 		list_for_each_entry(page, &n->full, slab_list)
-			process_slab(t, s, page, alloc);
+			process_slab(t, s, page, alloc, obj_map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
+	bitmap_free(obj_map);
 	return 0;
 }
 
-- 
2.32.0
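
[Editorial illustration, not part of the patch] The pattern the patch applies — allocate a private scratch bitmap up front in a context that may sleep, do the short locked work with it, free it afterwards, instead of funnelling every caller through one static, lock-protected map — can be sketched as a self-contained userspace C analogue. All names below (the bitmap helpers, list_lock, fill_map) are illustrative stand-ins, not the kernel API; only the bitmap_alloc()/bitmap_free() names echo the kernel helpers the patch calls.

/*
 * Userspace sketch of the "private map before the lock" pattern:
 * each caller builds its own bitmap instead of sharing a static,
 * lock-protected one, so allocation happens outside the lock and
 * callers no longer serialize on a global scratch buffer.
 */
#include <limits.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)

/* Allocate a zeroed bitmap large enough for nbits bits. */
static unsigned long *bitmap_alloc(size_t nbits)
{
	return calloc((nbits + BITS_PER_LONG - 1) / BITS_PER_LONG,
		      sizeof(unsigned long));
}

static void bitmap_free(unsigned long *map)
{
	free(map);
}

static void set_bit(size_t nr, unsigned long *map)
{
	map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* Lock protecting the (hypothetical) object lists walked below. */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pretend every second object sits on a freelist and mark it. */
static void fill_map(unsigned long *obj_map, size_t nr_objects)
{
	for (size_t i = 0; i < nr_objects; i += 2)
		set_bit(i, obj_map);
}

int main(void)
{
	size_t nr_objects = 128;
	unsigned long *obj_map;

	/* Sleepable context: allocate the private map before locking. */
	obj_map = bitmap_alloc(nr_objects);
	if (!obj_map)
		return 1;

	/* Short critical section: only the list walk needs the lock. */
	pthread_mutex_lock(&list_lock);
	fill_map(obj_map, nr_objects);
	pthread_mutex_unlock(&list_lock);

	printf("first word of map: %lx\n", obj_map[0]);
	bitmap_free(obj_map);
	return 0;
}

In the patch itself this corresponds to slab_debug_trace_open() calling bitmap_alloc(oo_objects(s->oo), GFP_KERNEL) before taking n->list_lock, while the shared object_map/object_map_lock pair is kept only for the callers that cannot allocate.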