From: Yu Zhao <yuzhao@google.com>
To: Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Kirill A . Shutemov" <kirill@shutemov.name>,
	Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Yu Zhao <yuzhao@google.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCH v4 1/2] mm: clean up validate_slab()
Date: Fri,  8 Nov 2019 12:39:57 -0700
Message-ID: <20191108193958.205102-1-yuzhao@google.com>
In-Reply-To: <20190914000743.182739-1-yuzhao@google.com>

validate_slab() doesn't need to return a value, and its checks can be
done in a single pass over the slab's objects.

There is a behavior change: before the patch, validation stops at the
first invalid free object; after the patch, it stops at the first
invalid object, whether free or in use. This shouldn't matter, because
the original behavior wasn't intentional anyway.
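
For illustration only, here is a minimal userspace sketch of the same
single-pass pattern (object_ok(), NOBJ and free_map are made-up
stand-ins for check_object(), page->objects and the freelist bitmap
built by get_map(); none of this is kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	#define NOBJ 8

	enum state { OBJ_FREE, OBJ_IN_USE };

	/* Hypothetical per-object check; a real one would verify red
	 * zones/poison for the expected state. Pretend object 5 is bad. */
	static bool object_ok(int idx, enum state expected)
	{
		(void)expected;
		return idx != 5;
	}

	static void validate(const bool *free_map)
	{
		/* One pass: pick the expected state from the bitmap,
		 * check, and stop at the first invalid object, free or
		 * in use -- instead of one pass over free objects
		 * followed by another over allocated ones. */
		for (int i = 0; i < NOBJ; i++) {
			enum state expected = free_map[i] ? OBJ_FREE :
							    OBJ_IN_USE;

			if (!object_ok(i, expected)) {
				printf("object %d failed validation\n", i);
				break;
			}
		}
	}

	int main(void)
	{
		bool free_map[NOBJ] = { true, false, true, false,
					true, false, true, false };

		validate(free_map);
		return 0;
	}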

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/slub.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b25c807a111f..6930c3febad7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4404,31 +4404,26 @@ static int count_total(struct page *page)
 #endif
 
 #ifdef CONFIG_SLUB_DEBUG
-static int validate_slab(struct kmem_cache *s, struct page *page,
+static void validate_slab(struct kmem_cache *s, struct page *page,
 						unsigned long *map)
 {
 	void *p;
 	void *addr = page_address(page);
 
-	if (!check_slab(s, page) ||
-			!on_freelist(s, page, NULL))
-		return 0;
+	if (!check_slab(s, page) || !on_freelist(s, page, NULL))
+		return;
 
 	/* Now we know that a valid freelist exists */
 	bitmap_zero(map, page->objects);
 
 	get_map(s, page, map);
 	for_each_object(p, s, addr, page->objects) {
-		if (test_bit(slab_index(p, s, addr), map))
-			if (!check_object(s, page, p, SLUB_RED_INACTIVE))
-				return 0;
-	}
+		u8 val = test_bit(slab_index(p, s, addr), map) ?
+			 SLUB_RED_INACTIVE : SLUB_RED_ACTIVE;
 
-	for_each_object(p, s, addr, page->objects)
-		if (!test_bit(slab_index(p, s, addr), map))
-			if (!check_object(s, page, p, SLUB_RED_ACTIVE))
-				return 0;
-	return 1;
+		if (!check_object(s, page, p, val))
+			break;
+	}
 }
 
 static void validate_slab_slab(struct kmem_cache *s, struct page *page,
-- 
2.24.0.rc1.363.gb1bccd3e3d-goog

