* [PATCH v3 0/7] mm: Use slab_list list_head instead of lru
@ 2019-03-14  5:31 Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 1/7] list: Add function list_rotate_to_front() Tobin C. Harding
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14  5:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Tobin C. Harding, Roman Gushchin, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Matthew Wilcox,
	linux-mm, linux-kernel

Currently the slab allocators (ab)use the struct page 'lru' list_head.
We have a list head for slab allocators to use, 'slab_list'.

During v2 review Christoph noted that the SLOB allocator was reaching
into a list_head; this version adds two patches to the front of the set
to fix that.

Clean up all three allocators by using the 'slab_list' list_head instead
of overloading the 'lru' list_head.

Patch 1 - Adds a function to rotate a list to a specified entry.

Patch 2 - Removes the code that reaches into list_head and instead uses
	  the list_head API including the newly defined function.

Patches 3-7 are unchanged from v2.

Patch 3 (v2: patch 4) - Changes the SLOB allocator to use slab_list
      	     	      	instead of lru.

Patch 4 (v2: patch 1) - Makes no code changes, adds comments to #endif
      	     	      	statements.

Patch 5 (v2: patch 2) - Use slab_list instead of lru for SLUB allocator.

Patch 6 (v2: patch 3) - Use slab_list instead of lru for SLAB allocator.

Patch 7 (v2: patch 5) - Removes the now stale comment in the page struct
      	     	      	definition.

During v2 development each patch was checked by verifying that the
object file before and after the patch was identical.  Clearly this is
no longer possible for mm/slob.o, however this work is still of use to
validate the change from lru -> slab_list.

Patch 1 was tested with a module (it creates and populates a list, then
calls list_rotate_to_front() and verifies the new order):

      https://github.com/tcharding/ktest/tree/master/list_head
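
For reference, a minimal sketch of such a test module (illustrative only
and not the code in the linked repository; all names are made up):

	#include <linux/module.h>
	#include <linux/kernel.h>
	#include <linux/list.h>

	struct item {
		int id;
		struct list_head list;
	};

	static struct item items[4];

	static int __init rotate_test_init(void)
	{
		LIST_HEAD(head);
		struct item *it;
		int i, want;

		/* Build the list 0 -> 1 -> 2 -> 3 */
		for (i = 0; i < 4; i++) {
			items[i].id = i;
			list_add_tail(&items[i].list, &head);
		}

		/* Rotate so items[2] is the new front: 2 -> 3 -> 0 -> 1 */
		list_rotate_to_front(&items[2].list, &head);

		/* Verify the new order */
		want = 2;
		list_for_each_entry(it, &head, list) {
			if (it->id != want)
				pr_err("unexpected order: got %d, want %d\n",
				       it->id, want);
			want = (want + 1) % 4;
		}

		return 0;
	}

	static void __exit rotate_test_exit(void)
	{
	}

	module_init(rotate_test_init);
	module_exit(rotate_test_exit);
	MODULE_LICENSE("GPL");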

Patch 2 was tested with another module that does some basic allocation
and freeing from a newly created slab cache:

	https://github.com/tcharding/ktest/tree/master/slab
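
Again for reference, a minimal sketch of that kind of test (illustrative
only and not the code in the linked repository):

	#include <linux/module.h>
	#include <linux/kernel.h>
	#include <linux/slab.h>

	static int __init slab_test_init(void)
	{
		struct kmem_cache *cache;
		void *objs[16];
		int i;

		/* Create a new cache of 64-byte objects */
		cache = kmem_cache_create("slab_list_test", 64, 0, 0, NULL);
		if (!cache)
			return -ENOMEM;

		/* Do some basic allocation and freeing from the cache */
		for (i = 0; i < ARRAY_SIZE(objs); i++)
			objs[i] = kmem_cache_alloc(cache, GFP_KERNEL);

		for (i = 0; i < ARRAY_SIZE(objs); i++)
			if (objs[i])
				kmem_cache_free(cache, objs[i]);

		kmem_cache_destroy(cache);
		return 0;
	}

	static void __exit slab_test_exit(void)
	{
	}

	module_init(slab_test_init);
	module_exit(slab_test_exit);
	MODULE_LICENSE("GPL");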

Tested on a kernel with this in the config:

	CONFIG_SLOB=y
	CONFIG_SLAB_MERGE_DEFAULT=y


Changes since v2:

 - Add list_rotate_to_front().
 - Fix slob to use list_head API.
 - Re-order patches to put the list.h changes up front.
 - Add acks from Christoph.

Changes since v1:

 - Verify object files are the same before and after the patch set is
   applied (suggested by Matthew).
 - Add extra explanation to the commit logs explaining why these changes
   are safe to make (suggested by Roman).
 - Remove stale comment (thanks Willy).

thanks,
Tobin.


Tobin C. Harding (7):
  list: Add function list_rotate_to_front()
  slob: Respect list_head abstraction layer
  slob: Use slab_list instead of lru
  slub: Add comments to endif pre-processor macros
  slub: Use slab_list instead of lru
  slab: Use slab_list instead of lru
  mm: Remove stale comment from page struct

 include/linux/list.h     | 18 ++++++++++++
 include/linux/mm_types.h |  2 +-
 mm/slab.c                | 49 ++++++++++++++++----------------
 mm/slob.c                | 32 +++++++++++++--------
 mm/slub.c                | 60 ++++++++++++++++++++--------------------
 5 files changed, 94 insertions(+), 67 deletions(-)

-- 
2.21.0



* [PATCH v3 1/7] list: Add function list_rotate_to_front()
  2019-03-14  5:31 [PATCH v3 0/7] mm: Use slab_list list_head instead of lru Tobin C. Harding
@ 2019-03-14  5:31 ` Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 2/7] slob: Respect list_head abstraction layer Tobin C. Harding
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14  5:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Tobin C. Harding, Roman Gushchin, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Matthew Wilcox,
	linux-mm, linux-kernel

Currently, if we wish to rotate a list until a specific item is at the
front of the list we can call list_move_tail(head, list).  Note that the
arguments are reversed compared to the usual use of list_move_tail(list,
head).  This is a hack: it depends on the developer knowing how the
list_head operates internally, which violates the layer of abstraction
offered by the list_head.  It is also not intuitive, so the next
developer to come along must study list.h in order to fully understand
what is meant by the call; while this is 'good for' that developer, it
makes reading the code harder.  If there are in-tree users for this
operation we should have an appropriately named function that does it.

By grep'ing the tree for list_move_tail() and list_tail() and attempting
to guess the argument order from the names, it seems there is currently
only one place in the tree that does this - the SLOB allocator.

Add function list_rotate_to_front() to rotate a list until the specified
item is at the front of the list.
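
To illustrate (hypothetical caller code, not part of this series): with
a list reading A -> B -> C -> D and a cursor 'c' at entry C, rotating C
to the front previously required the reversed-argument hack and can now
be written with the new helper:

	/* Before: relies on knowing list_move_tail()'s internals. */
	list_move_tail(head, &c->list);

	/* After: the intent is clear from the function name. */
	if (!list_is_first(&c->list, head))
		list_rotate_to_front(&c->list, head);

	/* Either way the list now reads C -> D -> A -> B. */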

Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 include/linux/list.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/list.h b/include/linux/list.h
index 79626b5ab36c..8ead813e7f1c 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -270,6 +270,24 @@ static inline void list_rotate_left(struct list_head *head)
 	}
 }
 
+/**
+ * list_rotate_to_front() - Rotate list to specific item.
+ * @list: The desired new front of the list.
+ * @head: The head of the list.
+ *
+ * Rotates list so that @list becomes the new front of the list.
+ */
+static inline void list_rotate_to_front(struct list_head *list,
+					struct list_head *head)
+{
+	/*
+	 * Deletes the list head from the list denoted by @head and
+	 * places it as the tail of @list, this effectively rotates the
+	 * list so that @list is at the front.
+	 */
+	list_move_tail(head, list);
+}
+
 /**
  * list_is_singular - tests whether a list has just one entry.
  * @head: the list to test.
-- 
2.21.0



* [PATCH v3 2/7] slob: Respect list_head abstraction layer
  2019-03-14  5:31 [PATCH v3 0/7] mm: Use slab_list list_head instead of lru Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 1/7] list: Add function list_rotate_to_front() Tobin C. Harding
@ 2019-03-14  5:31 ` Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 3/7] slob: Use slab_list instead of lru Tobin C. Harding
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14  5:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Tobin C. Harding, Roman Gushchin, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Matthew Wilcox,
	linux-mm, linux-kernel

Currently we reach inside the list_head.  This is a violation of the
layer of abstraction provided by the list_head.  It makes the code
fragile.  More importantly it makes the code wicked hard to understand.

The code logic is based on the page in which an allocation was made: we
want to modify the slob_list we are working on so that this page is at
the front.  We already have a function to check if an entry is at the
front of the list.  Recently a function was added to list.h to do the
list rotation.  We can use these two functions to reduce line count,
reduce code fragility, and reduce the cognitive load required to read
the code.

Use list_head functions to interact with lists thereby maintaining the
abstraction provided by the list_head structure.

Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---

I verified the comment pointing to Knuth; the page number may be out of
date, but with this comment I was able to find the text that discusses
this, so I left the comment as is (after fixing style).

 mm/slob.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 307c2c9feb44..39ad9217ffea 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -268,8 +268,7 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align)
  */
 static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 {
-	struct page *sp;
-	struct list_head *prev;
+	struct page *sp, *prev, *next;
 	struct list_head *slob_list;
 	slob_t *b = NULL;
 	unsigned long flags;
@@ -296,18 +295,27 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		if (sp->units < SLOB_UNITS(size))
 			continue;
 
+		/*
+		 * Cache previous entry because slob_page_alloc() may
+		 * remove sp from slob_list.
+		 */
+		prev = list_prev_entry(sp, lru);
+
 		/* Attempt to alloc */
-		prev = sp->lru.prev;
 		b = slob_page_alloc(sp, size, align);
 		if (!b)
 			continue;
 
-		/* Improve fragment distribution and reduce our average
+		next = list_next_entry(prev, lru); /* This may or may not be sp */
+
+		/*
+		 * Improve fragment distribution and reduce our average
 		 * search time by starting our next search here. (see
-		 * Knuth vol 1, sec 2.5, pg 449) */
-		if (prev != slob_list->prev &&
-				slob_list->next != prev->next)
-			list_move_tail(slob_list, prev->next);
+		 * Knuth vol 1, sec 2.5, pg 449)
+		 */
+		if (!list_is_first(&next->lru, slob_list))
+			list_rotate_to_front(&next->lru, slob_list);
+
 		break;
 	}
 	spin_unlock_irqrestore(&slob_lock, flags);
-- 
2.21.0



* [PATCH v3 3/7] slob: Use slab_list instead of lru
  2019-03-14  5:31 [PATCH v3 0/7] mm: Use slab_list list_head instead of lru Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 1/7] list: Add function list_rotate_to_front() Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 2/7] slob: Respect list_head abstraction layer Tobin C. Harding
@ 2019-03-14  5:31 ` Tobin C. Harding
  2019-03-14 18:52   ` Roman Gushchin
  2019-03-14  5:31 ` [PATCH v3 4/7] slub: Add comments to endif pre-processor macros Tobin C. Harding
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14  5:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Tobin C. Harding, Roman Gushchin, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Matthew Wilcox,
	linux-mm, linux-kernel

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list_head in the page structure (slab_list) that can be used for
this purpose.  Doing so makes the code cleaner since we are not
overloading the lru list.

The slab_list is part of a union within the page struct (included here
stripped down):

	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			...
		};
		struct {
			dma_addr_t dma_addr;
		};
		struct {	/* slab, slob and slub */
			union {
				struct list_head slab_list;
				struct {	/* Partial pages */
					struct page *next;
					int pages;	/* Nr of pages left */
					int pobjects;	/* Approximate count */
				};
			};
		...

Here we see that slab_list and lru are the same bits.  We can verify
that this change is safe to do by examining the object file produced from
slob.c before and after this patch is applied.

Steps taken to verify:

 1. checkout current tip of Linus' tree

    commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")

 2. configure and build (select SLOB allocator)

    CONFIG_SLOB=y
    CONFIG_SLAB_MERGE_DEFAULT=y

 3. disassemble object file: `objdump -dr mm/slob.o > before.s`
 4. apply patch
 5. build
 6. disassemble object file: `objdump -dr mm/slob.o > after.s`
 7. diff before.s after.s

Use slab_list list_head instead of the lru list_head for maintaining
lists of slabs.

Reviewed-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 mm/slob.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 39ad9217ffea..94486c32e0ff 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp)
 
 static void set_slob_page_free(struct page *sp, struct list_head *list)
 {
-	list_add(&sp->lru, list);
+	list_add(&sp->slab_list, list);
 	__SetPageSlobFree(sp);
 }
 
 static inline void clear_slob_page_free(struct page *sp)
 {
-	list_del(&sp->lru);
+	list_del(&sp->slab_list);
 	__ClearPageSlobFree(sp);
 }
 
@@ -282,7 +282,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 
 	spin_lock_irqsave(&slob_lock, flags);
 	/* Iterate through each partially free page, try to find room */
-	list_for_each_entry(sp, slob_list, lru) {
+	list_for_each_entry(sp, slob_list, slab_list) {
 #ifdef CONFIG_NUMA
 		/*
 		 * If there's a node specification, search for a partial
@@ -331,7 +331,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		spin_lock_irqsave(&slob_lock, flags);
 		sp->units = SLOB_UNITS(PAGE_SIZE);
 		sp->freelist = b;
-		INIT_LIST_HEAD(&sp->lru);
+		INIT_LIST_HEAD(&sp->slab_list);
 		set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
 		set_slob_page_free(sp, slob_list);
 		b = slob_page_alloc(sp, size, align);
-- 
2.21.0



* [PATCH v3 4/7] slub: Add comments to endif pre-processor macros
  2019-03-14  5:31 [PATCH v3 0/7] mm: Use slab_list list_head instead of lru Tobin C. Harding
                   ` (2 preceding siblings ...)
  2019-03-14  5:31 ` [PATCH v3 3/7] slob: Use slab_list instead of lru Tobin C. Harding
@ 2019-03-14  5:31 ` Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 5/7] slub: Use slab_list instead of lru Tobin C. Harding
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14  5:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Tobin C. Harding, Roman Gushchin, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Matthew Wilcox,
	linux-mm, linux-kernel

The SLUB allocator makes heavy use of ifdef/endif pre-processor macros.
The pairing of these statements is at times hard to follow, e.g. if the
pair are further than a screen apart or if there are nested pairs.  We
can reduce the cognitive load by adding a comment to the endif statement
of the form

       #ifdef CONFIG_FOO
       ...
       #endif /* CONFIG_FOO */

Add comments to endif pre-processor macros if the ifdef/endif pair is
not immediately apparent.

Reviewed-by: Roman Gushchin <guro@fb.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 mm/slub.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1b08fbcb7e61..b282e22885cd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1951,7 +1951,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 			}
 		}
 	} while (read_mems_allowed_retry(cpuset_mems_cookie));
-#endif
+#endif	/* CONFIG_NUMA */
 	return NULL;
 }
 
@@ -2249,7 +2249,7 @@ static void unfreeze_partials(struct kmem_cache *s,
 		discard_slab(s, page);
 		stat(s, FREE_SLAB);
 	}
-#endif
+#endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }
 
 /*
@@ -2308,7 +2308,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		local_irq_restore(flags);
 	}
 	preempt_enable();
-#endif
+#endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }
 
 static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
@@ -2813,7 +2813,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif
+#endif	/* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -3845,7 +3845,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif
+#endif	/* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4063,7 +4063,7 @@ void __kmemcg_cache_deactivate(struct kmem_cache *s)
 	 */
 	slab_deactivate_memcg_cache_rcu_sched(s, kmemcg_cache_deact_after_rcu);
 }
-#endif
+#endif	/* CONFIG_MEMCG */
 
 static int slab_mem_going_offline_callback(void *arg)
 {
@@ -4696,7 +4696,7 @@ static int list_locations(struct kmem_cache *s, char *buf,
 		len += sprintf(buf, "No data\n");
 	return len;
 }
-#endif
+#endif	/* CONFIG_SLUB_DEBUG */
 
 #ifdef SLUB_RESILIENCY_TEST
 static void __init resiliency_test(void)
@@ -4756,7 +4756,7 @@ static void __init resiliency_test(void)
 #ifdef CONFIG_SYSFS
 static void resiliency_test(void) {};
 #endif
-#endif
+#endif	/* SLUB_RESILIENCY_TEST */
 
 #ifdef CONFIG_SYSFS
 enum slab_stat_type {
@@ -5413,7 +5413,7 @@ STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
 STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
 STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
-#endif
+#endif	/* CONFIG_SLUB_STATS */
 
 static struct attribute *slab_attrs[] = {
 	&slab_size_attr.attr,
@@ -5614,7 +5614,7 @@ static void memcg_propagate_slab_attrs(struct kmem_cache *s)
 
 	if (buffer)
 		free_page((unsigned long)buffer);
-#endif
+#endif	/* CONFIG_MEMCG */
 }
 
 static void kmem_cache_release(struct kobject *k)
-- 
2.21.0



* [PATCH v3 5/7] slub: Use slab_list instead of lru
  2019-03-14  5:31 [PATCH v3 0/7] mm: Use slab_list list_head instead of lru Tobin C. Harding
                   ` (3 preceding siblings ...)
  2019-03-14  5:31 ` [PATCH v3 4/7] slub: Add comments to endif pre-processor macros Tobin C. Harding
@ 2019-03-14  5:31 ` Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 6/7] slab: " Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 7/7] mm: Remove stale comment from page struct Tobin C. Harding
  6 siblings, 0 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14  5:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Tobin C. Harding, Roman Gushchin, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Matthew Wilcox,
	linux-mm, linux-kernel

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list in the page structure (slab_list) that can be used for this
purpose.  Doing so makes the code cleaner since we are not overloading
the lru list.

Use the slab_list instead of the lru list for maintaining lists of
slabs.

Reviewed-by: Roman Gushchin <guro@fb.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 mm/slub.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b282e22885cd..d692b5e0163d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1023,7 +1023,7 @@ static void add_full(struct kmem_cache *s,
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_add(&page->lru, &n->full);
+	list_add(&page->slab_list, &n->full);
 }
 
 static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page)
@@ -1032,7 +1032,7 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 }
 
 /* Tracking of the number of slabs for debugging purposes */
@@ -1773,9 +1773,9 @@ __add_partial(struct kmem_cache_node *n, struct page *page, int tail)
 {
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
-		list_add_tail(&page->lru, &n->partial);
+		list_add_tail(&page->slab_list, &n->partial);
 	else
-		list_add(&page->lru, &n->partial);
+		list_add(&page->slab_list, &n->partial);
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
@@ -1789,7 +1789,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 					struct page *page)
 {
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	n->nr_partial--;
 }
 
@@ -1863,7 +1863,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		return NULL;
 
 	spin_lock(&n->list_lock);
-	list_for_each_entry_safe(page, page2, &n->partial, lru) {
+	list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
 		void *t;
 
 		if (!pfmemalloc_match(page, flags))
@@ -2407,7 +2407,7 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry(page, &n->partial, lru)
+	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
@@ -3702,10 +3702,10 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 
 	BUG_ON(irqs_disabled());
 	spin_lock_irq(&n->list_lock);
-	list_for_each_entry_safe(page, h, &n->partial, lru) {
+	list_for_each_entry_safe(page, h, &n->partial, slab_list) {
 		if (!page->inuse) {
 			remove_partial(n, page);
-			list_add(&page->lru, &discard);
+			list_add(&page->slab_list, &discard);
 		} else {
 			list_slab_objects(s, page,
 			"Objects remaining in %s on __kmem_cache_shutdown()");
@@ -3713,7 +3713,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	}
 	spin_unlock_irq(&n->list_lock);
 
-	list_for_each_entry_safe(page, h, &discard, lru)
+	list_for_each_entry_safe(page, h, &discard, slab_list)
 		discard_slab(s, page);
 }
 
@@ -3993,7 +3993,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		 * Note that concurrent frees may occur while we hold the
 		 * list_lock. page->inuse here is the upper limit.
 		 */
-		list_for_each_entry_safe(page, t, &n->partial, lru) {
+		list_for_each_entry_safe(page, t, &n->partial, slab_list) {
 			int free = page->objects - page->inuse;
 
 			/* Do not reread page->inuse */
@@ -4003,10 +4003,10 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 			BUG_ON(free <= 0);
 
 			if (free == page->objects) {
-				list_move(&page->lru, &discard);
+				list_move(&page->slab_list, &discard);
 				n->nr_partial--;
 			} else if (free <= SHRINK_PROMOTE_MAX)
-				list_move(&page->lru, promote + free - 1);
+				list_move(&page->slab_list, promote + free - 1);
 		}
 
 		/*
@@ -4019,7 +4019,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		spin_unlock_irqrestore(&n->list_lock, flags);
 
 		/* Release empty slabs */
-		list_for_each_entry_safe(page, t, &discard, lru)
+		list_for_each_entry_safe(page, t, &discard, slab_list)
 			discard_slab(s, page);
 
 		if (slabs_node(s, node))
@@ -4211,11 +4211,11 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
 	for_each_kmem_cache_node(s, node, n) {
 		struct page *p;
 
-		list_for_each_entry(p, &n->partial, lru)
+		list_for_each_entry(p, &n->partial, slab_list)
 			p->slab_cache = s;
 
 #ifdef CONFIG_SLUB_DEBUG
-		list_for_each_entry(p, &n->full, lru)
+		list_for_each_entry(p, &n->full, slab_list)
 			p->slab_cache = s;
 #endif
 	}
@@ -4432,7 +4432,7 @@ static int validate_slab_node(struct kmem_cache *s,
 
 	spin_lock_irqsave(&n->list_lock, flags);
 
-	list_for_each_entry(page, &n->partial, lru) {
+	list_for_each_entry(page, &n->partial, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4443,7 +4443,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
 
-	list_for_each_entry(page, &n->full, lru) {
+	list_for_each_entry(page, &n->full, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4639,9 +4639,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 			continue;
 
 		spin_lock_irqsave(&n->list_lock, flags);
-		list_for_each_entry(page, &n->partial, lru)
+		list_for_each_entry(page, &n->partial, slab_list)
 			process_slab(&t, s, page, alloc, map);
-		list_for_each_entry(page, &n->full, lru)
+		list_for_each_entry(page, &n->full, slab_list)
 			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
-- 
2.21.0



* [PATCH v3 6/7] slab: Use slab_list instead of lru
  2019-03-14  5:31 [PATCH v3 0/7] mm: Use slab_list list_head instead of lru Tobin C. Harding
                   ` (4 preceding siblings ...)
  2019-03-14  5:31 ` [PATCH v3 5/7] slub: Use slab_list instead of lru Tobin C. Harding
@ 2019-03-14  5:31 ` Tobin C. Harding
  2019-03-14  5:31 ` [PATCH v3 7/7] mm: Remove stale comment from page struct Tobin C. Harding
  6 siblings, 0 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14  5:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Tobin C. Harding, Roman Gushchin, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Matthew Wilcox,
	linux-mm, linux-kernel

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list in the page structure (slab_list) that can be used for this
purpose.  Doing so makes the code cleaner since we are not overloading
the lru list.

Use the slab_list instead of the lru list for maintaining lists of
slabs.

Reviewed-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 mm/slab.c | 49 +++++++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 28652e4218e0..09cc64ef9613 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1710,8 +1710,8 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
 {
 	struct page *page, *n;
 
-	list_for_each_entry_safe(page, n, list, lru) {
-		list_del(&page->lru);
+	list_for_each_entry_safe(page, n, list, slab_list) {
+		list_del(&page->slab_list);
 		slab_destroy(cachep, page);
 	}
 }
@@ -2265,8 +2265,8 @@ static int drain_freelist(struct kmem_cache *cache,
 			goto out;
 		}
 
-		page = list_entry(p, struct page, lru);
-		list_del(&page->lru);
+		page = list_entry(p, struct page, slab_list);
+		list_del(&page->slab_list);
 		n->free_slabs--;
 		n->total_slabs--;
 		/*
@@ -2726,13 +2726,13 @@ static void cache_grow_end(struct kmem_cache *cachep, struct page *page)
 	if (!page)
 		return;
 
-	INIT_LIST_HEAD(&page->lru);
+	INIT_LIST_HEAD(&page->slab_list);
 	n = get_node(cachep, page_to_nid(page));
 
 	spin_lock(&n->list_lock);
 	n->total_slabs++;
 	if (!page->active) {
-		list_add_tail(&page->lru, &(n->slabs_free));
+		list_add_tail(&page->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
 		fixup_slab_list(cachep, n, page, &list);
@@ -2841,9 +2841,9 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 				void **list)
 {
 	/* move slabp to correct slabp list: */
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	if (page->active == cachep->num) {
-		list_add(&page->lru, &n->slabs_full);
+		list_add(&page->slab_list, &n->slabs_full);
 		if (OBJFREELIST_SLAB(cachep)) {
 #if DEBUG
 			/* Poisoning will be done without holding the lock */
@@ -2857,7 +2857,7 @@ static inline void fixup_slab_list(struct kmem_cache *cachep,
 			page->freelist = NULL;
 		}
 	} else
-		list_add(&page->lru, &n->slabs_partial);
+		list_add(&page->slab_list, &n->slabs_partial);
 }
 
 /* Try to find non-pfmemalloc slab if needed */
@@ -2880,20 +2880,20 @@ static noinline struct page *get_valid_first_slab(struct kmem_cache_node *n,
 	}
 
 	/* Move pfmemalloc slab to the end of list to speed up next search */
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	if (!page->active) {
-		list_add_tail(&page->lru, &n->slabs_free);
+		list_add_tail(&page->slab_list, &n->slabs_free);
 		n->free_slabs++;
 	} else
-		list_add_tail(&page->lru, &n->slabs_partial);
+		list_add_tail(&page->slab_list, &n->slabs_partial);
 
-	list_for_each_entry(page, &n->slabs_partial, lru) {
+	list_for_each_entry(page, &n->slabs_partial, slab_list) {
 		if (!PageSlabPfmemalloc(page))
 			return page;
 	}
 
 	n->free_touched = 1;
-	list_for_each_entry(page, &n->slabs_free, lru) {
+	list_for_each_entry(page, &n->slabs_free, slab_list) {
 		if (!PageSlabPfmemalloc(page)) {
 			n->free_slabs--;
 			return page;
@@ -2908,11 +2908,12 @@ static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc)
 	struct page *page;
 
 	assert_spin_locked(&n->list_lock);
-	page = list_first_entry_or_null(&n->slabs_partial, struct page, lru);
+	page = list_first_entry_or_null(&n->slabs_partial, struct page,
+					slab_list);
 	if (!page) {
 		n->free_touched = 1;
 		page = list_first_entry_or_null(&n->slabs_free, struct page,
-						lru);
+						slab_list);
 		if (page)
 			n->free_slabs--;
 	}
@@ -3413,29 +3414,29 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
 		objp = objpp[i];
 
 		page = virt_to_head_page(objp);
-		list_del(&page->lru);
+		list_del(&page->slab_list);
 		check_spinlock_acquired_node(cachep, node);
 		slab_put_obj(cachep, page, objp);
 		STATS_DEC_ACTIVE(cachep);
 
 		/* fixup slab chains */
 		if (page->active == 0) {
-			list_add(&page->lru, &n->slabs_free);
+			list_add(&page->slab_list, &n->slabs_free);
 			n->free_slabs++;
 		} else {
 			/* Unconditionally move a slab to the end of the
 			 * partial list on free - maximum time for the
 			 * other objects to be freed, too.
 			 */
-			list_add_tail(&page->lru, &n->slabs_partial);
+			list_add_tail(&page->slab_list, &n->slabs_partial);
 		}
 	}
 
 	while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) {
 		n->free_objects -= cachep->num;
 
-		page = list_last_entry(&n->slabs_free, struct page, lru);
-		list_move(&page->lru, list);
+		page = list_last_entry(&n->slabs_free, struct page, slab_list);
+		list_move(&page->slab_list, list);
 		n->free_slabs--;
 		n->total_slabs--;
 	}
@@ -3473,7 +3474,7 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 		int i = 0;
 		struct page *page;
 
-		list_for_each_entry(page, &n->slabs_free, lru) {
+		list_for_each_entry(page, &n->slabs_free, slab_list) {
 			BUG_ON(page->active);
 
 			i++;
@@ -4336,9 +4337,9 @@ static int leaks_show(struct seq_file *m, void *p)
 			check_irq_on();
 			spin_lock_irq(&n->list_lock);
 
-			list_for_each_entry(page, &n->slabs_full, lru)
+			list_for_each_entry(page, &n->slabs_full, slab_list)
 				handle_slab(x, cachep, page);
-			list_for_each_entry(page, &n->slabs_partial, lru)
+			list_for_each_entry(page, &n->slabs_partial, slab_list)
 				handle_slab(x, cachep, page);
 			spin_unlock_irq(&n->list_lock);
 		}
-- 
2.21.0



* [PATCH v3 7/7] mm: Remove stale comment from page struct
  2019-03-14  5:31 [PATCH v3 0/7] mm: Use slab_list list_head instead of lru Tobin C. Harding
                   ` (5 preceding siblings ...)
  2019-03-14  5:31 ` [PATCH v3 6/7] slab: " Tobin C. Harding
@ 2019-03-14  5:31 ` Tobin C. Harding
  6 siblings, 0 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14  5:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Tobin C. Harding, Roman Gushchin, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Matthew Wilcox,
	linux-mm, linux-kernel

We now use the slab_list list_head instead of the lru list_head.  This
comment has become stale.

Remove stale comment from page struct slab_list list_head.

Reviewed-by: Roman Gushchin <guro@fb.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 include/linux/mm_types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7eade9132f02..63a34e3d7c29 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -103,7 +103,7 @@ struct page {
 		};
 		struct {	/* slab, slob and slub */
 			union {
-				struct list_head slab_list;	/* uses lru */
+				struct list_head slab_list;
 				struct {	/* Partial pages */
 					struct page *next;
 #ifdef CONFIG_64BIT
-- 
2.21.0



* Re: [PATCH v3 3/7] slob: Use slab_list instead of lru
  2019-03-14  5:31 ` [PATCH v3 3/7] slob: Use slab_list instead of lru Tobin C. Harding
@ 2019-03-14 18:52   ` Roman Gushchin
  2019-03-14 20:38     ` Tobin C. Harding
  2019-03-14 20:47     ` Tobin C. Harding
  0 siblings, 2 replies; 12+ messages in thread
From: Roman Gushchin @ 2019-03-14 18:52 UTC (permalink / raw)
  To: Tobin C. Harding
  Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Matthew Wilcox, linux-mm, linux-kernel

On Thu, Mar 14, 2019 at 04:31:31PM +1100, Tobin C. Harding wrote:
> Currently we use the page->lru list for maintaining lists of slabs.  We
> have a list_head in the page structure (slab_list) that can be used for
> this purpose.  Doing so makes the code cleaner since we are not
> overloading the lru list.
> 
> The slab_list is part of a union within the page struct (included here
> stripped down):
> 
> 	union {
> 		struct {	/* Page cache and anonymous pages */
> 			struct list_head lru;
> 			...
> 		};
> 		struct {
> 			dma_addr_t dma_addr;
> 		};
> 		struct {	/* slab, slob and slub */
> 			union {
> 				struct list_head slab_list;
> 				struct {	/* Partial pages */
> 					struct page *next;
> 					int pages;	/* Nr of pages left */
> 					int pobjects;	/* Approximate count */
> 				};
> 			};
> 		...
> 
> Here we see that slab_list and lru are the same bits.  We can verify
> that this change is safe to do by examining the object file produced from
> slob.c before and after this patch is applied.
> 
> Steps taken to verify:
> 
>  1. checkout current tip of Linus' tree
> 
>     commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")
> 
>  2. configure and build (select SLOB allocator)
> 
>     CONFIG_SLOB=y
>     CONFIG_SLAB_MERGE_DEFAULT=y
> 
>  3. dissasemble object file `objdump -dr mm/slub.o > before.s
>  4. apply patch
>  5. build
>  6. dissasemble object file `objdump -dr mm/slub.o > after.s
>  7. diff before.s after.s
> 
> Use slab_list list_head instead of the lru list_head for maintaining
> lists of slabs.
> 
> Reviewed-by: Roman Gushchin <guro@fb.com>
> Signed-off-by: Tobin C. Harding <tobin@kernel.org>
> ---
>  mm/slob.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/slob.c b/mm/slob.c
> index 39ad9217ffea..94486c32e0ff 100644
> --- a/mm/slob.c
> +++ b/mm/slob.c
> @@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp)
>  
>  static void set_slob_page_free(struct page *sp, struct list_head *list)
>  {
> -	list_add(&sp->lru, list);
> +	list_add(&sp->slab_list, list);
>  	__SetPageSlobFree(sp);
>  }
>  
>  static inline void clear_slob_page_free(struct page *sp)
>  {
> -	list_del(&sp->lru);
> +	list_del(&sp->slab_list);
>  	__ClearPageSlobFree(sp);
>  }
>  
> @@ -282,7 +282,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
>  
>  	spin_lock_irqsave(&slob_lock, flags);
>  	/* Iterate through each partially free page, try to find room */
> -	list_for_each_entry(sp, slob_list, lru) {
> +	list_for_each_entry(sp, slob_list, slab_list) {
>  #ifdef CONFIG_NUMA
>  		/*
>  		 * If there's a node specification, search for a partial


Hi Tobin!

How about list_rotate_to_front(&next->lru, slob_list) from the previous patch?
Shouldn't it use slab_list instead of lru too?

Thanks!


* Re: [PATCH v3 3/7] slob: Use slab_list instead of lru
  2019-03-14 18:52   ` Roman Gushchin
@ 2019-03-14 20:38     ` Tobin C. Harding
  2019-03-14 20:42       ` Tobin C. Harding
  2019-03-14 20:47     ` Tobin C. Harding
  1 sibling, 1 reply; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14 20:38 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Tobin C. Harding, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm,
	linux-kernel

On Thu, Mar 14, 2019 at 06:52:25PM +0000, Roman Gushchin wrote:
> On Thu, Mar 14, 2019 at 04:31:31PM +1100, Tobin C. Harding wrote:
> > Currently we use the page->lru list for maintaining lists of slabs.  We
> > have a list_head in the page structure (slab_list) that can be used for
> > this purpose.  Doing so makes the code cleaner since we are not
> > overloading the lru list.
> > 
> > The slab_list is part of a union within the page struct (included here
> > stripped down):
> > 
> > 	union {
> > 		struct {	/* Page cache and anonymous pages */
> > 			struct list_head lru;
> > 			...
> > 		};
> > 		struct {
> > 			dma_addr_t dma_addr;
> > 		};
> > 		struct {	/* slab, slob and slub */
> > 			union {
> > 				struct list_head slab_list;
> > 				struct {	/* Partial pages */
> > 					struct page *next;
> > 					int pages;	/* Nr of pages left */
> > 					int pobjects;	/* Approximate count */
> > 				};
> > 			};
> > 		...
> > 
> > Here we see that slab_list and lru are the same bits.  We can verify
> > that this change is safe to do by examining the object file produced from
> > slob.c before and after this patch is applied.
> > 
> > Steps taken to verify:
> > 
> >  1. checkout current tip of Linus' tree
> > 
> >     commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")
> > 
> >  2. configure and build (select SLOB allocator)
> > 
> >     CONFIG_SLOB=y
> >     CONFIG_SLAB_MERGE_DEFAULT=y
> > 
> >  3. dissasemble object file `objdump -dr mm/slub.o > before.s
> >  4. apply patch
> >  5. build
> >  6. dissasemble object file `objdump -dr mm/slub.o > after.s
> >  7. diff before.s after.s
> > 
> > Use slab_list list_head instead of the lru list_head for maintaining
> > lists of slabs.
> > 
> > Reviewed-by: Roman Gushchin <guro@fb.com>
> > Signed-off-by: Tobin C. Harding <tobin@kernel.org>
> > ---
> >  mm/slob.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/mm/slob.c b/mm/slob.c
> > index 39ad9217ffea..94486c32e0ff 100644
> > --- a/mm/slob.c
> > +++ b/mm/slob.c
> > @@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp)
> >  
> >  static void set_slob_page_free(struct page *sp, struct list_head *list)
> >  {
> > -	list_add(&sp->lru, list);
> > +	list_add(&sp->slab_list, list);
> >  	__SetPageSlobFree(sp);
> >  }
> >  
> >  static inline void clear_slob_page_free(struct page *sp)
> >  {
> > -	list_del(&sp->lru);
> > +	list_del(&sp->slab_list);
> >  	__ClearPageSlobFree(sp);
> >  }
> >  
> > @@ -282,7 +282,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
> >  
> >  	spin_lock_irqsave(&slob_lock, flags);
> >  	/* Iterate through each partially free page, try to find room */
> > -	list_for_each_entry(sp, slob_list, lru) {
> > +	list_for_each_entry(sp, slob_list, slab_list) {
> >  #ifdef CONFIG_NUMA
> >  		/*
> >  		 * If there's a node specification, search for a partial
> 
> 
> Hi Tobin!
> 
> How about list_rotate_to_front(&next->lru, slob_list) from the previous patch?
> Shouldn't it use slab_list instead of lru too?

Thanks Roman, my mistake - one too many rebases.  I hate when I drop the
ball like this.

Tobin.


* Re: [PATCH v3 3/7] slob: Use slab_list instead of lru
  2019-03-14 20:38     ` Tobin C. Harding
@ 2019-03-14 20:42       ` Tobin C. Harding
  0 siblings, 0 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14 20:42 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Tobin C. Harding, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm,
	linux-kernel

On Fri, Mar 15, 2019 at 07:38:09AM +1100, Tobin C. Harding wrote:
> On Thu, Mar 14, 2019 at 06:52:25PM +0000, Roman Gushchin wrote:
> > On Thu, Mar 14, 2019 at 04:31:31PM +1100, Tobin C. Harding wrote:
> > > Currently we use the page->lru list for maintaining lists of slabs.  We
> > > have a list_head in the page structure (slab_list) that can be used for
> > > this purpose.  Doing so makes the code cleaner since we are not
> > > overloading the lru list.
> > > 
> > > The slab_list is part of a union within the page struct (included here
> > > stripped down):
> > > 
> > > 	union {
> > > 		struct {	/* Page cache and anonymous pages */
> > > 			struct list_head lru;
> > > 			...
> > > 		};
> > > 		struct {
> > > 			dma_addr_t dma_addr;
> > > 		};
> > > 		struct {	/* slab, slob and slub */
> > > 			union {
> > > 				struct list_head slab_list;
> > > 				struct {	/* Partial pages */
> > > 					struct page *next;
> > > 					int pages;	/* Nr of pages left */
> > > 					int pobjects;	/* Approximate count */
> > > 				};
> > > 			};
> > > 		...
> > > 
> > > Here we see that slab_list and lru are the same bits.  We can verify
> > > that this change is safe to do by examining the object file produced from
> > > slob.c before and after this patch is applied.
> > > 
> > > Steps taken to verify:
> > > 
> > >  1. checkout current tip of Linus' tree
> > > 
> > >     commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")
> > > 
> > >  2. configure and build (select SLOB allocator)
> > > 
> > >     CONFIG_SLOB=y
> > >     CONFIG_SLAB_MERGE_DEFAULT=y
> > > 
> > >  3. dissasemble object file `objdump -dr mm/slub.o > before.s
> > >  4. apply patch
> > >  5. build
> > >  6. dissasemble object file `objdump -dr mm/slub.o > after.s
> > >  7. diff before.s after.s
> > > 
> > > Use slab_list list_head instead of the lru list_head for maintaining
> > > lists of slabs.
> > > 
> > > Reviewed-by: Roman Gushchin <guro@fb.com>
> > > Signed-off-by: Tobin C. Harding <tobin@kernel.org>
> > > ---
> > >  mm/slob.c | 8 ++++----
> > >  1 file changed, 4 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/mm/slob.c b/mm/slob.c
> > > index 39ad9217ffea..94486c32e0ff 100644
> > > --- a/mm/slob.c
> > > +++ b/mm/slob.c
> > > @@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp)
> > >  
> > >  static void set_slob_page_free(struct page *sp, struct list_head *list)
> > >  {
> > > -	list_add(&sp->lru, list);
> > > +	list_add(&sp->slab_list, list);
> > >  	__SetPageSlobFree(sp);
> > >  }
> > >  
> > >  static inline void clear_slob_page_free(struct page *sp)
> > >  {
> > > -	list_del(&sp->lru);
> > > +	list_del(&sp->slab_list);
> > >  	__ClearPageSlobFree(sp);
> > >  }
> > >  
> > > @@ -282,7 +282,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
> > >  
> > >  	spin_lock_irqsave(&slob_lock, flags);
> > >  	/* Iterate through each partially free page, try to find room */
> > > -	list_for_each_entry(sp, slob_list, lru) {
> > > +	list_for_each_entry(sp, slob_list, slab_list) {
> > >  #ifdef CONFIG_NUMA
> > >  		/*
> > >  		 * If there's a node specification, search for a partial
> > 
> > 
> > Hi Tobin!
> > 
> > How about list_rotate_to_front(&next->lru, slob_list) from the previous patch?
> > Shouldn't it use slab_list instead of lru too?
> 
> Thanks Roman, my mistake - one too many rebases.  I hate when I drop the
> ball like this.

Oh that's right, it's a union so it still builds and boots - I was
thinking that I had rebased and not built.  I guess that's just a fumble
instead of a complete ball drop.

Thanks for the careful review all the same.

	Tobin


* Re: [PATCH v3 3/7] slob: Use slab_list instead of lru
  2019-03-14 18:52   ` Roman Gushchin
  2019-03-14 20:38     ` Tobin C. Harding
@ 2019-03-14 20:47     ` Tobin C. Harding
  1 sibling, 0 replies; 12+ messages in thread
From: Tobin C. Harding @ 2019-03-14 20:47 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Tobin C. Harding, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm,
	linux-kernel

On Thu, Mar 14, 2019 at 06:52:25PM +0000, Roman Gushchin wrote:
> On Thu, Mar 14, 2019 at 04:31:31PM +1100, Tobin C. Harding wrote:
> > Currently we use the page->lru list for maintaining lists of slabs.  We
> > have a list_head in the page structure (slab_list) that can be used for
> > this purpose.  Doing so makes the code cleaner since we are not
> > overloading the lru list.
> > 
> > The slab_list is part of a union within the page struct (included here
> > stripped down):
> > 
> > 	union {
> > 		struct {	/* Page cache and anonymous pages */
> > 			struct list_head lru;
> > 			...
> > 		};
> > 		struct {
> > 			dma_addr_t dma_addr;
> > 		};
> > 		struct {	/* slab, slob and slub */
> > 			union {
> > 				struct list_head slab_list;
> > 				struct {	/* Partial pages */
> > 					struct page *next;
> > 					int pages;	/* Nr of pages left */
> > 					int pobjects;	/* Approximate count */
> > 				};
> > 			};
> > 		...
> > 
> > Here we see that slab_list and lru are the same bits.  We can verify
> > that this change is safe to do by examining the object file produced from
> > slob.c before and after this patch is applied.
> > 
> > Steps taken to verify:
> > 
> >  1. checkout current tip of Linus' tree
> > 
> >     commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")
> > 
> >  2. configure and build (select SLOB allocator)
> > 
> >     CONFIG_SLOB=y
> >     CONFIG_SLAB_MERGE_DEFAULT=y
> > 
> >  3. dissasemble object file `objdump -dr mm/slub.o > before.s
> >  4. apply patch
> >  5. build
> >  6. dissasemble object file `objdump -dr mm/slub.o > after.s
> >  7. diff before.s after.s
> > 
> > Use slab_list list_head instead of the lru list_head for maintaining
> > lists of slabs.
> > 
> > Reviewed-by: Roman Gushchin <guro@fb.com>
> > Signed-off-by: Tobin C. Harding <tobin@kernel.org>
> > ---
> >  mm/slob.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/mm/slob.c b/mm/slob.c
> > index 39ad9217ffea..94486c32e0ff 100644
> > --- a/mm/slob.c
> > +++ b/mm/slob.c
> > @@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp)
> >  
> >  static void set_slob_page_free(struct page *sp, struct list_head *list)
> >  {
> > -	list_add(&sp->lru, list);
> > +	list_add(&sp->slab_list, list);
> >  	__SetPageSlobFree(sp);
> >  }
> >  
> >  static inline void clear_slob_page_free(struct page *sp)
> >  {
> > -	list_del(&sp->lru);
> > +	list_del(&sp->slab_list);
> >  	__ClearPageSlobFree(sp);
> >  }
> >  
> > @@ -282,7 +282,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
> >  
> >  	spin_lock_irqsave(&slob_lock, flags);
> >  	/* Iterate through each partially free page, try to find room */
> > -	list_for_each_entry(sp, slob_list, lru) {
> > +	list_for_each_entry(sp, slob_list, slab_list) {
> >  #ifdef CONFIG_NUMA
> >  		/*
> >  		 * If there's a node specification, search for a partial
> 
> 
> Hi Tobin!
> 
> How about list_rotate_to_front(&next->lru, slob_list) from the previous patch?
> Shouldn't it use slab_list instead of lru too?

I'll let this sit for a day or two in case we get any more comments on
the list.h stuff, then do another version ready for US Monday morning.

Thanks again,
Tobin.


end of thread

Thread overview: 12+ messages
2019-03-14  5:31 [PATCH v3 0/7] mm: Use slab_list list_head instead of lru Tobin C. Harding
2019-03-14  5:31 ` [PATCH v3 1/7] list: Add function list_rotate_to_front() Tobin C. Harding
2019-03-14  5:31 ` [PATCH v3 2/7] slob: Respect list_head abstraction layer Tobin C. Harding
2019-03-14  5:31 ` [PATCH v3 3/7] slob: Use slab_list instead of lru Tobin C. Harding
2019-03-14 18:52   ` Roman Gushchin
2019-03-14 20:38     ` Tobin C. Harding
2019-03-14 20:42       ` Tobin C. Harding
2019-03-14 20:47     ` Tobin C. Harding
2019-03-14  5:31 ` [PATCH v3 4/7] slub: Add comments to endif pre-processor macros Tobin C. Harding
2019-03-14  5:31 ` [PATCH v3 5/7] slub: Use slab_list instead of lru Tobin C. Harding
2019-03-14  5:31 ` [PATCH v3 6/7] slab: " Tobin C. Harding
2019-03-14  5:31 ` [PATCH v3 7/7] mm: Remove stale comment from page struct Tobin C. Harding
