linux-mm.kvack.org archive mirror
* [PATCH 0/8] zsmalloc: remove bit_spin_lock
@ 2021-11-10 18:54 Minchan Kim
  2021-11-10 18:54 ` [PATCH 1/8] zsmalloc: introduce some helper functions Minchan Kim
                   ` (7 more replies)
  0 siblings, 8 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, Minchan Kim

zsmalloc has used a bit_spin_lock to minimize space overhead, since it
is a zpage-granularity lock. However, it prevents zsmalloc from working
under PREEMPT_RT and adds too much complication.

This patchset replaces the bit_spin_lock with a per-pool rwlock. It
also removes the unnecessary zspage isolation logic from the class,
which was the other source of excessive complication in zsmalloc.
The last patch changes get_cpu_var to a local_lock so that it works
under PREEMPT_RT.
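
For readers who have not followed zsmalloc's locking, the sketch below
is a stand-alone user-space model (not zsmalloc code; the names
PIN_BIT, pin_handle and pool_lock are made up for illustration) of the
basic idea: the old scheme spun on the low bit of the handle word to
pin an object, while the new scheme takes a pool-wide rwlock on the
read side, which stays sleepable under PREEMPT_RT.

/*
 * User-space sketch only -- not zsmalloc code. The old scheme pinned
 * an object by spinning on bit 0 of its handle word; the new scheme
 * takes a pool-wide rwlock instead (readers: the map/free fast paths,
 * writer: page migration/compaction).
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define PIN_BIT	1UL	/* old scheme: lock bit in the handle word */

static _Atomic unsigned long handle;	/* low bit free due to alignment */
static pthread_rwlock_t pool_lock = PTHREAD_RWLOCK_INITIALIZER;

/* old: roughly bit_spin_lock(HANDLE_PIN_BIT, handle) */
static void pin_handle(void)
{
	while (atomic_fetch_or(&handle, PIN_BIT) & PIN_BIT)
		;	/* busy-wait; the kernel version also disables preemption */
}

static void unpin_handle(void)
{
	atomic_fetch_and(&handle, ~PIN_BIT);
}

int main(void)
{
	handle = 0x1000;	/* pretend encoded <page, index> value */

	pin_handle();		/* old way to keep the object stable */
	unpin_handle();

	pthread_rwlock_rdlock(&pool_lock);	/* new way: a real lock, RT-friendly */
	pthread_rwlock_unlock(&pool_lock);

	printf("handle is still %#lx\n", (unsigned long)handle);
	return 0;
}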

Minchan Kim (7):
  zsmalloc: introduce some helper functions
  zsmalloc: rename zs_stat_type to class_stat_type
  zsmalloc: decouple class actions from zspage works
  zsmalloc: introduce obj_allocated
  zsmalloc: move huge compressed obj from page to zspage
  zsmalloc: remove zspage isolation for migration
  zsmalloc: replace per zpage lock with pool->migrate_lock

Sebastian Andrzej Siewior (1):
  zsmalloc: replace get_cpu_var with local_lock

 mm/zsmalloc.c | 528 ++++++++++++++++++--------------------------------
 1 file changed, 188 insertions(+), 340 deletions(-)

-- 
2.34.0.rc1.387.gb447b232ab-goog




* [PATCH 1/8] zsmalloc: introduce some helper functions
  2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
@ 2021-11-10 18:54 ` Minchan Kim
  2021-11-10 18:54 ` [PATCH 2/8] zsmalloc: rename zs_stat_type to class_stat_type Minchan Kim
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, Minchan Kim

get_zspage_mapping returns the fullness as well as the class_idx.
However, the fullness is usually not used, since it can be stale in
some contexts. That is misleading and generates unnecessary
instructions, so this patch introduces zspage_class.

obj_to_location also produces both the page and the index, but we do
not always need the index either, so this patch introduces obj_to_page.
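
As an aside, the reason obj_to_page can skip decoding the index is
visible from the handle encoding itself. The stand-alone snippet below
models the <pfn, obj_idx> packing in user space; the DEMO_* bit widths
are illustrative only, not the kernel's actual OBJ_INDEX_BITS and
OBJ_TAG_BITS values.

/* User-space model of zsmalloc's <pfn, obj_idx> packing (demo values). */
#include <assert.h>
#include <stdio.h>

#define DEMO_TAG_BITS	1UL
#define DEMO_INDEX_BITS	16UL
#define DEMO_INDEX_MASK	((1UL << DEMO_INDEX_BITS) - 1)

static unsigned long location_to_obj(unsigned long pfn, unsigned int obj_idx)
{
	return ((pfn << DEMO_INDEX_BITS) | obj_idx) << DEMO_TAG_BITS;
}

/* like obj_to_location(): for callers that need both pieces */
static void obj_to_location(unsigned long obj, unsigned long *pfn,
			    unsigned int *obj_idx)
{
	obj >>= DEMO_TAG_BITS;
	*pfn = obj >> DEMO_INDEX_BITS;
	*obj_idx = obj & DEMO_INDEX_MASK;
}

/* like obj_to_page(): the index bits are simply ignored */
static unsigned long obj_to_pfn(unsigned long obj)
{
	return (obj >> DEMO_TAG_BITS) >> DEMO_INDEX_BITS;
}

int main(void)
{
	unsigned long obj = location_to_obj(0x1234, 7);
	unsigned long pfn;
	unsigned int idx;

	obj_to_location(obj, &pfn, &idx);
	assert(pfn == 0x1234 && idx == 7);
	assert(obj_to_pfn(obj) == 0x1234);
	printf("obj=%#lx -> pfn=%#lx idx=%u\n", obj, pfn, idx);
	return 0;
}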

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 54 ++++++++++++++++++++++-----------------------------
 1 file changed, 23 insertions(+), 31 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 68e8831068f4..be02db164477 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -517,6 +517,12 @@ static void get_zspage_mapping(struct zspage *zspage,
 	*class_idx = zspage->class;
 }
 
+static struct size_class *zspage_class(struct zs_pool *pool,
+					     struct zspage *zspage)
+{
+	return pool->size_class[zspage->class];
+}
+
 static void set_zspage_mapping(struct zspage *zspage,
 				unsigned int class_idx,
 				enum fullness_group fullness)
@@ -844,6 +850,12 @@ static void obj_to_location(unsigned long obj, struct page **page,
 	*obj_idx = (obj & OBJ_INDEX_MASK);
 }
 
+static void obj_to_page(unsigned long obj, struct page **page)
+{
+	obj >>= OBJ_TAG_BITS;
+	*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+}
+
 /**
  * location_to_obj - get obj value encoded from (<page>, <obj_idx>)
  * @page: page object resides in zspage
@@ -1246,8 +1258,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
-	unsigned int class_idx;
-	enum fullness_group fg;
 	struct size_class *class;
 	struct mapping_area *area;
 	struct page *pages[2];
@@ -1270,8 +1280,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	/* migration cannot move any subpage in this zspage */
 	migrate_read_lock(zspage);
 
-	get_zspage_mapping(zspage, &class_idx, &fg);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
 	area = &get_cpu_var(zs_map_area);
@@ -1304,16 +1313,13 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
-	unsigned int class_idx;
-	enum fullness_group fg;
 	struct size_class *class;
 	struct mapping_area *area;
 
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &page, &obj_idx);
 	zspage = get_zspage(page);
-	get_zspage_mapping(zspage, &class_idx, &fg);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
 	area = this_cpu_ptr(&zs_map_area);
@@ -1491,8 +1497,6 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	struct zspage *zspage;
 	struct page *f_page;
 	unsigned long obj;
-	unsigned int f_objidx;
-	int class_idx;
 	struct size_class *class;
 	enum fullness_group fullness;
 	bool isolated;
@@ -1502,13 +1506,11 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 
 	pin_tag(handle);
 	obj = handle_to_obj(handle);
-	obj_to_location(obj, &f_page, &f_objidx);
+	obj_to_page(obj, &f_page);
 	zspage = get_zspage(f_page);
 
 	migrate_read_lock(zspage);
-
-	get_zspage_mapping(zspage, &class_idx, &fullness);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	obj_free(class, obj);
@@ -1865,8 +1867,6 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fullness;
 	struct zspage *zspage;
 	struct address_space *mapping;
 
@@ -1879,15 +1879,10 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 
 	zspage = get_zspage(page);
 
-	/*
-	 * Without class lock, fullness could be stale while class_idx is okay
-	 * because class_idx is constant unless page is freed so we should get
-	 * fullness again under class lock.
-	 */
-	get_zspage_mapping(zspage, &class_idx, &fullness);
 	mapping = page_mapping(page);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	if (get_zspage_inuse(zspage) == 0) {
@@ -1906,6 +1901,9 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	 * size_class to prevent further object allocation from the zspage.
 	 */
 	if (!list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
+		enum fullness_group fullness;
+		unsigned int class_idx;
+
 		get_zspage_mapping(zspage, &class_idx, &fullness);
 		atomic_long_inc(&pool->isolated_pages);
 		remove_zspage(class, zspage, fullness);
@@ -1922,8 +1920,6 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fullness;
 	struct zspage *zspage;
 	struct page *dummy;
 	void *s_addr, *d_addr, *addr;
@@ -1948,9 +1944,8 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	/* Concurrent compactor cannot migrate any subpage in zspage */
 	migrate_write_lock(zspage);
-	get_zspage_mapping(zspage, &class_idx, &fullness);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	offset = get_first_obj_offset(page);
 
 	spin_lock(&class->lock);
@@ -2048,8 +2043,6 @@ static void zs_page_putback(struct page *page)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fg;
 	struct address_space *mapping;
 	struct zspage *zspage;
 
@@ -2057,10 +2050,9 @@ static void zs_page_putback(struct page *page)
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-	get_zspage_mapping(zspage, &class_idx, &fg);
 	mapping = page_mapping(page);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	dec_zspage_isolation(zspage);
-- 
2.34.0.rc1.387.gb447b232ab-goog




* [PATCH 2/8] zsmalloc: rename zs_stat_type to class_stat_type
  2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
  2021-11-10 18:54 ` [PATCH 1/8] zsmalloc: introduce some helper functions Minchan Kim
@ 2021-11-10 18:54 ` Minchan Kim
  2021-11-10 18:54 ` [PATCH 3/8] zsmalloc: decouple class actions from zspage works Minchan Kim
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, Minchan Kim, Minchan Kim

From: Minchan Kim <minchan@google.com>

The stat is a per-class stat, not a per-zspage one, so rename it
accordingly.

Signed-off-by: Minchan Kim <minchan@google.com>
---
 mm/zsmalloc.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index be02db164477..0b073becb91c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -158,7 +158,7 @@ enum fullness_group {
 	NR_ZS_FULLNESS,
 };
 
-enum zs_stat_type {
+enum class_stat_type {
 	CLASS_EMPTY,
 	CLASS_ALMOST_EMPTY,
 	CLASS_ALMOST_FULL,
@@ -549,21 +549,21 @@ static int get_size_class_index(int size)
 	return min_t(int, ZS_SIZE_CLASSES - 1, idx);
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
-static inline void zs_stat_inc(struct size_class *class,
+/* type can be of enum type class_stat_type or fullness_group */
+static inline void class_stat_inc(struct size_class *class,
 				int type, unsigned long cnt)
 {
 	class->stats.objs[type] += cnt;
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
-static inline void zs_stat_dec(struct size_class *class,
+/* type can be of enum type class_stat_type or fullness_group */
+static inline void class_stat_dec(struct size_class *class,
 				int type, unsigned long cnt)
 {
 	class->stats.objs[type] -= cnt;
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
+/* type can be of enum type class_stat_type or fullness_group */
 static inline unsigned long zs_stat_get(struct size_class *class,
 				int type)
 {
@@ -725,7 +725,7 @@ static void insert_zspage(struct size_class *class,
 {
 	struct zspage *head;
 
-	zs_stat_inc(class, fullness, 1);
+	class_stat_inc(class, fullness, 1);
 	head = list_first_entry_or_null(&class->fullness_list[fullness],
 					struct zspage, list);
 	/*
@@ -750,7 +750,7 @@ static void remove_zspage(struct size_class *class,
 	VM_BUG_ON(is_zspage_isolated(zspage));
 
 	list_del_init(&zspage->list);
-	zs_stat_dec(class, fullness, 1);
+	class_stat_dec(class, fullness, 1);
 }
 
 /*
@@ -964,7 +964,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	cache_free_zspage(pool, zspage);
 
-	zs_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
 	atomic_long_sub(class->pages_per_zspage,
 					&pool->pages_allocated);
 }
@@ -1394,7 +1394,7 @@ static unsigned long obj_malloc(struct size_class *class,
 
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
-	zs_stat_inc(class, OBJ_USED, 1);
+	class_stat_inc(class, OBJ_USED, 1);
 
 	obj = location_to_obj(m_page, obj);
 
@@ -1458,7 +1458,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	record_obj(handle, obj);
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
-	zs_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -1489,7 +1489,7 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
-	zs_stat_dec(class, OBJ_USED, 1);
+	class_stat_dec(class, OBJ_USED, 1);
 }
 
 void zs_free(struct zs_pool *pool, unsigned long handle)
-- 
2.34.0.rc1.387.gb447b232ab-goog




* [PATCH 3/8] zsmalloc: decouple class actions from zspage works
  2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
  2021-11-10 18:54 ` [PATCH 1/8] zsmalloc: introduce some helper functions Minchan Kim
  2021-11-10 18:54 ` [PATCH 2/8] zsmalloc: rename zs_stat_type to class_stat_type Minchan Kim
@ 2021-11-10 18:54 ` Minchan Kim
  2021-11-10 18:54 ` [PATCH 4/8] zsmalloc: introduce obj_allocated Minchan Kim
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, Minchan Kim

This patch moves the class stat update out of obj_malloc, since it is
not related to the zspage operation. This is a preparation for
introducing the new lock scheme in the next patch.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 0b073becb91c..52c6431ed5c6 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1360,17 +1360,19 @@ size_t zs_huge_class_size(struct zs_pool *pool)
 }
 EXPORT_SYMBOL_GPL(zs_huge_class_size);
 
-static unsigned long obj_malloc(struct size_class *class,
+static unsigned long obj_malloc(struct zs_pool *pool,
 				struct zspage *zspage, unsigned long handle)
 {
 	int i, nr_page, offset;
 	unsigned long obj;
 	struct link_free *link;
+	struct size_class *class;
 
 	struct page *m_page;
 	unsigned long m_offset;
 	void *vaddr;
 
+	class = pool->size_class[zspage->class];
 	handle |= OBJ_ALLOCATED_TAG;
 	obj = get_freeobj(zspage);
 
@@ -1394,7 +1396,6 @@ static unsigned long obj_malloc(struct size_class *class,
 
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
-	class_stat_inc(class, OBJ_USED, 1);
 
 	obj = location_to_obj(m_page, obj);
 
@@ -1433,10 +1434,11 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	spin_lock(&class->lock);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
-		obj = obj_malloc(class, zspage, handle);
+		obj = obj_malloc(pool, zspage, handle);
 		/* Now move the zspage to another fullness group, if required */
 		fix_fullness_group(class, zspage);
 		record_obj(handle, obj);
+		class_stat_inc(class, OBJ_USED, 1);
 		spin_unlock(&class->lock);
 
 		return handle;
@@ -1451,7 +1453,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	}
 
 	spin_lock(&class->lock);
-	obj = obj_malloc(class, zspage, handle);
+	obj = obj_malloc(pool, zspage, handle);
 	newfg = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, newfg);
 	set_zspage_mapping(zspage, class->index, newfg);
@@ -1459,6 +1461,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
 	class_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_inc(class, OBJ_USED, 1);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -1468,7 +1471,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 }
 EXPORT_SYMBOL_GPL(zs_malloc);
 
-static void obj_free(struct size_class *class, unsigned long obj)
+static void obj_free(int class_size, unsigned long obj)
 {
 	struct link_free *link;
 	struct zspage *zspage;
@@ -1478,7 +1481,7 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	void *vaddr;
 
 	obj_to_location(obj, &f_page, &f_objidx);
-	f_offset = (class->size * f_objidx) & ~PAGE_MASK;
+	f_offset = (class_size * f_objidx) & ~PAGE_MASK;
 	zspage = get_zspage(f_page);
 
 	vaddr = kmap_atomic(f_page);
@@ -1489,7 +1492,6 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
-	class_stat_dec(class, OBJ_USED, 1);
 }
 
 void zs_free(struct zs_pool *pool, unsigned long handle)
@@ -1513,7 +1515,8 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
-	obj_free(class, obj);
+	obj_free(class->size, obj);
+	class_stat_dec(class, OBJ_USED, 1);
 	fullness = fix_fullness_group(class, zspage);
 	if (fullness != ZS_EMPTY) {
 		migrate_read_unlock(zspage);
@@ -1671,7 +1674,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		}
 
 		used_obj = handle_to_obj(handle);
-		free_obj = obj_malloc(class, get_zspage(d_page), handle);
+		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
 		/*
@@ -1683,7 +1686,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj |= BIT(HANDLE_PIN_BIT);
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
-		obj_free(class, used_obj);
+		obj_free(class->size, used_obj);
 	}
 
 	/* Remember last position in this iteration */
-- 
2.34.0.rc1.387.gb447b232ab-goog




* [PATCH 4/8] zsmalloc: introduce obj_allocated
  2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (2 preceding siblings ...)
  2021-11-10 18:54 ` [PATCH 3/8] zsmalloc: decouple class actions from zspage works Minchan Kim
@ 2021-11-10 18:54 ` Minchan Kim
  2021-11-10 18:54 ` [PATCH 5/8] zsmalloc: move huge compressed obj from page to zspage Minchan Kim
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, Minchan Kim

The usage pattern for obj_to_head is to check whether the zpage
is allocated or not. Thus, introduce obj_allocated.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 52c6431ed5c6..8f9cd07033de 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -877,13 +877,21 @@ static unsigned long handle_to_obj(unsigned long handle)
 	return *(unsigned long *)handle;
 }
 
-static unsigned long obj_to_head(struct page *page, void *obj)
+static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
 {
+	unsigned long handle;
+
 	if (unlikely(PageHugeObject(page))) {
 		VM_BUG_ON_PAGE(!is_first_page(page), page);
-		return page->index;
+		handle = page->index;
 	} else
-		return *(unsigned long *)obj;
+		handle = *(unsigned long *)obj;
+
+	if (!(handle & OBJ_ALLOCATED_TAG))
+		return false;
+
+	*phandle = handle & ~OBJ_ALLOCATED_TAG;
+	return true;
 }
 
 static inline int testpin_tag(unsigned long handle)
@@ -1606,7 +1614,6 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 static unsigned long find_alloced_obj(struct size_class *class,
 					struct page *page, int *obj_idx)
 {
-	unsigned long head;
 	int offset = 0;
 	int index = *obj_idx;
 	unsigned long handle = 0;
@@ -1616,9 +1623,7 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
-		head = obj_to_head(page, addr + offset);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr + offset, &handle)) {
 			if (trypin_tag(handle))
 				break;
 			handle = 0;
@@ -1927,7 +1932,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	struct page *dummy;
 	void *s_addr, *d_addr, *addr;
 	int offset, pos;
-	unsigned long handle, head;
+	unsigned long handle;
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 	int ret = -EAGAIN;
@@ -1963,9 +1968,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	pos = offset;
 	s_addr = kmap_atomic(page);
 	while (pos < PAGE_SIZE) {
-		head = obj_to_head(page, s_addr + pos);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, s_addr + pos, &handle)) {
 			if (!trypin_tag(handle))
 				goto unpin_objects;
 		}
@@ -1981,9 +1984,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	for (addr = s_addr + offset; addr < s_addr + pos;
 					addr += class->size) {
-		head = obj_to_head(page, addr);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr, &handle)) {
 			BUG_ON(!testpin_tag(handle));
 
 			old_obj = handle_to_obj(handle);
@@ -2028,9 +2029,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 unpin_objects:
 	for (addr = s_addr + offset; addr < s_addr + pos;
 						addr += class->size) {
-		head = obj_to_head(page, addr);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr, &handle)) {
 			BUG_ON(!testpin_tag(handle));
 			unpin_tag(handle);
 		}
-- 
2.34.0.rc1.387.gb447b232ab-goog




* [PATCH 5/8] zsmalloc: move huge compressed obj from page to zspage
  2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (3 preceding siblings ...)
  2021-11-10 18:54 ` [PATCH 4/8] zsmalloc: introduce obj_allocated Minchan Kim
@ 2021-11-10 18:54 ` Minchan Kim
  2021-11-10 18:54 ` [PATCH 6/8] zsmalloc: remove zspage isolation for migration Minchan Kim
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, Minchan Kim

The flag applies to the zspage, not to individual pages. Let's move it
to the zspage.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 50 ++++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 8f9cd07033de..da15d98a6e29 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -121,6 +121,7 @@
 #define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS)
 #define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
 
+#define HUGE_BITS	1
 #define FULLNESS_BITS	2
 #define CLASS_BITS	8
 #define ISOLATED_BITS	3
@@ -213,22 +214,6 @@ struct size_class {
 	struct zs_size_stat stats;
 };
 
-/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
-static void SetPageHugeObject(struct page *page)
-{
-	SetPageOwnerPriv1(page);
-}
-
-static void ClearPageHugeObject(struct page *page)
-{
-	ClearPageOwnerPriv1(page);
-}
-
-static int PageHugeObject(struct page *page)
-{
-	return PageOwnerPriv1(page);
-}
-
 /*
  * Placed within free objects to form a singly linked list.
  * For every zspage, zspage->freeobj gives head of this list.
@@ -278,6 +263,7 @@ struct zs_pool {
 
 struct zspage {
 	struct {
+		unsigned int huge:HUGE_BITS;
 		unsigned int fullness:FULLNESS_BITS;
 		unsigned int class:CLASS_BITS + 1;
 		unsigned int isolated:ISOLATED_BITS;
@@ -298,6 +284,17 @@ struct mapping_area {
 	enum zs_mapmode vm_mm; /* mapping mode */
 };
 
+/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
+static void SetZsHugePage(struct zspage *zspage)
+{
+	zspage->huge = 1;
+}
+
+static bool ZsHugePage(struct zspage *zspage)
+{
+	return zspage->huge;
+}
+
 #ifdef CONFIG_COMPACTION
 static int zs_register_migration(struct zs_pool *pool);
 static void zs_unregister_migration(struct zs_pool *pool);
@@ -830,7 +827,9 @@ static struct zspage *get_zspage(struct page *page)
 
 static struct page *get_next_page(struct page *page)
 {
-	if (unlikely(PageHugeObject(page)))
+	struct zspage *zspage = get_zspage(page);
+
+	if (unlikely(ZsHugePage(zspage)))
 		return NULL;
 
 	return page->freelist;
@@ -880,8 +879,9 @@ static unsigned long handle_to_obj(unsigned long handle)
 static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
 {
 	unsigned long handle;
+	struct zspage *zspage = get_zspage(page);
 
-	if (unlikely(PageHugeObject(page))) {
+	if (unlikely(ZsHugePage(zspage))) {
 		VM_BUG_ON_PAGE(!is_first_page(page), page);
 		handle = page->index;
 	} else
@@ -920,7 +920,6 @@ static void reset_page(struct page *page)
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
 	page_mapcount_reset(page);
-	ClearPageHugeObject(page);
 	page->freelist = NULL;
 }
 
@@ -1062,7 +1061,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 			SetPagePrivate(page);
 			if (unlikely(class->objs_per_zspage == 1 &&
 					class->pages_per_zspage == 1))
-				SetPageHugeObject(page);
+				SetZsHugePage(zspage);
 		} else {
 			prev_page->freelist = page;
 		}
@@ -1307,7 +1306,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 
 	ret = __zs_map_object(area, pages, off, class->size);
 out:
-	if (likely(!PageHugeObject(page)))
+	if (likely(!ZsHugePage(zspage)))
 		ret += ZS_HANDLE_SIZE;
 
 	return ret;
@@ -1395,7 +1394,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	vaddr = kmap_atomic(m_page);
 	link = (struct link_free *)vaddr + m_offset / sizeof(*link);
 	set_freeobj(zspage, link->next >> OBJ_TAG_BITS);
-	if (likely(!PageHugeObject(m_page)))
+	if (likely(!ZsHugePage(zspage)))
 		/* record handle in the header of allocated chunk */
 		link->handle = handle;
 	else
@@ -1496,7 +1495,10 @@ static void obj_free(int class_size, unsigned long obj)
 
 	/* Insert this object in containing zspage's freelist */
 	link = (struct link_free *)(vaddr + f_offset);
-	link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
+	if (likely(!ZsHugePage(zspage)))
+		link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
+	else
+		f_page->index = 0;
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
@@ -1866,7 +1868,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 	create_page_chain(class, zspage, pages);
 	set_first_obj_offset(newpage, get_first_obj_offset(oldpage));
-	if (unlikely(PageHugeObject(oldpage)))
+	if (unlikely(ZsHugePage(zspage)))
 		newpage->index = oldpage->index;
 	__SetPageMovable(newpage, page_mapping(oldpage));
 }
-- 
2.34.0.rc1.387.gb447b232ab-goog




* [PATCH 6/8] zsmalloc: remove zspage isolation for migration
  2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (4 preceding siblings ...)
  2021-11-10 18:54 ` [PATCH 5/8] zsmalloc: move huge compressed obj from page to zspage Minchan Kim
@ 2021-11-10 18:54 ` Minchan Kim
  2021-11-10 18:54 ` [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock Minchan Kim
  2021-11-10 18:54 ` [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock Minchan Kim
  7 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, Minchan Kim

zspage isolation for migration introduced additional exceptions to
deal with, since the zspage was taken off the class list. The reason I
isolated the zspage from the class list was to prevent the race between
obj_malloc and page migration by stopping further zpage allocation from
that zspage. However, it could not prevent object freeing from the
zspage, so it needed corner-case handling.

This patch removes the whole mess. Now we are fine, since class->lock
and zspage->lock can prevent the race.
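
As a rough illustration of that claim, the user-space model below
(pthread locks, and function names that only mirror the kernel code;
this is not zsmalloc itself) shows how the per-zspage rwlock alone
keeps the isolation count and readers of the zspage consistent, without
pulling the zspage off its class list:

#include <assert.h>
#include <pthread.h>

struct zspage {
	pthread_rwlock_t lock;	/* models zspage->lock */
	int isolated;		/* models zspage->isolated */
};

static void zs_page_isolate(struct zspage *z)	/* migration side */
{
	pthread_rwlock_wrlock(&z->lock);
	z->isolated++;
	pthread_rwlock_unlock(&z->lock);
}

static void zs_page_putback(struct zspage *z)
{
	pthread_rwlock_wrlock(&z->lock);
	assert(z->isolated > 0);	/* cf. the VM_BUG_ON added here */
	z->isolated--;
	pthread_rwlock_unlock(&z->lock);
}

static void map_object(struct zspage *z)	/* zs_map_object() side */
{
	pthread_rwlock_rdlock(&z->lock);	/* excluded while a writer migrates */
	/* ... access objects in the zspage ... */
	pthread_rwlock_unlock(&z->lock);
}

int main(void)
{
	static struct zspage z = {
		.lock = PTHREAD_RWLOCK_INITIALIZER,
		.isolated = 0,
	};

	zs_page_isolate(&z);
	map_object(&z);
	zs_page_putback(&z);
	return 0;
}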

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 156 +++-----------------------------------------------
 1 file changed, 8 insertions(+), 148 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index da15d98a6e29..b8b098be92fa 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -254,10 +254,6 @@ struct zs_pool {
 #ifdef CONFIG_COMPACTION
 	struct inode *inode;
 	struct work_struct free_work;
-	/* A wait queue for when migration races with async_free_zspage() */
-	struct wait_queue_head migration_wait;
-	atomic_long_t isolated_pages;
-	bool destroying;
 #endif
 };
 
@@ -454,11 +450,6 @@ MODULE_ALIAS("zpool-zsmalloc");
 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
 static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
 
-static bool is_zspage_isolated(struct zspage *zspage)
-{
-	return zspage->isolated;
-}
-
 static __maybe_unused int is_first_page(struct page *page)
 {
 	return PagePrivate(page);
@@ -744,7 +735,6 @@ static void remove_zspage(struct size_class *class,
 				enum fullness_group fullness)
 {
 	VM_BUG_ON(list_empty(&class->fullness_list[fullness]));
-	VM_BUG_ON(is_zspage_isolated(zspage));
 
 	list_del_init(&zspage->list);
 	class_stat_dec(class, fullness, 1);
@@ -770,13 +760,9 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 	if (newfg == currfg)
 		goto out;
 
-	if (!is_zspage_isolated(zspage)) {
-		remove_zspage(class, zspage, currfg);
-		insert_zspage(class, zspage, newfg);
-	}
-
+	remove_zspage(class, zspage, currfg);
+	insert_zspage(class, zspage, newfg);
 	set_zspage_mapping(zspage, class_idx, newfg);
-
 out:
 	return newfg;
 }
@@ -1511,7 +1497,6 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	unsigned long obj;
 	struct size_class *class;
 	enum fullness_group fullness;
-	bool isolated;
 
 	if (unlikely(!handle))
 		return;
@@ -1533,11 +1518,9 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 		goto out;
 	}
 
-	isolated = is_zspage_isolated(zspage);
 	migrate_read_unlock(zspage);
 	/* If zspage is isolated, zs_page_putback will free the zspage */
-	if (likely(!isolated))
-		free_zspage(pool, class, zspage);
+	free_zspage(pool, class, zspage);
 out:
 
 	spin_unlock(&class->lock);
@@ -1718,7 +1701,6 @@ static struct zspage *isolate_zspage(struct size_class *class, bool source)
 		zspage = list_first_entry_or_null(&class->fullness_list[fg[i]],
 							struct zspage, list);
 		if (zspage) {
-			VM_BUG_ON(is_zspage_isolated(zspage));
 			remove_zspage(class, zspage, fg[i]);
 			return zspage;
 		}
@@ -1739,8 +1721,6 @@ static enum fullness_group putback_zspage(struct size_class *class,
 {
 	enum fullness_group fullness;
 
-	VM_BUG_ON(is_zspage_isolated(zspage));
-
 	fullness = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, fullness);
 	set_zspage_mapping(zspage, class->index, fullness);
@@ -1822,34 +1802,10 @@ static void inc_zspage_isolation(struct zspage *zspage)
 
 static void dec_zspage_isolation(struct zspage *zspage)
 {
+	VM_BUG_ON(zspage->isolated == 0);
 	zspage->isolated--;
 }
 
-static void putback_zspage_deferred(struct zs_pool *pool,
-				    struct size_class *class,
-				    struct zspage *zspage)
-{
-	enum fullness_group fg;
-
-	fg = putback_zspage(class, zspage);
-	if (fg == ZS_EMPTY)
-		schedule_work(&pool->free_work);
-
-}
-
-static inline void zs_pool_dec_isolated(struct zs_pool *pool)
-{
-	VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
-	atomic_long_dec(&pool->isolated_pages);
-	/*
-	 * There's no possibility of racing, since wait_for_isolated_drain()
-	 * checks the isolated count under &class->lock after enqueuing
-	 * on migration_wait.
-	 */
-	if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
-		wake_up_all(&pool->migration_wait);
-}
-
 static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 				struct page *newpage, struct page *oldpage)
 {
@@ -1875,10 +1831,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
-	struct zs_pool *pool;
-	struct size_class *class;
 	struct zspage *zspage;
-	struct address_space *mapping;
 
 	/*
 	 * Page is locked so zspage couldn't be destroyed. For detail, look at
@@ -1888,39 +1841,9 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-
-	mapping = page_mapping(page);
-	pool = mapping->private_data;
-
-	class = zspage_class(pool, zspage);
-
-	spin_lock(&class->lock);
-	if (get_zspage_inuse(zspage) == 0) {
-		spin_unlock(&class->lock);
-		return false;
-	}
-
-	/* zspage is isolated for object migration */
-	if (list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
-		spin_unlock(&class->lock);
-		return false;
-	}
-
-	/*
-	 * If this is first time isolation for the zspage, isolate zspage from
-	 * size_class to prevent further object allocation from the zspage.
-	 */
-	if (!list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
-		enum fullness_group fullness;
-		unsigned int class_idx;
-
-		get_zspage_mapping(zspage, &class_idx, &fullness);
-		atomic_long_inc(&pool->isolated_pages);
-		remove_zspage(class, zspage, fullness);
-	}
-
+	migrate_write_lock(zspage);
 	inc_zspage_isolation(zspage);
-	spin_unlock(&class->lock);
+	migrate_write_unlock(zspage);
 
 	return true;
 }
@@ -2003,21 +1926,6 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	dec_zspage_isolation(zspage);
 
-	/*
-	 * Page migration is done so let's putback isolated zspage to
-	 * the list if @page is final isolated subpage in the zspage.
-	 */
-	if (!is_zspage_isolated(zspage)) {
-		/*
-		 * We cannot race with zs_destroy_pool() here because we wait
-		 * for isolation to hit zero before we start destroying.
-		 * Also, we ensure that everyone can see pool->destroying before
-		 * we start waiting.
-		 */
-		putback_zspage_deferred(pool, class, zspage);
-		zs_pool_dec_isolated(pool);
-	}
-
 	if (page_zone(newpage) != page_zone(page)) {
 		dec_zone_page_state(page, NR_ZSPAGES);
 		inc_zone_page_state(newpage, NR_ZSPAGES);
@@ -2045,30 +1953,15 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 static void zs_page_putback(struct page *page)
 {
-	struct zs_pool *pool;
-	struct size_class *class;
-	struct address_space *mapping;
 	struct zspage *zspage;
 
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-	mapping = page_mapping(page);
-	pool = mapping->private_data;
-	class = zspage_class(pool, zspage);
-
-	spin_lock(&class->lock);
+	migrate_write_lock(zspage);
 	dec_zspage_isolation(zspage);
-	if (!is_zspage_isolated(zspage)) {
-		/*
-		 * Due to page_lock, we cannot free zspage immediately
-		 * so let's defer.
-		 */
-		putback_zspage_deferred(pool, class, zspage);
-		zs_pool_dec_isolated(pool);
-	}
-	spin_unlock(&class->lock);
+	migrate_write_unlock(zspage);
 }
 
 static const struct address_space_operations zsmalloc_aops = {
@@ -2090,36 +1983,8 @@ static int zs_register_migration(struct zs_pool *pool)
 	return 0;
 }
 
-static bool pool_isolated_are_drained(struct zs_pool *pool)
-{
-	return atomic_long_read(&pool->isolated_pages) == 0;
-}
-
-/* Function for resolving migration */
-static void wait_for_isolated_drain(struct zs_pool *pool)
-{
-
-	/*
-	 * We're in the process of destroying the pool, so there are no
-	 * active allocations. zs_page_isolate() fails for completely free
-	 * zspages, so we need only wait for the zs_pool's isolated
-	 * count to hit zero.
-	 */
-	wait_event(pool->migration_wait,
-		   pool_isolated_are_drained(pool));
-}
-
 static void zs_unregister_migration(struct zs_pool *pool)
 {
-	pool->destroying = true;
-	/*
-	 * We need a memory barrier here to ensure global visibility of
-	 * pool->destroying. Thus pool->isolated pages will either be 0 in which
-	 * case we don't care, or it will be > 0 and pool->destroying will
-	 * ensure that we wake up once isolation hits 0.
-	 */
-	smp_mb();
-	wait_for_isolated_drain(pool); /* This can block */
 	flush_work(&pool->free_work);
 	iput(pool->inode);
 }
@@ -2149,7 +2014,6 @@ static void async_free_zspage(struct work_struct *work)
 		spin_unlock(&class->lock);
 	}
 
-
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
 		list_del(&zspage->list);
 		lock_zspage(zspage);
@@ -2362,10 +2226,6 @@ struct zs_pool *zs_create_pool(const char *name)
 	if (!pool->name)
 		goto err;
 
-#ifdef CONFIG_COMPACTION
-	init_waitqueue_head(&pool->migration_wait);
-#endif
-
 	if (create_cache(pool))
 		goto err;
 
-- 
2.34.0.rc1.387.gb447b232ab-goog




* [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock
  2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (5 preceding siblings ...)
  2021-11-10 18:54 ` [PATCH 6/8] zsmalloc: remove zspage isolation for migration Minchan Kim
@ 2021-11-10 18:54 ` Minchan Kim
  2021-11-11  9:07   ` Sebastian Andrzej Siewior
  2021-11-11 10:13   ` kernel test robot
  2021-11-10 18:54 ` [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock Minchan Kim
  7 siblings, 2 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, Minchan Kim

zsmalloc has used a bit in the zpage handle as a spin lock to keep the
zpage object alive during several operations. However, it causes
problems for PREEMPT_RT and adds too much complication.

This patch replaces the bit spin lock with the pool->migrate_lock
rwlock. It makes the code simpler and lets zsmalloc work under
PREEMPT_RT.

The drawback is that pool->migrate_lock has a coarser granularity than
the per-zpage lock, so contention would be higher than before when both
IO-related operations (i.e., zs_malloc, zs_free, zs_[map|unmap]_object)
and compaction (page/zpage migration) run in parallel. (Note that
migrate_lock is an rwlock and the IO-related functions all take the
read side, so there is no contention among them.) However, the write
side is fast enough (the dominant overhead is just the page copy), so
it should not matter much. If the lock granularity becomes more of a
problem later, we could introduce table locks keyed by a hash of the
handle.
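
A rough user-space model of the resulting scheme is shown below
(pthread locks stand in for rwlock_t/spinlock_t, and the function names
only mirror the zsmalloc entry points; it is a sketch, not the kernel
code). The I/O paths take the pool lock for read, so they never contend
with each other, while migration takes it for write:

#include <pthread.h>

struct pool {
	pthread_rwlock_t migrate_lock;	/* models pool->migrate_lock */
	pthread_mutex_t class_lock;	/* models class->lock */
	pthread_rwlock_t zspage_lock;	/* models zspage->lock */
};

static void map_object(struct pool *p)	/* I/O fast path: read side only */
{
	pthread_rwlock_rdlock(&p->migrate_lock);	/* handle -> zspage stays valid */
	pthread_rwlock_rdlock(&p->zspage_lock);		/* block page migration */
	pthread_rwlock_unlock(&p->migrate_lock);	/* drop early, keep zspage lock */
	/* ... copy the object in or out ... */
	pthread_rwlock_unlock(&p->zspage_lock);
}

static void free_object(struct pool *p)	/* also read side on the pool lock */
{
	pthread_rwlock_rdlock(&p->migrate_lock);
	pthread_mutex_lock(&p->class_lock);	/* serializes alloc/free in the class */
	pthread_rwlock_unlock(&p->migrate_lock);
	/* ... free the object ... */
	pthread_mutex_unlock(&p->class_lock);
}

static void migrate_page(struct pool *p)	/* slow path: write side */
{
	pthread_rwlock_wrlock(&p->migrate_lock);	/* excludes free and map */
	pthread_mutex_lock(&p->class_lock);
	pthread_rwlock_wrlock(&p->zspage_lock);
	/* ... copy the page and fix up the handles ... */
	pthread_rwlock_unlock(&p->zspage_lock);
	pthread_mutex_unlock(&p->class_lock);
	pthread_rwlock_unlock(&p->migrate_lock);
}

int main(void)
{
	static struct pool p = {
		.migrate_lock = PTHREAD_RWLOCK_INITIALIZER,
		.class_lock = PTHREAD_MUTEX_INITIALIZER,
		.zspage_lock = PTHREAD_RWLOCK_INITIALIZER,
	};

	map_object(&p);
	free_object(&p);
	migrate_page(&p);
	return 0;
}

The acquisition order this implies (pool->migrate_lock, then
class->lock, then zspage->lock) matches the lock-ordering comment the
patch adds at the top of zsmalloc.c.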

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 205 +++++++++++++++++++++++---------------------------
 1 file changed, 96 insertions(+), 109 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b8b098be92fa..5d4c4d254679 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -30,6 +30,14 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+/*
+ * lock ordering:
+ *	page_lock
+ *	pool->migrate_lock
+ *	class->lock
+ *	zspage->lock
+ */
+
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
@@ -100,15 +108,6 @@
 
 #define _PFN_BITS		(MAX_POSSIBLE_PHYSMEM_BITS - PAGE_SHIFT)
 
-/*
- * Memory for allocating for handle keeps object position by
- * encoding <page, obj_idx> and the encoded value has a room
- * in least bit(ie, look at obj_to_location).
- * We use the bit to synchronize between object access by
- * user and migration.
- */
-#define HANDLE_PIN_BIT	0
-
 /*
  * Head in allocated object should have OBJ_ALLOCATED_TAG
  * to identify the object was allocated or not.
@@ -255,6 +254,8 @@ struct zs_pool {
 	struct inode *inode;
 	struct work_struct free_work;
 #endif
+	/* protect page/zspage migration */
+	rwlock_t migrate_lock;
 };
 
 struct zspage {
@@ -297,6 +298,9 @@ static void zs_unregister_migration(struct zs_pool *pool);
 static void migrate_lock_init(struct zspage *zspage);
 static void migrate_read_lock(struct zspage *zspage);
 static void migrate_read_unlock(struct zspage *zspage);
+static void migrate_write_lock(struct zspage *zspage);
+static void migrate_write_lock_nested(struct zspage *zspage);
+static void migrate_write_unlock(struct zspage *zspage);
 static void kick_deferred_free(struct zs_pool *pool);
 static void init_deferred_free(struct zs_pool *pool);
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage);
@@ -308,6 +312,9 @@ static void zs_unregister_migration(struct zs_pool *pool) {}
 static void migrate_lock_init(struct zspage *zspage) {}
 static void migrate_read_lock(struct zspage *zspage) {}
 static void migrate_read_unlock(struct zspage *zspage) {}
+static void migrate_write_lock(struct zspage *zspage) {}
+static void migrate_write_lock_nested(struct zspage *zspage) {}
+static void migrate_write_unlock(struct zspage *zspage) {}
 static void kick_deferred_free(struct zs_pool *pool) {}
 static void init_deferred_free(struct zs_pool *pool) {}
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {}
@@ -359,14 +366,10 @@ static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
 	kmem_cache_free(pool->zspage_cachep, zspage);
 }
 
+/* class->lock(which owns the handle) synchronizes races */
 static void record_obj(unsigned long handle, unsigned long obj)
 {
-	/*
-	 * lsb of @obj represents handle lock while other bits
-	 * represent object value the handle is pointing so
-	 * updating shouldn't do store tearing.
-	 */
-	WRITE_ONCE(*(unsigned long *)handle, obj);
+	*(unsigned long *)handle = obj;
 }
 
 /* zpool driver */
@@ -880,26 +883,6 @@ static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
 	return true;
 }
 
-static inline int testpin_tag(unsigned long handle)
-{
-	return bit_spin_is_locked(HANDLE_PIN_BIT, (unsigned long *)handle);
-}
-
-static inline int trypin_tag(unsigned long handle)
-{
-	return bit_spin_trylock(HANDLE_PIN_BIT, (unsigned long *)handle);
-}
-
-static void pin_tag(unsigned long handle) __acquires(bitlock)
-{
-	bit_spin_lock(HANDLE_PIN_BIT, (unsigned long *)handle);
-}
-
-static void unpin_tag(unsigned long handle) __releases(bitlock)
-{
-	bit_spin_unlock(HANDLE_PIN_BIT, (unsigned long *)handle);
-}
-
 static void reset_page(struct page *page)
 {
 	__ClearPageMovable(page);
@@ -968,6 +951,11 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
 	VM_BUG_ON(get_zspage_inuse(zspage));
 	VM_BUG_ON(list_empty(&zspage->list));
 
+	/*
+	 * Since zs_free couldn't be sleepable, this function cannot call
+	 * lock_page. The page locks trylock_zspage got will be released
+	 * by __free_zspage.
+	 */
 	if (!trylock_zspage(zspage)) {
 		kick_deferred_free(pool);
 		return;
@@ -1263,15 +1251,20 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 */
 	BUG_ON(in_interrupt());
 
-	/* From now on, migration cannot move the object */
-	pin_tag(handle);
-
+	/* It guarantees it can get zspage from handle safely */
+	read_lock(&pool->migrate_lock);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &page, &obj_idx);
 	zspage = get_zspage(page);
 
-	/* migration cannot move any subpage in this zspage */
+	/*
+	 * migration cannot move any zpages in this zspage. Here, class->lock
+	 * is too heavy since callers would take some time until they calls
+	 * zs_unmap_object API so delegate the locking from class to zspage
+	 * which is smaller granularity.
+	 */
 	migrate_read_lock(zspage);
+	read_unlock(&pool->migrate_lock);
 
 	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
@@ -1330,7 +1323,6 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	put_cpu_var(zs_map_area);
 
 	migrate_read_unlock(zspage);
-	unpin_tag(handle);
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
@@ -1424,6 +1416,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	size += ZS_HANDLE_SIZE;
 	class = pool->size_class[get_size_class_index(size)];
 
+	/* class->lock effectively protects the zpage migration */
 	spin_lock(&class->lock);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
@@ -1501,30 +1494,27 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	if (unlikely(!handle))
 		return;
 
-	pin_tag(handle);
+	/*
+	 * The pool->migrate_lock protects the race with zpage's migration
+	 * so it's safe to get the page from handle.
+	 */
+	read_lock(&pool->migrate_lock);
 	obj = handle_to_obj(handle);
 	obj_to_page(obj, &f_page);
 	zspage = get_zspage(f_page);
-
-	migrate_read_lock(zspage);
 	class = zspage_class(pool, zspage);
-
 	spin_lock(&class->lock);
+	read_unlock(&pool->migrate_lock);
+
 	obj_free(class->size, obj);
 	class_stat_dec(class, OBJ_USED, 1);
 	fullness = fix_fullness_group(class, zspage);
-	if (fullness != ZS_EMPTY) {
-		migrate_read_unlock(zspage);
+	if (fullness != ZS_EMPTY)
 		goto out;
-	}
 
-	migrate_read_unlock(zspage);
-	/* If zspage is isolated, zs_page_putback will free the zspage */
 	free_zspage(pool, class, zspage);
 out:
-
 	spin_unlock(&class->lock);
-	unpin_tag(handle);
 	cache_free_handle(pool, handle);
 }
 EXPORT_SYMBOL_GPL(zs_free);
@@ -1608,11 +1598,8 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
-		if (obj_allocated(page, addr + offset, &handle)) {
-			if (trypin_tag(handle))
-				break;
-			handle = 0;
-		}
+		if (obj_allocated(page, addr + offset, &handle))
+			break;
 
 		offset += class->size;
 		index++;
@@ -1658,7 +1645,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 
 		/* Stop if there is no more space */
 		if (zspage_full(class, get_zspage(d_page))) {
-			unpin_tag(handle);
 			ret = -ENOMEM;
 			break;
 		}
@@ -1667,15 +1653,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
-		/*
-		 * record_obj updates handle's value to free_obj and it will
-		 * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which
-		 * breaks synchronization using pin_tag(e,g, zs_free) so
-		 * let's keep the lock bit.
-		 */
-		free_obj |= BIT(HANDLE_PIN_BIT);
 		record_obj(handle, free_obj);
-		unpin_tag(handle);
 		obj_free(class->size, used_obj);
 	}
 
@@ -1789,6 +1767,11 @@ static void migrate_write_lock(struct zspage *zspage)
 	write_lock(&zspage->lock);
 }
 
+static void migrate_write_lock_nested(struct zspage *zspage)
+{
+	write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);
+}
+
 static void migrate_write_unlock(struct zspage *zspage)
 {
 	write_unlock(&zspage->lock);
@@ -1856,11 +1839,10 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	struct zspage *zspage;
 	struct page *dummy;
 	void *s_addr, *d_addr, *addr;
-	int offset, pos;
+	int offset;
 	unsigned long handle;
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
-	int ret = -EAGAIN;
 
 	/*
 	 * We cannot support the _NO_COPY case here, because copy needs to
@@ -1873,32 +1855,25 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
-	zspage = get_zspage(page);
-
-	/* Concurrent compactor cannot migrate any subpage in zspage */
-	migrate_write_lock(zspage);
 	pool = mapping->private_data;
+
+	/*
+	 * The pool migrate_lock protects the race between zpage migration
+	 * and zs_free.
+	 */
+	write_lock(&pool->migrate_lock);
+	zspage = get_zspage(page);
 	class = zspage_class(pool, zspage);
-	offset = get_first_obj_offset(page);
 
+	/*
+	 * the class lock protects zpage alloc/free in the zspage.
+	 */
 	spin_lock(&class->lock);
-	if (!get_zspage_inuse(zspage)) {
-		/*
-		 * Set "offset" to end of the page so that every loops
-		 * skips unnecessary object scanning.
-		 */
-		offset = PAGE_SIZE;
-	}
+	/* the migrate_write_lock protects zpage access via zs_map_object */
+	migrate_write_lock(zspage);
 
-	pos = offset;
+	offset = get_first_obj_offset(page);
 	s_addr = kmap_atomic(page);
-	while (pos < PAGE_SIZE) {
-		if (obj_allocated(page, s_addr + pos, &handle)) {
-			if (!trypin_tag(handle))
-				goto unpin_objects;
-		}
-		pos += class->size;
-	}
 
 	/*
 	 * Here, any user cannot access all objects in the zspage so let's move.
@@ -1907,25 +1882,30 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	memcpy(d_addr, s_addr, PAGE_SIZE);
 	kunmap_atomic(d_addr);
 
-	for (addr = s_addr + offset; addr < s_addr + pos;
+	for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE;
 					addr += class->size) {
 		if (obj_allocated(page, addr, &handle)) {
-			BUG_ON(!testpin_tag(handle));
 
 			old_obj = handle_to_obj(handle);
 			obj_to_location(old_obj, &dummy, &obj_idx);
 			new_obj = (unsigned long)location_to_obj(newpage,
 								obj_idx);
-			new_obj |= BIT(HANDLE_PIN_BIT);
 			record_obj(handle, new_obj);
 		}
 	}
+	kunmap_atomic(s_addr);
 
 	replace_sub_page(class, zspage, newpage, page);
-	get_page(newpage);
-
+	/*
+	 * Since we complete the data copy and set up new zspage structure,
+	 * it's okay to release migration_lock.
+	 */
+	write_unlock(&pool->migrate_lock);
+	spin_unlock(&class->lock);
 	dec_zspage_isolation(zspage);
+	migrate_write_unlock(zspage);
 
+	get_page(newpage);
 	if (page_zone(newpage) != page_zone(page)) {
 		dec_zone_page_state(page, NR_ZSPAGES);
 		inc_zone_page_state(newpage, NR_ZSPAGES);
@@ -1933,22 +1913,8 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	reset_page(page);
 	put_page(page);
-	page = newpage;
-
-	ret = MIGRATEPAGE_SUCCESS;
-unpin_objects:
-	for (addr = s_addr + offset; addr < s_addr + pos;
-						addr += class->size) {
-		if (obj_allocated(page, addr, &handle)) {
-			BUG_ON(!testpin_tag(handle));
-			unpin_tag(handle);
-		}
-	}
-	kunmap_atomic(s_addr);
-	spin_unlock(&class->lock);
-	migrate_write_unlock(zspage);
 
-	return ret;
+	return MIGRATEPAGE_SUCCESS;
 }
 
 static void zs_page_putback(struct page *page)
@@ -2077,8 +2043,13 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	struct zspage *dst_zspage = NULL;
 	unsigned long pages_freed = 0;
 
+	/* protect the race between zpage migration and zs_free */
+	write_lock(&pool->migrate_lock);
+	/* protect zpage allocation/free */
 	spin_lock(&class->lock);
 	while ((src_zspage = isolate_zspage(class, true))) {
+		/* protect someone accessing the zspage(i.e., zs_map_object) */
+		migrate_write_lock(src_zspage);
 
 		if (!zs_can_compact(class))
 			break;
@@ -2087,6 +2058,8 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		cc.s_page = get_first_page(src_zspage);
 
 		while ((dst_zspage = isolate_zspage(class, false))) {
+			migrate_write_lock_nested(dst_zspage);
+
 			cc.d_page = get_first_page(dst_zspage);
 			/*
 			 * If there is no more space in dst_page, resched
@@ -2096,6 +2069,10 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 				break;
 
 			putback_zspage(class, dst_zspage);
+			migrate_write_unlock(dst_zspage);
+			dst_zspage = NULL;
+			if (rwlock_is_contended(&pool->migrate_lock))
+				break;
 		}
 
 		/* Stop if we couldn't find slot */
@@ -2103,19 +2080,28 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			break;
 
 		putback_zspage(class, dst_zspage);
+		migrate_write_unlock(dst_zspage);
+
 		if (putback_zspage(class, src_zspage) == ZS_EMPTY) {
+			migrate_write_unlock(src_zspage);
 			free_zspage(pool, class, src_zspage);
 			pages_freed += class->pages_per_zspage;
-		}
+		} else
+			migrate_write_unlock(src_zspage);
 		spin_unlock(&class->lock);
+		write_unlock(&pool->migrate_lock);
 		cond_resched();
+		write_lock(&pool->migrate_lock);
 		spin_lock(&class->lock);
 	}
 
-	if (src_zspage)
+	if (src_zspage) {
 		putback_zspage(class, src_zspage);
+		migrate_write_unlock(src_zspage);
+	}
 
 	spin_unlock(&class->lock);
+	write_unlock(&pool->migrate_lock);
 
 	return pages_freed;
 }
@@ -2221,6 +2207,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		return NULL;
 
 	init_deferred_free(pool);
+	rwlock_init(&pool->migrate_lock);
 
 	pool->name = kstrdup(name, GFP_KERNEL);
 	if (!pool->name)
-- 
2.34.0.rc1.387.gb447b232ab-goog




* [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock
  2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (6 preceding siblings ...)
  2021-11-10 18:54 ` [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock Minchan Kim
@ 2021-11-10 18:54 ` Minchan Kim
  2021-11-11  8:56   ` Sebastian Andrzej Siewior
  2021-11-15  3:56   ` Davidlohr Bueso
  7 siblings, 2 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-10 18:54 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Sergey Senozhatsky, linux-mm, Minchan Kim, Sebastian Andrzej Siewior

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

The usage of get_cpu_var() in zs_map_object() is problematic because
it disables preemption and makes it impossible to acquire any sleeping
lock on PREEMPT_RT, such as a spinlock_t.
Replace the get_cpu_var() usage with a local_lock_t which is embedded
in struct mapping_area. It ensures that access to the struct is
synchronized against all users on the same CPU.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[minchan: remove the bit_spin_lock part and change the title]
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5d4c4d254679..7e03cc9363bb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -65,6 +65,7 @@
 #include <linux/wait.h>
 #include <linux/pagemap.h>
 #include <linux/fs.h>
+#include <linux/local_lock.h>
 
 #define ZSPAGE_MAGIC	0x58
 
@@ -276,6 +277,7 @@ struct zspage {
 };
 
 struct mapping_area {
+	local_lock_t lock;
 	char *vm_buf; /* copy buffer for objects that span pages */
 	char *vm_addr; /* address of kmap_atomic()'ed pages */
 	enum zs_mapmode vm_mm; /* mapping mode */
@@ -451,7 +453,9 @@ MODULE_ALIAS("zpool-zsmalloc");
 #endif /* CONFIG_ZPOOL */
 
 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
-static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
+static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
+	.lock	= INIT_LOCAL_LOCK(lock),
+};
 
 static __maybe_unused int is_first_page(struct page *page)
 {
@@ -1269,7 +1273,8 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
-	area = &get_cpu_var(zs_map_area);
+	local_lock(&zs_map_area.lock);
+	area = this_cpu_ptr(&zs_map_area);
 	area->vm_mm = mm;
 	if (off + class->size <= PAGE_SIZE) {
 		/* this object is contained entirely within a page */
@@ -1320,7 +1325,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 
 		__zs_unmap_object(area, pages, off, class->size);
 	}
-	put_cpu_var(zs_map_area);
+	local_unlock(&zs_map_area.lock);
 
 	migrate_read_unlock(zspage);
 }
-- 
2.34.0.rc1.387.gb447b232ab-goog




* Re: [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock
  2021-11-10 18:54 ` [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock Minchan Kim
@ 2021-11-11  8:56   ` Sebastian Andrzej Siewior
  2021-11-11 23:08     ` Minchan Kim
  2021-11-15  3:56   ` Davidlohr Bueso
  1 sibling, 1 reply; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-11-11  8:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, Mike Galbraith,
	Thomas Gleixner

On 2021-11-10 10:54:33 [-0800], Minchan Kim wrote:
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

The From line used to be
   From: Mike Galbraith <umgwanakikbuti@gmail.com>

sorry if I dropped it earlier.

> The usage of get_cpu_var() in zs_map_object() is problematic because
> it disables preemption and makes it impossible to acquire any sleeping
> lock on PREEMPT_RT, such as a spinlock_t.
> Replace the get_cpu_var() usage with a local_lock_t which is embedded
> in struct mapping_area. It ensures that access to the struct is
> synchronized against all users on the same CPU.
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> [minchan: remove the bit_spin_lock part and change the title]
> Signed-off-by: Minchan Kim <minchan@kernel.org>

So you removed the bit_spin_lock part here but replaced the bit spin
lock with an rwlock in an earlier patch in this series.
This should work.

Thank you.

Sebastian



* Re: [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock
  2021-11-10 18:54 ` [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock Minchan Kim
@ 2021-11-11  9:07   ` Sebastian Andrzej Siewior
  2021-11-11 23:11     ` Minchan Kim
  2021-11-11 10:13   ` kernel test robot
  1 sibling, 1 reply; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-11-11  9:07 UTC (permalink / raw)
  To: Minchan Kim; +Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, Thomas Gleixner

On 2021-11-10 10:54:32 [-0800], Minchan Kim wrote:
> zsmalloc has used a bit in the zpage handle as a spin lock to keep the
> zpage object alive during several operations. However, it causes
> problems for PREEMPT_RT and adds too much complication.
> 
> This patch replaces the bit spin lock with the pool->migrate_lock
> rwlock. It makes the code simpler and lets zsmalloc work under
> PREEMPT_RT.
> 
> The drawback is that pool->migrate_lock has a coarser granularity than
> the per-zpage lock, so contention would be higher than before when both
> IO-related operations (i.e., zs_malloc, zs_free, zs_[map|unmap]_object)
> and compaction (page/zpage migration) run in parallel. (Note that
> migrate_lock is an rwlock and the IO-related functions all take the
> read side, so there is no contention among them.) However, the write
> side is fast enough (the dominant overhead is just the page copy), so
> it should not matter much. If the lock granularity becomes more of a
> problem later, we could introduce table locks keyed by a hash of the
> handle.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> index b8b098be92fa..5d4c4d254679 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1789,6 +1767,11 @@ static void migrate_write_lock(struct zspage *zspage)
>  	write_lock(&zspage->lock);
>  }
>  
> +static void migrate_write_lock_nested(struct zspage *zspage)
> +{
> +	write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);

I don't have this in my tree. 

> +}
> +
>  static void migrate_write_unlock(struct zspage *zspage)
>  {
>  	write_unlock(&zspage->lock);
> @@ -2077,8 +2043,13 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	struct zspage *dst_zspage = NULL;
>  	unsigned long pages_freed = 0;
>  
> +	/* protect the race between zpage migration and zs_free */
> +	write_lock(&pool->migrate_lock);
> +	/* protect zpage allocation/free */
>  	spin_lock(&class->lock);
>  	while ((src_zspage = isolate_zspage(class, true))) {
> +		/* protect someone accessing the zspage(i.e., zs_map_object) */
> +		migrate_write_lock(src_zspage);
>  
>  		if (!zs_can_compact(class))
>  			break;
> @@ -2087,6 +2058,8 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		cc.s_page = get_first_page(src_zspage);
>  
>  		while ((dst_zspage = isolate_zspage(class, false))) {
> +			migrate_write_lock_nested(dst_zspage);
> +
>  			cc.d_page = get_first_page(dst_zspage);
>  			/*
>  			 * If there is no more space in dst_page, resched

Looking at these two chunks: the page here comes from a list, you
remove that page from that list, and this ensures that you can't lock
the very same pages in reverse order as in:

   migrate_write_lock(dst_zspage);
   …
   	migrate_write_lock(src_zspage);

right?

Sebastian
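
For reference, the "table locks based on handle as a hash value"
fallback mentioned in the commit message above could look roughly like
the sketch below; nothing like this is part of the series and all names
are made up:

#include <linux/hash.h>
#include <linux/log2.h>
#include <linux/spinlock.h>

#define ZS_HANDLE_LOCKS	64	/* arbitrary power of two */

static rwlock_t zs_handle_locks[ZS_HANDLE_LOCKS];

static void zs_handle_locks_init(void)
{
	int i;

	for (i = 0; i < ZS_HANDLE_LOCKS; i++)
		rwlock_init(&zs_handle_locks[i]);
}

/* hash the object handle into one of the bucket locks */
static rwlock_t *zs_handle_lock(unsigned long handle)
{
	return &zs_handle_locks[hash_long(handle, ilog2(ZS_HANDLE_LOCKS))];
}

static void zs_object_access(unsigned long handle)
{
	/*
	 * IO paths (map/unmap/free) would take the read side of the
	 * bucket; migration/compaction of the backing zspage would take
	 * write_lock() on the same bucket.
	 */
	read_lock(zs_handle_lock(handle));
	/* ... access the object ... */
	read_unlock(zs_handle_lock(handle));
}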


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock
  2021-11-10 18:54 ` [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock Minchan Kim
  2021-11-11  9:07   ` Sebastian Andrzej Siewior
@ 2021-11-11 10:13   ` kernel test robot
  1 sibling, 0 replies; 19+ messages in thread
From: kernel test robot @ 2021-11-11 10:13 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: kbuild-all, Linux Memory Management List, Sergey Senozhatsky,
	Minchan Kim

[-- Attachment #1: Type: text/plain, Size: 2138 bytes --]

Hi Minchan,

I love your patch! Yet something to improve:

[auto build test ERROR on v5.15]
[cannot apply to hnaz-mm/master next-20211111]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Minchan-Kim/zsmalloc-remove-bit_spin_lock/20211111-025648
base:   DEBUG invalid remote for branch v5.15 8bb7eca972ad531c9b149c0a51ab43a417385813
config: xtensa-buildonly-randconfig-r004-20211111 (attached as .config)
compiler: xtensa-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/f1a88e64864de6af4e2a560bcf57a8b5f9737404
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Minchan-Kim/zsmalloc-remove-bit_spin_lock/20211111-025648
        git checkout f1a88e64864de6af4e2a560bcf57a8b5f9737404
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=xtensa 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   mm/zsmalloc.c: In function 'migrate_write_lock_nested':
>> mm/zsmalloc.c:1772:9: error: implicit declaration of function 'write_lock_nested'; did you mean 'inode_lock_nested'? [-Werror=implicit-function-declaration]
    1772 |         write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);
         |         ^~~~~~~~~~~~~~~~~
         |         inode_lock_nested
   cc1: some warnings being treated as errors


vim +1772 mm/zsmalloc.c

  1769	
  1770	static void migrate_write_lock_nested(struct zspage *zspage)
  1771	{
> 1772		write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);
  1773	}
  1774	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 37878 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock
  2021-11-11  8:56   ` Sebastian Andrzej Siewior
@ 2021-11-11 23:08     ` Minchan Kim
  0 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-11 23:08 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, Mike Galbraith,
	Thomas Gleixner

On Thu, Nov 11, 2021 at 09:56:58AM +0100, Sebastian Andrzej Siewior wrote:
> On 2021-11-10 10:54:33 [-0800], Minchan Kim wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> 
> The From line used to be
>    From: Mike Galbraith <umgwanakikbuti@gmail.com>
> 
> sorry if I dropped it earlier.

Let me change it in the respin.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock
  2021-11-11  9:07   ` Sebastian Andrzej Siewior
@ 2021-11-11 23:11     ` Minchan Kim
  2021-11-12  7:28       ` Sebastian Andrzej Siewior
  2021-11-12  7:31       ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-11 23:11 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, Thomas Gleixner

On Thu, Nov 11, 2021 at 10:07:27AM +0100, Sebastian Andrzej Siewior wrote:
> On 2021-11-10 10:54:32 [-0800], Minchan Kim wrote:
> > The zsmalloc has used a bit spinlock in the zpage handle to keep the
> > zpage object alive during several operations. However, it causes
> > problems for PREEMPT_RT as well as adding too much complication.
> > 
> > This patch replaces the bit spinlock with the pool->migrate_lock
> > rwlock. It makes the code simpler as well as letting zsmalloc work
> > under PREEMPT_RT.
> > 
> > The drawback is that pool->migrate_lock has coarser granularity than
> > the per-zpage lock, so contention would be higher than before when
> > both IO-related operations (i.e., zsmalloc, zsfree, zs_[map|unmap])
> > and compaction (page/zpage migration) run in parallel (*note: the
> > migrate_lock is an rwlock and the IO-related functions all take the
> > read side, so there is no contention among them). However, the
> > write side is fast enough (the dominant overhead is just the page
> > copy), so it shouldn't affect much. If the lock granularity becomes
> > more of a problem later, we could introduce table locks based on the
> > handle as a hash value.
> > 
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> …
> > index b8b098be92fa..5d4c4d254679 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1789,6 +1767,11 @@ static void migrate_write_lock(struct zspage *zspage)
> >  	write_lock(&zspage->lock);
> >  }
> >  
> > +static void migrate_write_lock_nested(struct zspage *zspage)
> > +{
> > +	write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);
> 
> I don't have this in my tree. 

I forgot it. I have appended it at the tail of the thread.
I will also include it in the next revision.

> 
> > +}
> > +
> >  static void migrate_write_unlock(struct zspage *zspage)
> >  {
> >  	write_unlock(&zspage->lock);
> …
> > @@ -2077,8 +2043,13 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >  	struct zspage *dst_zspage = NULL;
> >  	unsigned long pages_freed = 0;
> >  
> > +	/* protect the race between zpage migration and zs_free */
> > +	write_lock(&pool->migrate_lock);
> > +	/* protect zpage allocation/free */
> >  	spin_lock(&class->lock);
> >  	while ((src_zspage = isolate_zspage(class, true))) {
> > +		/* protect someone accessing the zspage(i.e., zs_map_object) */
> > +		migrate_write_lock(src_zspage);
> >  
> >  		if (!zs_can_compact(class))
> >  			break;
> > @@ -2087,6 +2058,8 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >  		cc.s_page = get_first_page(src_zspage);
> >  
> >  		while ((dst_zspage = isolate_zspage(class, false))) {
> > +			migrate_write_lock_nested(dst_zspage);
> > +
> >  			cc.d_page = get_first_page(dst_zspage);
> >  			/*
> >  			 * If there is no more space in dst_page, resched
> 
> Looking at these two chunks: the page here comes from a list, you
> remove that page from that list, and this ensures that you can't lock
> the very same pages in reverse order as in:
> 
>    migrate_write_lock(dst_zspage);
>    …
>    	migrate_write_lock(src_zspage);
> 
> right?

Sure.

From e0bfc5185bbd15c651a7a367b6d053b8c88b1e01 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Tue, 19 Oct 2021 15:34:09 -0700
Subject: [PATCH] locking/rwlocks: introduce write_lock_nested

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rwlock.h          | 6 ++++++
 include/linux/rwlock_api_smp.h  | 9 +++++++++
 include/linux/spinlock_api_up.h | 1 +
 kernel/locking/spinlock.c       | 6 ++++++
 4 files changed, 22 insertions(+)

diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 7ce9a51ae5c0..93086de7bf9e 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -70,6 +70,12 @@ do {								\
 #define write_lock(lock)	_raw_write_lock(lock)
 #define read_lock(lock)		_raw_read_lock(lock)
 
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#define write_lock_nested(lock, subclass)	_raw_write_lock_nested(lock, subclass)
+#else
+#define write_lock_nested(lock, subclass)	_raw_write_lock(lock)
+#endif
+
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
 
 #define read_lock_irqsave(lock, flags)			\
diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h
index abfb53ab11be..e0c866177c03 100644
--- a/include/linux/rwlock_api_smp.h
+++ b/include/linux/rwlock_api_smp.h
@@ -17,6 +17,7 @@
 
 void __lockfunc _raw_read_lock(rwlock_t *lock)		__acquires(lock);
 void __lockfunc _raw_write_lock(rwlock_t *lock)		__acquires(lock);
+void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass)	__acquires(lock);
 void __lockfunc _raw_read_lock_bh(rwlock_t *lock)	__acquires(lock);
 void __lockfunc _raw_write_lock_bh(rwlock_t *lock)	__acquires(lock);
 void __lockfunc _raw_read_lock_irq(rwlock_t *lock)	__acquires(lock);
@@ -46,6 +47,7 @@ _raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
 
 #ifdef CONFIG_INLINE_WRITE_LOCK
 #define _raw_write_lock(lock) __raw_write_lock(lock)
+#define _raw_write_lock_nested(lock, subclass) __raw_write_lock_nested(lock, subclass)
 #endif
 
 #ifdef CONFIG_INLINE_READ_LOCK_BH
@@ -211,6 +213,13 @@ static inline void __raw_write_lock(rwlock_t *lock)
 	LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
 }
 
+static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass)
+{
+	preempt_disable();
+	rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
+	LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
+}
+
 #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */
 
 static inline void __raw_write_unlock(rwlock_t *lock)
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index d0d188861ad6..b8ba00ccccde 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -59,6 +59,7 @@
 #define _raw_spin_lock_nested(lock, subclass)	__LOCK(lock)
 #define _raw_read_lock(lock)			__LOCK(lock)
 #define _raw_write_lock(lock)			__LOCK(lock)
+#define _raw_write_lock_nested(lock, subclass)	__LOCK(lock)
 #define _raw_spin_lock_bh(lock)			__LOCK_BH(lock)
 #define _raw_read_lock_bh(lock)			__LOCK_BH(lock)
 #define _raw_write_lock_bh(lock)		__LOCK_BH(lock)
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index c5830cfa379a..22969ec69288 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -300,6 +300,12 @@ void __lockfunc _raw_write_lock(rwlock_t *lock)
 	__raw_write_lock(lock);
 }
 EXPORT_SYMBOL(_raw_write_lock);
+
+void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass)
+{
+	__raw_write_lock_nested(lock, subclass);
+}
+EXPORT_SYMBOL(_raw_write_lock_nested);
 #endif
 
 #ifndef CONFIG_INLINE_WRITE_LOCK_IRQSAVE
-- 
2.34.0.rc1.387.gb447b232ab-goog
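
As a usage note, the new primitive only matters for lockdep: when two
rwlocks of the same lock class are taken at once, as __zs_compact()
does with the source and destination zspage locks in the hunks quoted
above, the second acquisition is annotated with a subclass so lockdep
does not report a false positive. A minimal sketch (the function and
parameter names are illustrative; the caller must still guarantee the
two locks can never be taken in the reverse order):

#include <linux/spinlock.h>
#include <linux/lockdep.h>

static void lock_src_and_dst(rwlock_t *src, rwlock_t *dst)
{
	/* both locks belong to the same lock class */
	write_lock(src);
	/*
	 * Annotate the second acquisition as intentional single-depth
	 * nesting; without CONFIG_DEBUG_LOCK_ALLOC this falls back to a
	 * plain write_lock().
	 */
	write_lock_nested(dst, SINGLE_DEPTH_NESTING);

	/* ... move data between the two protected objects ... */

	write_unlock(dst);
	write_unlock(src);
}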



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock
  2021-11-11 23:11     ` Minchan Kim
@ 2021-11-12  7:28       ` Sebastian Andrzej Siewior
  2021-11-12  7:31       ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-11-12  7:28 UTC (permalink / raw)
  To: Minchan Kim; +Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, Thomas Gleixner

On 2021-11-11 15:11:34 [-0800], Minchan Kim wrote:
> From e0bfc5185bbd15c651a7a367b6d053b8c88b1e01 Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@kernel.org>
> Date: Tue, 19 Oct 2021 15:34:09 -0700
> Subject: [PATCH] locking/rwlocks: introduce write_lock_nested
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  include/linux/rwlock.h          | 6 ++++++
>  include/linux/rwlock_api_smp.h  | 9 +++++++++
>  include/linux/spinlock_api_up.h | 1 +
>  kernel/locking/spinlock.c       | 6 ++++++
>  4 files changed, 22 insertions(+)
> 

This looks about right. Could you also please wire up PREEMPT_RT's
version of it? That would be in
	include/linux/rwlock_rt.h
	kernel/locking/spinlock_rt.c

Sebastian
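
A rough sketch of what that wiring might look like, assuming the RT
side simply mirrors the existing rt_write_lock() helpers; the function
body below is a guess based on that, not part of any posted patch:

/* include/linux/rwlock_rt.h */
extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass);

#define write_lock_nested(lock, subclass)		\
	do {						\
		typecheck(rwlock_t *, lock);		\
		rt_write_lock_nested(lock, subclass);	\
	} while (0)

/* kernel/locking/spinlock_rt.c */
void rt_write_lock_nested(rwlock_t *rwlock, int subclass)
{
	/* assumed: same as rt_write_lock(), only the lockdep subclass differs */
	rwlock_acquire(&rwlock->dep_map, subclass, 0, _RET_IP_);
	rwbase_write_lock(&rwlock->rwbase, TASK_RTLOCK_WAIT);
	rcu_read_lock();
	migrate_disable();
}
EXPORT_SYMBOL(rt_write_lock_nested);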


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock
  2021-11-11 23:11     ` Minchan Kim
  2021-11-12  7:28       ` Sebastian Andrzej Siewior
@ 2021-11-12  7:31       ` Sebastian Andrzej Siewior
  2021-11-12 22:10         ` Minchan Kim
  1 sibling, 1 reply; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-11-12  7:31 UTC (permalink / raw)
  To: Minchan Kim; +Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, Thomas Gleixner

On 2021-11-11 15:11:34 [-0800], Minchan Kim wrote:
> > > @@ -2077,8 +2043,13 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> > >  	struct zspage *dst_zspage = NULL;
> > >  	unsigned long pages_freed = 0;
> > >  
> > > +	/* protect the race between zpage migration and zs_free */
> > > +	write_lock(&pool->migrate_lock);
> > > +	/* protect zpage allocation/free */
> > >  	spin_lock(&class->lock);
> > >  	while ((src_zspage = isolate_zspage(class, true))) {
> > > +		/* protect someone accessing the zspage(i.e., zs_map_object) */
> > > +		migrate_write_lock(src_zspage);
> > >  
> > >  		if (!zs_can_compact(class))
> > >  			break;
> > > @@ -2087,6 +2058,8 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> > >  		cc.s_page = get_first_page(src_zspage);
> > >  
> > >  		while ((dst_zspage = isolate_zspage(class, false))) {
> > > +			migrate_write_lock_nested(dst_zspage);
> > > +
> > >  			cc.d_page = get_first_page(dst_zspage);
> > >  			/*
> > >  			 * If there is no more space in dst_page, resched
> > 
> > Looking at these two chunks: the page here comes from a list, you
> > remove that page from that list, and this ensures that you can't lock
> > the very same pages in reverse order as in:
> > 
> >    migrate_write_lock(dst_zspage);
> >    …
> >    	migrate_write_lock(src_zspage);
> > 
> > right?
> 
> Sure.

Out of curiosity: why do you need to lock it then if you grab it from
the list and there is no other reference to it? Is it because the page
might be referenced by other means but only by a reader?

Sebastian


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock
  2021-11-12  7:31       ` Sebastian Andrzej Siewior
@ 2021-11-12 22:10         ` Minchan Kim
  0 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2021-11-12 22:10 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, Thomas Gleixner

On Fri, Nov 12, 2021 at 08:31:57AM +0100, Sebastian Andrzej Siewior wrote:
> On 2021-11-11 15:11:34 [-0800], Minchan Kim wrote:
> > > > @@ -2077,8 +2043,13 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> > > >  	struct zspage *dst_zspage = NULL;
> > > >  	unsigned long pages_freed = 0;
> > > >  
> > > > +	/* protect the race between zpage migration and zs_free */
> > > > +	write_lock(&pool->migrate_lock);
> > > > +	/* protect zpage allocation/free */
> > > >  	spin_lock(&class->lock);
> > > >  	while ((src_zspage = isolate_zspage(class, true))) {
> > > > +		/* protect someone accessing the zspage(i.e., zs_map_object) */
> > > > +		migrate_write_lock(src_zspage);
> > > >  
> > > >  		if (!zs_can_compact(class))
> > > >  			break;
> > > > @@ -2087,6 +2058,8 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> > > >  		cc.s_page = get_first_page(src_zspage);
> > > >  
> > > >  		while ((dst_zspage = isolate_zspage(class, false))) {
> > > > +			migrate_write_lock_nested(dst_zspage);
> > > > +
> > > >  			cc.d_page = get_first_page(dst_zspage);
> > > >  			/*
> > > >  			 * If there is no more space in dst_page, resched
> > > 
> > > Looking at these two chunks: the page here comes from a list, you
> > > remove that page from that list, and this ensures that you can't lock
> > > the very same pages in reverse order as in:
> > > 
> > >    migrate_write_lock(dst_zspage);
> > >    …
> > >    	migrate_write_lock(src_zspage);
> > > 
> > > right?
> > 
> > Sure.
> 
> Out of curiosity: why do you need to lock it then if you grab it from
> the list and there is no other reference to it? Is it because the page
> might be referenced by other means but only by a reader?

Yub, zs_map_object.
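
In other words, the zspage is still reachable through the object
handles even after it was taken off the class list, so the write lock
is what makes compaction wait for such readers. A tiny sketch using the
helpers from the series (the wrapper function names here are made up):

static void reader_side(struct zspage *zspage)
{
	/* zs_map_object()-like: the zspage is reached via an object handle */
	migrate_read_lock(zspage);
	/* ... access the object backed by this zspage ... */
	migrate_read_unlock(zspage);
}

static void writer_side(struct zspage *zspage)
{
	/* compaction: the zspage was already removed from the class list */
	migrate_write_lock(zspage);	/* still has to wait for readers */
	/* ... migrate objects out of the zspage ... */
	migrate_write_unlock(zspage);
}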


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock
  2021-11-10 18:54 ` [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock Minchan Kim
  2021-11-11  8:56   ` Sebastian Andrzej Siewior
@ 2021-11-15  3:56   ` Davidlohr Bueso
  2021-11-15  7:27     ` Sebastian Andrzej Siewior
  1 sibling, 1 reply; 19+ messages in thread
From: Davidlohr Bueso @ 2021-11-15  3:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Sergey Senozhatsky, linux-mm,
	Sebastian Andrzej Siewior, Mike Galbraith, Thomas Gleixner

On Wed, 10 Nov 2021, Minchan Kim wrote:

>From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
>The usage of get_cpu_var() in zs_map_object() is problematic because
>it disables preemption and makes it impossible to acquire any sleeping
>lock on PREEMPT_RT such as a spinlock_t.
>Replace the get_cpu_var() usage with a local_lock_t which is embedded in
>struct mapping_area. It ensures that access to the struct is
>synchronized against all users on the same CPU.

On a similar note (and sorry for hijacking the thread) I've been looking
at the remaining users of get_cpu_light() in the hope of seeing them
upstreamed and removed from the out-of-tree patches now that we have
local locks.

There are six and, afaict, they can be addressed either by using local
locks:

1. netif_rx. We can add a local_lock_t to softnet_data, which is the pcpu
data structure used by enqueue_to_backlog(), then replace both preempt_disable()
and get_cpu() with local_lock(&softnet_data.lock).

2. blk-mq. Such scenarios rely on implicitly disabling preemption to guarantee
running __blk_mq_run_hw_queue() on the right CPU. But we can use a local lock
for synchronous hw queue runs, thus adding a local_lock_t to struct blk_mq_hw_ctx.

3. raid5. We can add a local_lock_t to struct raid5_percpu.

4. scsi/fcoe. There are 3 things here to consider: tx stats management,
fcoe_percpu_s and the exchange manager pool. For the first two, adding
a local_lock_t to fc_stats and fcoe_percpu_s should do it, but we would
have to do a migrate_disable() for the pool case in fc_exch_em_alloc(),
which, yes, is ugly... pool->lock is already there.

... or flat-out migrate_disabling when the per-CPU data structure already
has a spinlock it acquires anyway, which will do the serialization:

5. vmalloc. Since we already have a vmap_block_queue.lock

6. sunrpc. Since we already have a pool->sp_lock.

I've got patches for these but perhaps I'm missing a fundamental reason as
to why these are still out-of-tree and the get_cpu_light() family is still around.
For example, direct migrate_disable() calls are frowned upon and could well
be unacceptable - despite its recent user growth upstream.

Thoughts?

Thanks,
Davidlohr


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock
  2021-11-15  3:56   ` Davidlohr Bueso
@ 2021-11-15  7:27     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 19+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-11-15  7:27 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton, Sergey Senozhatsky, linux-mm,
	Mike Galbraith, Thomas Gleixner

On 2021-11-14 19:56:47 [-0800], Davidlohr Bueso wrote:
> ... or flat-out migrate_disabling when the per-CPU data structure already
> has a spinlock it acquires anyway, which will do the serialization:
> 
> 5. vmalloc. Since we already have a vmap_block_queue.lock
> 
> 6. sunrpc. Since we already have a pool->sp_lock.
> 
> I've got patches for these but perhaps I'm missing a fundamental reason as
> to why these are still out-of-tree and the get_cpu_light() family is still around.
> For example, direct migrate_disable() calls are frowned upon and could well
> be unacceptable - despite its recent user growth upstream.
> 
> Thoughts?

I think tglx is looking into this to get it done differently. We had a
few more users of get_cpu_light() and we got rid of a few of them. We
also had more local_lock() users but we got rid of all but two I think
before local_lock() was suggested upstream.
From RT's point of view, get_cpu() and get_cpu_var() almost always lead
to trouble. As for the vmalloc example from above, I don't think there
is a need for get_cpu() or migrate_disable() or anything at all because
the per-CPU data structure itself is locked. The window of possible CPU
migration is small, so even if it happens, the result is correct.

> Thanks,
> Davidlohr

Sebastian


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-11-15  7:28 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-10 18:54 [PATCH 0/8] zsmalloc: remove bit_spin_lock Minchan Kim
2021-11-10 18:54 ` [PATCH 1/8] zsmalloc: introduce some helper functions Minchan Kim
2021-11-10 18:54 ` [PATCH 2/8] zsmalloc: rename zs_stat_type to class_stat_type Minchan Kim
2021-11-10 18:54 ` [PATCH 3/8] zsmalloc: decouple class actions from zspage works Minchan Kim
2021-11-10 18:54 ` [PATCH 4/8] zsmalloc: introduce obj_allocated Minchan Kim
2021-11-10 18:54 ` [PATCH 5/8] zsmalloc: move huge compressed obj from page to zspage Minchan Kim
2021-11-10 18:54 ` [PATCH 6/8] zsmalloc: remove zspage isolation for migration Minchan Kim
2021-11-10 18:54 ` [PATCH 7/8] zsmalloc: replace per zpage lock with pool->migrate_lock Minchan Kim
2021-11-11  9:07   ` Sebastian Andrzej Siewior
2021-11-11 23:11     ` Minchan Kim
2021-11-12  7:28       ` Sebastian Andrzej Siewior
2021-11-12  7:31       ` Sebastian Andrzej Siewior
2021-11-12 22:10         ` Minchan Kim
2021-11-11 10:13   ` kernel test robot
2021-11-10 18:54 ` [PATCH 8/8] zsmalloc: replace get_cpu_var with local_lock Minchan Kim
2021-11-11  8:56   ` Sebastian Andrzej Siewior
2021-11-11 23:08     ` Minchan Kim
2021-11-15  3:56   ` Davidlohr Bueso
2021-11-15  7:27     ` Sebastian Andrzej Siewior

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).