linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/9] zsmalloc: remove bit_spin_lock
@ 2021-11-15 18:59 Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 1/9] zsmalloc: introduce some helper functions Minchan Kim
                   ` (8 more replies)
  0 siblings, 9 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim

zsmalloc has used a bit_spin_lock to minimize space overhead, since
it is a zpage-granularity lock. However, it keeps zsmalloc from
working under PREEMPT_RT and adds too much complication.

This patchset replaces the bit_spin_lock with a per-pool rwlock. It
also removes the unnecessary zspage isolation logic from the class,
which was the other source of excess complication in zsmalloc.
The last patch changes get_cpu_var to local_lock so that the code
works under PREEMPT_RT.
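
As a rough sketch of where the series ends up (simplified, not taken
verbatim from any patch below; the io_side()/migration_side() helpers
are illustrative only, see patch 8 for the real code): the IO paths
take the new per-pool rwlock for read, while page/zspage migration
takes it for write.

	#include <linux/spinlock.h>	/* rwlock_t */

	struct zs_pool {
		/* ... existing fields ... */
		rwlock_t migrate_lock;	/* protects page/zspage migration */
	};

	/* IO side, e.g. zs_map_object()/zs_free(): handle -> zspage stays stable */
	static void io_side(struct zs_pool *pool)
	{
		read_lock(&pool->migrate_lock);
		/* ... resolve the handle, then take finer-grained locks ... */
		read_unlock(&pool->migrate_lock);
	}

	/* migration/compaction side: excludes all handle -> zspage lookups */
	static void migration_side(struct zs_pool *pool)
	{
		write_lock(&pool->migrate_lock);
		/* ... move zpages and update handles ... */
		write_unlock(&pool->migrate_lock);
	}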

Mike Galbraith (1):
  zsmalloc: replace get_cpu_var with local_lock

Minchan Kim (8):
  zsmalloc: introduce some helper functions
  zsmalloc: rename zs_stat_type to class_stat_type
  zsmalloc: decouple class actions from zspage works
  zsmalloc: introduce obj_allocated
  zsmalloc: move huge compressed obj from page to zspage
  zsmalloc: remove zspage isolation for migration
  locking/rwlocks: introduce write_lock_nested
  zsmalloc: replace per zpage lock with pool->migrate_lock

 include/linux/rwlock.h          |   6 +
 include/linux/rwlock_api_smp.h  |   9 +
 include/linux/rwlock_rt.h       |   6 +
 include/linux/spinlock_api_up.h |   1 +
 kernel/locking/spinlock.c       |   6 +
 kernel/locking/spinlock_rt.c    |  12 +
 mm/zsmalloc.c                   | 529 ++++++++++++--------------------
 7 files changed, 228 insertions(+), 341 deletions(-)

-- 

* from v1 - https://lore.kernel.org/linux-mm/20211110185433.1981097-1-minchan@kernel.org/
  * add write_lock_nested for rwlock
  * change the From: line to "Mike Galbraith" - bigeasy@

2.34.0.rc1.387.gb447b232ab-goog



* [PATCH v2 1/9] zsmalloc: introduce some helper functions
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 2/9] zsmalloc: rename zs_stat_type to class_stat_type Minchan Kim
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim

get_zspage_mapping returns fullness as well as class_idx. However,
the fullness is usually not used since it can be stale in some
contexts. That is misleading and generates unnecessary instructions,
so this patch introduces zspage_class.

obj_to_location also produces both the page and the index, but
callers do not always need the index, so this patch introduces
obj_to_page.
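
For illustration, the typical call-site change looks like this (a
sketch drawn from the hunks below):

	/* before */
	get_zspage_mapping(zspage, &class_idx, &fg);	/* fg often unused */
	class = pool->size_class[class_idx];

	/* after */
	class = zspage_class(pool, zspage);

	/* and where only the page is needed */
	obj_to_page(obj, &f_page);	/* instead of obj_to_location(obj, &f_page, &f_objidx) */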

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 54 ++++++++++++++++++++++-----------------------------
 1 file changed, 23 insertions(+), 31 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b897ce3b399a..f8c63bacd22e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -517,6 +517,12 @@ static void get_zspage_mapping(struct zspage *zspage,
 	*class_idx = zspage->class;
 }
 
+static struct size_class *zspage_class(struct zs_pool *pool,
+					     struct zspage *zspage)
+{
+	return pool->size_class[zspage->class];
+}
+
 static void set_zspage_mapping(struct zspage *zspage,
 				unsigned int class_idx,
 				enum fullness_group fullness)
@@ -844,6 +850,12 @@ static void obj_to_location(unsigned long obj, struct page **page,
 	*obj_idx = (obj & OBJ_INDEX_MASK);
 }
 
+static void obj_to_page(unsigned long obj, struct page **page)
+{
+	obj >>= OBJ_TAG_BITS;
+	*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+}
+
 /**
  * location_to_obj - get obj value encoded from (<page>, <obj_idx>)
  * @page: page object resides in zspage
@@ -1246,8 +1258,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
-	unsigned int class_idx;
-	enum fullness_group fg;
 	struct size_class *class;
 	struct mapping_area *area;
 	struct page *pages[2];
@@ -1270,8 +1280,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	/* migration cannot move any subpage in this zspage */
 	migrate_read_lock(zspage);
 
-	get_zspage_mapping(zspage, &class_idx, &fg);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
 	area = &get_cpu_var(zs_map_area);
@@ -1304,16 +1313,13 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
-	unsigned int class_idx;
-	enum fullness_group fg;
 	struct size_class *class;
 	struct mapping_area *area;
 
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &page, &obj_idx);
 	zspage = get_zspage(page);
-	get_zspage_mapping(zspage, &class_idx, &fg);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
 	area = this_cpu_ptr(&zs_map_area);
@@ -1491,8 +1497,6 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	struct zspage *zspage;
 	struct page *f_page;
 	unsigned long obj;
-	unsigned int f_objidx;
-	int class_idx;
 	struct size_class *class;
 	enum fullness_group fullness;
 	bool isolated;
@@ -1502,13 +1506,11 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 
 	pin_tag(handle);
 	obj = handle_to_obj(handle);
-	obj_to_location(obj, &f_page, &f_objidx);
+	obj_to_page(obj, &f_page);
 	zspage = get_zspage(f_page);
 
 	migrate_read_lock(zspage);
-
-	get_zspage_mapping(zspage, &class_idx, &fullness);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	obj_free(class, obj);
@@ -1866,8 +1868,6 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fullness;
 	struct zspage *zspage;
 	struct address_space *mapping;
 
@@ -1880,15 +1880,10 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 
 	zspage = get_zspage(page);
 
-	/*
-	 * Without class lock, fullness could be stale while class_idx is okay
-	 * because class_idx is constant unless page is freed so we should get
-	 * fullness again under class lock.
-	 */
-	get_zspage_mapping(zspage, &class_idx, &fullness);
 	mapping = page_mapping(page);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	if (get_zspage_inuse(zspage) == 0) {
@@ -1907,6 +1902,9 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	 * size_class to prevent further object allocation from the zspage.
 	 */
 	if (!list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
+		enum fullness_group fullness;
+		unsigned int class_idx;
+
 		get_zspage_mapping(zspage, &class_idx, &fullness);
 		atomic_long_inc(&pool->isolated_pages);
 		remove_zspage(class, zspage, fullness);
@@ -1923,8 +1921,6 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fullness;
 	struct zspage *zspage;
 	struct page *dummy;
 	void *s_addr, *d_addr, *addr;
@@ -1949,9 +1945,8 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	/* Concurrent compactor cannot migrate any subpage in zspage */
 	migrate_write_lock(zspage);
-	get_zspage_mapping(zspage, &class_idx, &fullness);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	offset = get_first_obj_offset(page);
 
 	spin_lock(&class->lock);
@@ -2049,8 +2044,6 @@ static void zs_page_putback(struct page *page)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fg;
 	struct address_space *mapping;
 	struct zspage *zspage;
 
@@ -2058,10 +2051,9 @@ static void zs_page_putback(struct page *page)
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-	get_zspage_mapping(zspage, &class_idx, &fg);
 	mapping = page_mapping(page);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	dec_zspage_isolation(zspage);
-- 
2.34.0.rc1.387.gb447b232ab-goog



* [PATCH v2 2/9] zsmalloc: rename zs_stat_type to class_stat_type
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 1/9] zsmalloc: introduce some helper functions Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 3/9] zsmalloc: decouple class actions from zspage works Minchan Kim
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim

The stat is a per-class stat, not a per-zspage one, so rename it.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f8c63bacd22e..c149ccf734ba 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -158,7 +158,7 @@ enum fullness_group {
 	NR_ZS_FULLNESS,
 };
 
-enum zs_stat_type {
+enum class_stat_type {
 	CLASS_EMPTY,
 	CLASS_ALMOST_EMPTY,
 	CLASS_ALMOST_FULL,
@@ -549,21 +549,21 @@ static int get_size_class_index(int size)
 	return min_t(int, ZS_SIZE_CLASSES - 1, idx);
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
-static inline void zs_stat_inc(struct size_class *class,
+/* type can be of enum type class_stat_type or fullness_group */
+static inline void class_stat_inc(struct size_class *class,
 				int type, unsigned long cnt)
 {
 	class->stats.objs[type] += cnt;
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
-static inline void zs_stat_dec(struct size_class *class,
+/* type can be of enum type class_stat_type or fullness_group */
+static inline void class_stat_dec(struct size_class *class,
 				int type, unsigned long cnt)
 {
 	class->stats.objs[type] -= cnt;
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
+/* type can be of enum type class_stat_type or fullness_group */
 static inline unsigned long zs_stat_get(struct size_class *class,
 				int type)
 {
@@ -725,7 +725,7 @@ static void insert_zspage(struct size_class *class,
 {
 	struct zspage *head;
 
-	zs_stat_inc(class, fullness, 1);
+	class_stat_inc(class, fullness, 1);
 	head = list_first_entry_or_null(&class->fullness_list[fullness],
 					struct zspage, list);
 	/*
@@ -750,7 +750,7 @@ static void remove_zspage(struct size_class *class,
 	VM_BUG_ON(is_zspage_isolated(zspage));
 
 	list_del_init(&zspage->list);
-	zs_stat_dec(class, fullness, 1);
+	class_stat_dec(class, fullness, 1);
 }
 
 /*
@@ -964,7 +964,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	cache_free_zspage(pool, zspage);
 
-	zs_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
 	atomic_long_sub(class->pages_per_zspage,
 					&pool->pages_allocated);
 }
@@ -1394,7 +1394,7 @@ static unsigned long obj_malloc(struct size_class *class,
 
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
-	zs_stat_inc(class, OBJ_USED, 1);
+	class_stat_inc(class, OBJ_USED, 1);
 
 	obj = location_to_obj(m_page, obj);
 
@@ -1458,7 +1458,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	record_obj(handle, obj);
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
-	zs_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -1489,7 +1489,7 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
-	zs_stat_dec(class, OBJ_USED, 1);
+	class_stat_dec(class, OBJ_USED, 1);
 }
 
 void zs_free(struct zs_pool *pool, unsigned long handle)
-- 
2.34.0.rc1.387.gb447b232ab-goog



* [PATCH v2 3/9] zsmalloc: decouple class actions from zspage works
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 1/9] zsmalloc: introduce some helper functions Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 2/9] zsmalloc: rename zs_stat_type to class_stat_type Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 4/9] zsmalloc: introduce obj_allocated Minchan Kim
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim

This patch moves the class stat update out of obj_malloc since it is
not related to the zspage operation. This is preparation for the new
lock scheme introduced later in the series.
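
For illustration, the zs_malloc() fast path ends up looking roughly
like this after the patch (sketch only; see the hunks below):

	spin_lock(&class->lock);
	zspage = find_get_zspage(class);
	obj = obj_malloc(pool, zspage, handle);	/* no class stat update inside anymore */
	fix_fullness_group(class, zspage);
	record_obj(handle, obj);
	class_stat_inc(class, OBJ_USED, 1);	/* caller updates the stat under class->lock */
	spin_unlock(&class->lock);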

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c149ccf734ba..7a14090e4a53 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1360,17 +1360,19 @@ size_t zs_huge_class_size(struct zs_pool *pool)
 }
 EXPORT_SYMBOL_GPL(zs_huge_class_size);
 
-static unsigned long obj_malloc(struct size_class *class,
+static unsigned long obj_malloc(struct zs_pool *pool,
 				struct zspage *zspage, unsigned long handle)
 {
 	int i, nr_page, offset;
 	unsigned long obj;
 	struct link_free *link;
+	struct size_class *class;
 
 	struct page *m_page;
 	unsigned long m_offset;
 	void *vaddr;
 
+	class = pool->size_class[zspage->class];
 	handle |= OBJ_ALLOCATED_TAG;
 	obj = get_freeobj(zspage);
 
@@ -1394,7 +1396,6 @@ static unsigned long obj_malloc(struct size_class *class,
 
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
-	class_stat_inc(class, OBJ_USED, 1);
 
 	obj = location_to_obj(m_page, obj);
 
@@ -1433,10 +1434,11 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	spin_lock(&class->lock);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
-		obj = obj_malloc(class, zspage, handle);
+		obj = obj_malloc(pool, zspage, handle);
 		/* Now move the zspage to another fullness group, if required */
 		fix_fullness_group(class, zspage);
 		record_obj(handle, obj);
+		class_stat_inc(class, OBJ_USED, 1);
 		spin_unlock(&class->lock);
 
 		return handle;
@@ -1451,7 +1453,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	}
 
 	spin_lock(&class->lock);
-	obj = obj_malloc(class, zspage, handle);
+	obj = obj_malloc(pool, zspage, handle);
 	newfg = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, newfg);
 	set_zspage_mapping(zspage, class->index, newfg);
@@ -1459,6 +1461,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
 	class_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_inc(class, OBJ_USED, 1);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -1468,7 +1471,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 }
 EXPORT_SYMBOL_GPL(zs_malloc);
 
-static void obj_free(struct size_class *class, unsigned long obj)
+static void obj_free(int class_size, unsigned long obj)
 {
 	struct link_free *link;
 	struct zspage *zspage;
@@ -1478,7 +1481,7 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	void *vaddr;
 
 	obj_to_location(obj, &f_page, &f_objidx);
-	f_offset = (class->size * f_objidx) & ~PAGE_MASK;
+	f_offset = (class_size * f_objidx) & ~PAGE_MASK;
 	zspage = get_zspage(f_page);
 
 	vaddr = kmap_atomic(f_page);
@@ -1489,7 +1492,6 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
-	class_stat_dec(class, OBJ_USED, 1);
 }
 
 void zs_free(struct zs_pool *pool, unsigned long handle)
@@ -1513,7 +1515,8 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
-	obj_free(class, obj);
+	obj_free(class->size, obj);
+	class_stat_dec(class, OBJ_USED, 1);
 	fullness = fix_fullness_group(class, zspage);
 	if (fullness != ZS_EMPTY) {
 		migrate_read_unlock(zspage);
@@ -1671,7 +1674,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		}
 
 		used_obj = handle_to_obj(handle);
-		free_obj = obj_malloc(class, get_zspage(d_page), handle);
+		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
 		/*
@@ -1683,7 +1686,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj |= BIT(HANDLE_PIN_BIT);
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
-		obj_free(class, used_obj);
+		obj_free(class->size, used_obj);
 	}
 
 	/* Remember last position in this iteration */
-- 
2.34.0.rc1.387.gb447b232ab-goog



* [PATCH v2 4/9] zsmalloc: introduce obj_allocated
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (2 preceding siblings ...)
  2021-11-15 18:59 ` [PATCH v2 3/9] zsmalloc: decouple class actions from zspage works Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 5/9] zsmalloc: move huge compressed obj from page to zspage Minchan Kim
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim

The usage pattern for obj_to_head is to check whether the zpage
is allocated or not. Thus, introduce obj_allocated.
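
For illustration, callers change from open-coding the tag check to
the new helper (a sketch drawn from the hunks below):

	/* before */
	head = obj_to_head(page, addr);
	if (head & OBJ_ALLOCATED_TAG) {
		handle = head & ~OBJ_ALLOCATED_TAG;
		/* ... use handle ... */
	}

	/* after */
	if (obj_allocated(page, addr, &handle)) {
		/* ... use handle ... */
	}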

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7a14090e4a53..6ca130c0f7dc 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -877,13 +877,21 @@ static unsigned long handle_to_obj(unsigned long handle)
 	return *(unsigned long *)handle;
 }
 
-static unsigned long obj_to_head(struct page *page, void *obj)
+static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
 {
+	unsigned long handle;
+
 	if (unlikely(PageHugeObject(page))) {
 		VM_BUG_ON_PAGE(!is_first_page(page), page);
-		return page->index;
+		handle = page->index;
 	} else
-		return *(unsigned long *)obj;
+		handle = *(unsigned long *)obj;
+
+	if (!(handle & OBJ_ALLOCATED_TAG))
+		return false;
+
+	*phandle = handle & ~OBJ_ALLOCATED_TAG;
+	return true;
 }
 
 static inline int testpin_tag(unsigned long handle)
@@ -1606,7 +1614,6 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 static unsigned long find_alloced_obj(struct size_class *class,
 					struct page *page, int *obj_idx)
 {
-	unsigned long head;
 	int offset = 0;
 	int index = *obj_idx;
 	unsigned long handle = 0;
@@ -1616,9 +1623,7 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
-		head = obj_to_head(page, addr + offset);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr + offset, &handle)) {
 			if (trypin_tag(handle))
 				break;
 			handle = 0;
@@ -1928,7 +1933,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	struct page *dummy;
 	void *s_addr, *d_addr, *addr;
 	int offset, pos;
-	unsigned long handle, head;
+	unsigned long handle;
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 	int ret = -EAGAIN;
@@ -1964,9 +1969,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	pos = offset;
 	s_addr = kmap_atomic(page);
 	while (pos < PAGE_SIZE) {
-		head = obj_to_head(page, s_addr + pos);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, s_addr + pos, &handle)) {
 			if (!trypin_tag(handle))
 				goto unpin_objects;
 		}
@@ -1982,9 +1985,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	for (addr = s_addr + offset; addr < s_addr + pos;
 					addr += class->size) {
-		head = obj_to_head(page, addr);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr, &handle)) {
 			BUG_ON(!testpin_tag(handle));
 
 			old_obj = handle_to_obj(handle);
@@ -2029,9 +2030,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 unpin_objects:
 	for (addr = s_addr + offset; addr < s_addr + pos;
 						addr += class->size) {
-		head = obj_to_head(page, addr);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr, &handle)) {
 			BUG_ON(!testpin_tag(handle));
 			unpin_tag(handle);
 		}
-- 
2.34.0.rc1.387.gb447b232ab-goog



* [PATCH v2 5/9] zsmalloc: move huge compressed obj from page to zspage
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (3 preceding siblings ...)
  2021-11-15 18:59 ` [PATCH v2 4/9] zsmalloc: introduce obj_allocated Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 6/9] zsmalloc: remove zspage isolation for migration Minchan Kim
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim

The flag describes the zspage, not an individual page. Let's move it
to struct zspage.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 50 ++++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 6ca130c0f7dc..26e571cc354e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -121,6 +121,7 @@
 #define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS)
 #define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
 
+#define HUGE_BITS	1
 #define FULLNESS_BITS	2
 #define CLASS_BITS	8
 #define ISOLATED_BITS	3
@@ -213,22 +214,6 @@ struct size_class {
 	struct zs_size_stat stats;
 };
 
-/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
-static void SetPageHugeObject(struct page *page)
-{
-	SetPageOwnerPriv1(page);
-}
-
-static void ClearPageHugeObject(struct page *page)
-{
-	ClearPageOwnerPriv1(page);
-}
-
-static int PageHugeObject(struct page *page)
-{
-	return PageOwnerPriv1(page);
-}
-
 /*
  * Placed within free objects to form a singly linked list.
  * For every zspage, zspage->freeobj gives head of this list.
@@ -278,6 +263,7 @@ struct zs_pool {
 
 struct zspage {
 	struct {
+		unsigned int huge:HUGE_BITS;
 		unsigned int fullness:FULLNESS_BITS;
 		unsigned int class:CLASS_BITS + 1;
 		unsigned int isolated:ISOLATED_BITS;
@@ -298,6 +284,17 @@ struct mapping_area {
 	enum zs_mapmode vm_mm; /* mapping mode */
 };
 
+/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
+static void SetZsHugePage(struct zspage *zspage)
+{
+	zspage->huge = 1;
+}
+
+static bool ZsHugePage(struct zspage *zspage)
+{
+	return zspage->huge;
+}
+
 #ifdef CONFIG_COMPACTION
 static int zs_register_migration(struct zs_pool *pool);
 static void zs_unregister_migration(struct zs_pool *pool);
@@ -830,7 +827,9 @@ static struct zspage *get_zspage(struct page *page)
 
 static struct page *get_next_page(struct page *page)
 {
-	if (unlikely(PageHugeObject(page)))
+	struct zspage *zspage = get_zspage(page);
+
+	if (unlikely(ZsHugePage(zspage)))
 		return NULL;
 
 	return page->freelist;
@@ -880,8 +879,9 @@ static unsigned long handle_to_obj(unsigned long handle)
 static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
 {
 	unsigned long handle;
+	struct zspage *zspage = get_zspage(page);
 
-	if (unlikely(PageHugeObject(page))) {
+	if (unlikely(ZsHugePage(zspage))) {
 		VM_BUG_ON_PAGE(!is_first_page(page), page);
 		handle = page->index;
 	} else
@@ -920,7 +920,6 @@ static void reset_page(struct page *page)
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
 	page_mapcount_reset(page);
-	ClearPageHugeObject(page);
 	page->freelist = NULL;
 }
 
@@ -1062,7 +1061,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 			SetPagePrivate(page);
 			if (unlikely(class->objs_per_zspage == 1 &&
 					class->pages_per_zspage == 1))
-				SetPageHugeObject(page);
+				SetZsHugePage(zspage);
 		} else {
 			prev_page->freelist = page;
 		}
@@ -1307,7 +1306,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 
 	ret = __zs_map_object(area, pages, off, class->size);
 out:
-	if (likely(!PageHugeObject(page)))
+	if (likely(!ZsHugePage(zspage)))
 		ret += ZS_HANDLE_SIZE;
 
 	return ret;
@@ -1395,7 +1394,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	vaddr = kmap_atomic(m_page);
 	link = (struct link_free *)vaddr + m_offset / sizeof(*link);
 	set_freeobj(zspage, link->next >> OBJ_TAG_BITS);
-	if (likely(!PageHugeObject(m_page)))
+	if (likely(!ZsHugePage(zspage)))
 		/* record handle in the header of allocated chunk */
 		link->handle = handle;
 	else
@@ -1496,7 +1495,10 @@ static void obj_free(int class_size, unsigned long obj)
 
 	/* Insert this object in containing zspage's freelist */
 	link = (struct link_free *)(vaddr + f_offset);
-	link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
+	if (likely(!ZsHugePage(zspage)))
+		link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
+	else
+		f_page->index = 0;
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
@@ -1867,7 +1869,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 	create_page_chain(class, zspage, pages);
 	set_first_obj_offset(newpage, get_first_obj_offset(oldpage));
-	if (unlikely(PageHugeObject(oldpage)))
+	if (unlikely(ZsHugePage(zspage)))
 		newpage->index = oldpage->index;
 	__SetPageMovable(newpage, page_mapping(oldpage));
 }
-- 
2.34.0.rc1.387.gb447b232ab-goog



* [PATCH v2 6/9] zsmalloc: remove zspage isolation for migration
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (4 preceding siblings ...)
  2021-11-15 18:59 ` [PATCH v2 5/9] zsmalloc: move huge compressed obj from page to zspage Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested Minchan Kim
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim

zspage isolation for migration introduced additional exceptions to
deal with, since the zspage was taken off the class list. The reason
I isolated the zspage from the class list was to prevent a race
between obj_malloc and page migration by stopping further zpage
allocation from that zspage. However, it could not prevent object
freeing from the zspage, so corner-case handling was still needed.

This patch removes that whole mess. Now we are fine, since
class->lock and zspage->lock prevent the race.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 157 +++-----------------------------------------------
 1 file changed, 8 insertions(+), 149 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 26e571cc354e..b8b098be92fa 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -254,10 +254,6 @@ struct zs_pool {
 #ifdef CONFIG_COMPACTION
 	struct inode *inode;
 	struct work_struct free_work;
-	/* A wait queue for when migration races with async_free_zspage() */
-	struct wait_queue_head migration_wait;
-	atomic_long_t isolated_pages;
-	bool destroying;
 #endif
 };
 
@@ -454,11 +450,6 @@ MODULE_ALIAS("zpool-zsmalloc");
 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
 static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
 
-static bool is_zspage_isolated(struct zspage *zspage)
-{
-	return zspage->isolated;
-}
-
 static __maybe_unused int is_first_page(struct page *page)
 {
 	return PagePrivate(page);
@@ -744,7 +735,6 @@ static void remove_zspage(struct size_class *class,
 				enum fullness_group fullness)
 {
 	VM_BUG_ON(list_empty(&class->fullness_list[fullness]));
-	VM_BUG_ON(is_zspage_isolated(zspage));
 
 	list_del_init(&zspage->list);
 	class_stat_dec(class, fullness, 1);
@@ -770,13 +760,9 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 	if (newfg == currfg)
 		goto out;
 
-	if (!is_zspage_isolated(zspage)) {
-		remove_zspage(class, zspage, currfg);
-		insert_zspage(class, zspage, newfg);
-	}
-
+	remove_zspage(class, zspage, currfg);
+	insert_zspage(class, zspage, newfg);
 	set_zspage_mapping(zspage, class_idx, newfg);
-
 out:
 	return newfg;
 }
@@ -1511,7 +1497,6 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	unsigned long obj;
 	struct size_class *class;
 	enum fullness_group fullness;
-	bool isolated;
 
 	if (unlikely(!handle))
 		return;
@@ -1533,11 +1518,9 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 		goto out;
 	}
 
-	isolated = is_zspage_isolated(zspage);
 	migrate_read_unlock(zspage);
 	/* If zspage is isolated, zs_page_putback will free the zspage */
-	if (likely(!isolated))
-		free_zspage(pool, class, zspage);
+	free_zspage(pool, class, zspage);
 out:
 
 	spin_unlock(&class->lock);
@@ -1718,7 +1701,6 @@ static struct zspage *isolate_zspage(struct size_class *class, bool source)
 		zspage = list_first_entry_or_null(&class->fullness_list[fg[i]],
 							struct zspage, list);
 		if (zspage) {
-			VM_BUG_ON(is_zspage_isolated(zspage));
 			remove_zspage(class, zspage, fg[i]);
 			return zspage;
 		}
@@ -1739,8 +1721,6 @@ static enum fullness_group putback_zspage(struct size_class *class,
 {
 	enum fullness_group fullness;
 
-	VM_BUG_ON(is_zspage_isolated(zspage));
-
 	fullness = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, fullness);
 	set_zspage_mapping(zspage, class->index, fullness);
@@ -1822,35 +1802,10 @@ static void inc_zspage_isolation(struct zspage *zspage)
 
 static void dec_zspage_isolation(struct zspage *zspage)
 {
+	VM_BUG_ON(zspage->isolated == 0);
 	zspage->isolated--;
 }
 
-static void putback_zspage_deferred(struct zs_pool *pool,
-				    struct size_class *class,
-				    struct zspage *zspage)
-{
-	enum fullness_group fg;
-
-	fg = putback_zspage(class, zspage);
-	if (fg == ZS_EMPTY)
-		schedule_work(&pool->free_work);
-
-}
-
-static inline void zs_pool_dec_isolated(struct zs_pool *pool)
-{
-	VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
-	atomic_long_dec(&pool->isolated_pages);
-	/*
-	 * Checking pool->destroying must happen after atomic_long_dec()
-	 * for pool->isolated_pages above. Paired with the smp_mb() in
-	 * zs_unregister_migration().
-	 */
-	smp_mb__after_atomic();
-	if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
-		wake_up_all(&pool->migration_wait);
-}
-
 static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 				struct page *newpage, struct page *oldpage)
 {
@@ -1876,10 +1831,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
-	struct zs_pool *pool;
-	struct size_class *class;
 	struct zspage *zspage;
-	struct address_space *mapping;
 
 	/*
 	 * Page is locked so zspage couldn't be destroyed. For detail, look at
@@ -1889,39 +1841,9 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-
-	mapping = page_mapping(page);
-	pool = mapping->private_data;
-
-	class = zspage_class(pool, zspage);
-
-	spin_lock(&class->lock);
-	if (get_zspage_inuse(zspage) == 0) {
-		spin_unlock(&class->lock);
-		return false;
-	}
-
-	/* zspage is isolated for object migration */
-	if (list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
-		spin_unlock(&class->lock);
-		return false;
-	}
-
-	/*
-	 * If this is first time isolation for the zspage, isolate zspage from
-	 * size_class to prevent further object allocation from the zspage.
-	 */
-	if (!list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
-		enum fullness_group fullness;
-		unsigned int class_idx;
-
-		get_zspage_mapping(zspage, &class_idx, &fullness);
-		atomic_long_inc(&pool->isolated_pages);
-		remove_zspage(class, zspage, fullness);
-	}
-
+	migrate_write_lock(zspage);
 	inc_zspage_isolation(zspage);
-	spin_unlock(&class->lock);
+	migrate_write_unlock(zspage);
 
 	return true;
 }
@@ -2004,21 +1926,6 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	dec_zspage_isolation(zspage);
 
-	/*
-	 * Page migration is done so let's putback isolated zspage to
-	 * the list if @page is final isolated subpage in the zspage.
-	 */
-	if (!is_zspage_isolated(zspage)) {
-		/*
-		 * We cannot race with zs_destroy_pool() here because we wait
-		 * for isolation to hit zero before we start destroying.
-		 * Also, we ensure that everyone can see pool->destroying before
-		 * we start waiting.
-		 */
-		putback_zspage_deferred(pool, class, zspage);
-		zs_pool_dec_isolated(pool);
-	}
-
 	if (page_zone(newpage) != page_zone(page)) {
 		dec_zone_page_state(page, NR_ZSPAGES);
 		inc_zone_page_state(newpage, NR_ZSPAGES);
@@ -2046,30 +1953,15 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 static void zs_page_putback(struct page *page)
 {
-	struct zs_pool *pool;
-	struct size_class *class;
-	struct address_space *mapping;
 	struct zspage *zspage;
 
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-	mapping = page_mapping(page);
-	pool = mapping->private_data;
-	class = zspage_class(pool, zspage);
-
-	spin_lock(&class->lock);
+	migrate_write_lock(zspage);
 	dec_zspage_isolation(zspage);
-	if (!is_zspage_isolated(zspage)) {
-		/*
-		 * Due to page_lock, we cannot free zspage immediately
-		 * so let's defer.
-		 */
-		putback_zspage_deferred(pool, class, zspage);
-		zs_pool_dec_isolated(pool);
-	}
-	spin_unlock(&class->lock);
+	migrate_write_unlock(zspage);
 }
 
 static const struct address_space_operations zsmalloc_aops = {
@@ -2091,36 +1983,8 @@ static int zs_register_migration(struct zs_pool *pool)
 	return 0;
 }
 
-static bool pool_isolated_are_drained(struct zs_pool *pool)
-{
-	return atomic_long_read(&pool->isolated_pages) == 0;
-}
-
-/* Function for resolving migration */
-static void wait_for_isolated_drain(struct zs_pool *pool)
-{
-
-	/*
-	 * We're in the process of destroying the pool, so there are no
-	 * active allocations. zs_page_isolate() fails for completely free
-	 * zspages, so we need only wait for the zs_pool's isolated
-	 * count to hit zero.
-	 */
-	wait_event(pool->migration_wait,
-		   pool_isolated_are_drained(pool));
-}
-
 static void zs_unregister_migration(struct zs_pool *pool)
 {
-	pool->destroying = true;
-	/*
-	 * We need a memory barrier here to ensure global visibility of
-	 * pool->destroying. Thus pool->isolated pages will either be 0 in which
-	 * case we don't care, or it will be > 0 and pool->destroying will
-	 * ensure that we wake up once isolation hits 0.
-	 */
-	smp_mb();
-	wait_for_isolated_drain(pool); /* This can block */
 	flush_work(&pool->free_work);
 	iput(pool->inode);
 }
@@ -2150,7 +2014,6 @@ static void async_free_zspage(struct work_struct *work)
 		spin_unlock(&class->lock);
 	}
 
-
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
 		list_del(&zspage->list);
 		lock_zspage(zspage);
@@ -2363,10 +2226,6 @@ struct zs_pool *zs_create_pool(const char *name)
 	if (!pool->name)
 		goto err;
 
-#ifdef CONFIG_COMPACTION
-	init_waitqueue_head(&pool->migration_wait);
-#endif
-
 	if (create_cache(pool))
 		goto err;
 
-- 
2.34.0.rc1.387.gb447b232ab-goog



* [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (5 preceding siblings ...)
  2021-11-15 18:59 ` [PATCH v2 6/9] zsmalloc: remove zspage isolation for migration Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  2021-11-16 10:27   ` Peter Zijlstra
                     ` (2 more replies)
  2021-11-15 18:59 ` [PATCH v2 8/9] zsmalloc: replace per zpage lock with pool->migrate_lock Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 9/9] zsmalloc: replace get_cpu_var with local_lock Minchan Kim
  8 siblings, 3 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim, Peter Zijlstra

This is in preparation for converting the bit_spin_lock in zsmalloc
to an rwlock: multiple writers of zspages can run at the same time,
but they are supposed to operate on different zspage instances, so
it is not a deadlock. This patch adds write_lock_nested to support
that case for LOCKDEP.
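
For illustration, the intended user is zsmalloc's compaction path
later in this series, where two different zspage instances of the
same lock class are write-locked at once (sketch; the wrappers are
added in patch 8):

	migrate_write_lock(src_zspage);		/* write_lock(&src_zspage->lock) */
	migrate_write_lock_nested(dst_zspage);	/* write_lock_nested(&dst_zspage->lock,
						 *		     SINGLE_DEPTH_NESTING) */
	/* ... move objects from src_zspage to dst_zspage ... */
	migrate_write_unlock(dst_zspage);
	migrate_write_unlock(src_zspage);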

Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rwlock.h          |  6 ++++++
 include/linux/rwlock_api_smp.h  |  9 +++++++++
 include/linux/rwlock_rt.h       |  6 ++++++
 include/linux/spinlock_api_up.h |  1 +
 kernel/locking/spinlock.c       |  6 ++++++
 kernel/locking/spinlock_rt.c    | 12 ++++++++++++
 6 files changed, 40 insertions(+)

diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 2c0ad417ce3c..8f416c5e929e 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -55,6 +55,12 @@ do {								\
 #define write_lock(lock)	_raw_write_lock(lock)
 #define read_lock(lock)		_raw_read_lock(lock)
 
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+#define write_lock_nested(lock, subclass)	_raw_write_lock_nested(lock, subclass)
+#else
+#define write_lock_nested(lock, subclass)	_raw_write_lock(lock)
+#endif
+
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
 
 #define read_lock_irqsave(lock, flags)			\
diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h
index f1db6f17c4fb..f0c535ec4e65 100644
--- a/include/linux/rwlock_api_smp.h
+++ b/include/linux/rwlock_api_smp.h
@@ -17,6 +17,7 @@
 
 void __lockfunc _raw_read_lock(rwlock_t *lock)		__acquires(lock);
 void __lockfunc _raw_write_lock(rwlock_t *lock)		__acquires(lock);
+void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass)	__acquires(lock);
 void __lockfunc _raw_read_lock_bh(rwlock_t *lock)	__acquires(lock);
 void __lockfunc _raw_write_lock_bh(rwlock_t *lock)	__acquires(lock);
 void __lockfunc _raw_read_lock_irq(rwlock_t *lock)	__acquires(lock);
@@ -46,6 +47,7 @@ _raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
 
 #ifdef CONFIG_INLINE_WRITE_LOCK
 #define _raw_write_lock(lock) __raw_write_lock(lock)
+#define _raw_write_lock_nested(lock, subclass) __raw_write_lock_nested(lock, subclass)
 #endif
 
 #ifdef CONFIG_INLINE_READ_LOCK_BH
@@ -209,6 +211,13 @@ static inline void __raw_write_lock(rwlock_t *lock)
 	LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
 }
 
+static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass)
+{
+	preempt_disable();
+	rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
+	LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock);
+}
+
 #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */
 
 static inline void __raw_write_unlock(rwlock_t *lock)
diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index 49c1f3842ed5..efd6da62c893 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -28,6 +28,7 @@ extern void rt_read_lock(rwlock_t *rwlock);
 extern int rt_read_trylock(rwlock_t *rwlock);
 extern void rt_read_unlock(rwlock_t *rwlock);
 extern void rt_write_lock(rwlock_t *rwlock);
+extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass);
 extern int rt_write_trylock(rwlock_t *rwlock);
 extern void rt_write_unlock(rwlock_t *rwlock);
 
@@ -83,6 +84,11 @@ static __always_inline void write_lock(rwlock_t *rwlock)
 	rt_write_lock(rwlock);
 }
 
+static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass)
+{
+	rt_write_lock_nested(rwlock, subclass);
+}
+
 static __always_inline void write_lock_bh(rwlock_t *rwlock)
 {
 	local_bh_disable();
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index d0d188861ad6..b8ba00ccccde 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -59,6 +59,7 @@
 #define _raw_spin_lock_nested(lock, subclass)	__LOCK(lock)
 #define _raw_read_lock(lock)			__LOCK(lock)
 #define _raw_write_lock(lock)			__LOCK(lock)
+#define _raw_write_lock_nested(lock, subclass)	__LOCK(lock)
 #define _raw_spin_lock_bh(lock)			__LOCK_BH(lock)
 #define _raw_read_lock_bh(lock)			__LOCK_BH(lock)
 #define _raw_write_lock_bh(lock)		__LOCK_BH(lock)
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index b562f9289372..996811efa6d6 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -300,6 +300,12 @@ void __lockfunc _raw_write_lock(rwlock_t *lock)
 	__raw_write_lock(lock);
 }
 EXPORT_SYMBOL(_raw_write_lock);
+
+void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass)
+{
+	__raw_write_lock_nested(lock, subclass);
+}
+EXPORT_SYMBOL(_raw_write_lock_nested);
 #endif
 
 #ifndef CONFIG_INLINE_WRITE_LOCK_IRQSAVE
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index b2e553f9255b..b82d346f1e00 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -239,6 +239,18 @@ void __sched rt_write_lock(rwlock_t *rwlock)
 }
 EXPORT_SYMBOL(rt_write_lock);
 
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+void __sched rt_write_lock_nested(rwlock_t *rwlock, int subclass)
+{
+	___might_sleep(__FILE__, __LINE__, 0);
+	rwlock_acquire(&rwlock->dep_map, subclass, 0, _RET_IP_);
+	rwbase_write_lock(&rwlock->rwbase, TASK_RTLOCK_WAIT);
+	rcu_read_lock();
+	migrate_disable();
+}
+EXPORT_SYMBOL(rt_write_lock_nested);
+#endif
+
 void __sched rt_read_unlock(rwlock_t *rwlock)
 {
 	rwlock_release(&rwlock->dep_map, _RET_IP_);
-- 
2.34.0.rc1.387.gb447b232ab-goog



* [PATCH v2 8/9] zsmalloc: replace per zpage lock with pool->migrate_lock
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (6 preceding siblings ...)
  2021-11-15 18:59 ` [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  2021-11-15 18:59 ` [PATCH v2 9/9] zsmalloc: replace get_cpu_var with local_lock Minchan Kim
  8 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim

zsmalloc has used a bit in the zpage handle as a spin lock to keep
the zpage object alive during several operations. However, that is a
problem for PREEMPT_RT and introduces too much complication.

This patch replaces the bit spin lock with the pool->migrate_lock
rwlock. It makes the code simpler and lets zsmalloc work under
PREEMPT_RT.

The drawback is that pool->migrate_lock has a coarser granularity
than the per-zpage lock, so contention would be higher than before
when IO-related operations (i.e., zs_malloc, zs_free, zs_[map|unmap])
and compaction (page/zpage migration) run in parallel (note that
migrate_lock is an rwlock and the IO-related functions all take the
read side, so they do not contend with each other). However, the
write side is fast enough (the dominant overhead is just the page
copy), so it should not matter much. If the lock granularity becomes
more of a problem later, we could introduce table locks keyed by a
hash of the handle.
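
The resulting lock hierarchy, summarized (see the lock-ordering
comment and the hunks below for the real code):

	/*
	 * page_lock
	 * pool->migrate_lock	read : zs_map_object()/zs_free() (handle -> zspage lookup)
	 *			write: zs_page_migrate()/__zs_compact()
	 * class->lock		zpage alloc/free within a size class
	 * zspage->lock		read : zs_map_object() users
	 *			write: migration/compaction of that zspage
	 */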

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 205 +++++++++++++++++++++++---------------------------
 1 file changed, 96 insertions(+), 109 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b8b098be92fa..5d4c4d254679 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -30,6 +30,14 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+/*
+ * lock ordering:
+ *	page_lock
+ *	pool->migrate_lock
+ *	class->lock
+ *	zspage->lock
+ */
+
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
@@ -100,15 +108,6 @@
 
 #define _PFN_BITS		(MAX_POSSIBLE_PHYSMEM_BITS - PAGE_SHIFT)
 
-/*
- * Memory for allocating for handle keeps object position by
- * encoding <page, obj_idx> and the encoded value has a room
- * in least bit(ie, look at obj_to_location).
- * We use the bit to synchronize between object access by
- * user and migration.
- */
-#define HANDLE_PIN_BIT	0
-
 /*
  * Head in allocated object should have OBJ_ALLOCATED_TAG
  * to identify the object was allocated or not.
@@ -255,6 +254,8 @@ struct zs_pool {
 	struct inode *inode;
 	struct work_struct free_work;
 #endif
+	/* protect page/zspage migration */
+	rwlock_t migrate_lock;
 };
 
 struct zspage {
@@ -297,6 +298,9 @@ static void zs_unregister_migration(struct zs_pool *pool);
 static void migrate_lock_init(struct zspage *zspage);
 static void migrate_read_lock(struct zspage *zspage);
 static void migrate_read_unlock(struct zspage *zspage);
+static void migrate_write_lock(struct zspage *zspage);
+static void migrate_write_lock_nested(struct zspage *zspage);
+static void migrate_write_unlock(struct zspage *zspage);
 static void kick_deferred_free(struct zs_pool *pool);
 static void init_deferred_free(struct zs_pool *pool);
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage);
@@ -308,6 +312,9 @@ static void zs_unregister_migration(struct zs_pool *pool) {}
 static void migrate_lock_init(struct zspage *zspage) {}
 static void migrate_read_lock(struct zspage *zspage) {}
 static void migrate_read_unlock(struct zspage *zspage) {}
+static void migrate_write_lock(struct zspage *zspage) {}
+static void migrate_write_lock_nested(struct zspage *zspage) {}
+static void migrate_write_unlock(struct zspage *zspage) {}
 static void kick_deferred_free(struct zs_pool *pool) {}
 static void init_deferred_free(struct zs_pool *pool) {}
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {}
@@ -359,14 +366,10 @@ static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
 	kmem_cache_free(pool->zspage_cachep, zspage);
 }
 
+/* class->lock(which owns the handle) synchronizes races */
 static void record_obj(unsigned long handle, unsigned long obj)
 {
-	/*
-	 * lsb of @obj represents handle lock while other bits
-	 * represent object value the handle is pointing so
-	 * updating shouldn't do store tearing.
-	 */
-	WRITE_ONCE(*(unsigned long *)handle, obj);
+	*(unsigned long *)handle = obj;
 }
 
 /* zpool driver */
@@ -880,26 +883,6 @@ static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
 	return true;
 }
 
-static inline int testpin_tag(unsigned long handle)
-{
-	return bit_spin_is_locked(HANDLE_PIN_BIT, (unsigned long *)handle);
-}
-
-static inline int trypin_tag(unsigned long handle)
-{
-	return bit_spin_trylock(HANDLE_PIN_BIT, (unsigned long *)handle);
-}
-
-static void pin_tag(unsigned long handle) __acquires(bitlock)
-{
-	bit_spin_lock(HANDLE_PIN_BIT, (unsigned long *)handle);
-}
-
-static void unpin_tag(unsigned long handle) __releases(bitlock)
-{
-	bit_spin_unlock(HANDLE_PIN_BIT, (unsigned long *)handle);
-}
-
 static void reset_page(struct page *page)
 {
 	__ClearPageMovable(page);
@@ -968,6 +951,11 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
 	VM_BUG_ON(get_zspage_inuse(zspage));
 	VM_BUG_ON(list_empty(&zspage->list));
 
+	/*
+	 * Since zs_free couldn't be sleepable, this function cannot call
+	 * lock_page. The page locks trylock_zspage got will be released
+	 * by __free_zspage.
+	 */
 	if (!trylock_zspage(zspage)) {
 		kick_deferred_free(pool);
 		return;
@@ -1263,15 +1251,20 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 */
 	BUG_ON(in_interrupt());
 
-	/* From now on, migration cannot move the object */
-	pin_tag(handle);
-
+	/* It guarantees it can get zspage from handle safely */
+	read_lock(&pool->migrate_lock);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &page, &obj_idx);
 	zspage = get_zspage(page);
 
-	/* migration cannot move any subpage in this zspage */
+	/*
+	 * migration cannot move any zpages in this zspage. Here, class->lock
+	 * is too heavy since callers would take some time until they calls
+	 * zs_unmap_object API so delegate the locking from class to zspage
+	 * which is smaller granularity.
+	 */
 	migrate_read_lock(zspage);
+	read_unlock(&pool->migrate_lock);
 
 	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
@@ -1330,7 +1323,6 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	put_cpu_var(zs_map_area);
 
 	migrate_read_unlock(zspage);
-	unpin_tag(handle);
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
@@ -1424,6 +1416,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	size += ZS_HANDLE_SIZE;
 	class = pool->size_class[get_size_class_index(size)];
 
+	/* class->lock effectively protects the zpage migration */
 	spin_lock(&class->lock);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
@@ -1501,30 +1494,27 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	if (unlikely(!handle))
 		return;
 
-	pin_tag(handle);
+	/*
+	 * The pool->migrate_lock protects the race with zpage's migration
+	 * so it's safe to get the page from handle.
+	 */
+	read_lock(&pool->migrate_lock);
 	obj = handle_to_obj(handle);
 	obj_to_page(obj, &f_page);
 	zspage = get_zspage(f_page);
-
-	migrate_read_lock(zspage);
 	class = zspage_class(pool, zspage);
-
 	spin_lock(&class->lock);
+	read_unlock(&pool->migrate_lock);
+
 	obj_free(class->size, obj);
 	class_stat_dec(class, OBJ_USED, 1);
 	fullness = fix_fullness_group(class, zspage);
-	if (fullness != ZS_EMPTY) {
-		migrate_read_unlock(zspage);
+	if (fullness != ZS_EMPTY)
 		goto out;
-	}
 
-	migrate_read_unlock(zspage);
-	/* If zspage is isolated, zs_page_putback will free the zspage */
 	free_zspage(pool, class, zspage);
 out:
-
 	spin_unlock(&class->lock);
-	unpin_tag(handle);
 	cache_free_handle(pool, handle);
 }
 EXPORT_SYMBOL_GPL(zs_free);
@@ -1608,11 +1598,8 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
-		if (obj_allocated(page, addr + offset, &handle)) {
-			if (trypin_tag(handle))
-				break;
-			handle = 0;
-		}
+		if (obj_allocated(page, addr + offset, &handle))
+			break;
 
 		offset += class->size;
 		index++;
@@ -1658,7 +1645,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 
 		/* Stop if there is no more space */
 		if (zspage_full(class, get_zspage(d_page))) {
-			unpin_tag(handle);
 			ret = -ENOMEM;
 			break;
 		}
@@ -1667,15 +1653,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
-		/*
-		 * record_obj updates handle's value to free_obj and it will
-		 * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which
-		 * breaks synchronization using pin_tag(e,g, zs_free) so
-		 * let's keep the lock bit.
-		 */
-		free_obj |= BIT(HANDLE_PIN_BIT);
 		record_obj(handle, free_obj);
-		unpin_tag(handle);
 		obj_free(class->size, used_obj);
 	}
 
@@ -1789,6 +1767,11 @@ static void migrate_write_lock(struct zspage *zspage)
 	write_lock(&zspage->lock);
 }
 
+static void migrate_write_lock_nested(struct zspage *zspage)
+{
+	write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);
+}
+
 static void migrate_write_unlock(struct zspage *zspage)
 {
 	write_unlock(&zspage->lock);
@@ -1856,11 +1839,10 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	struct zspage *zspage;
 	struct page *dummy;
 	void *s_addr, *d_addr, *addr;
-	int offset, pos;
+	int offset;
 	unsigned long handle;
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
-	int ret = -EAGAIN;
 
 	/*
 	 * We cannot support the _NO_COPY case here, because copy needs to
@@ -1873,32 +1855,25 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
-	zspage = get_zspage(page);
-
-	/* Concurrent compactor cannot migrate any subpage in zspage */
-	migrate_write_lock(zspage);
 	pool = mapping->private_data;
+
+	/*
+	 * The pool migrate_lock protects the race between zpage migration
+	 * and zs_free.
+	 */
+	write_lock(&pool->migrate_lock);
+	zspage = get_zspage(page);
 	class = zspage_class(pool, zspage);
-	offset = get_first_obj_offset(page);
 
+	/*
+	 * the class lock protects zpage alloc/free in the zspage.
+	 */
 	spin_lock(&class->lock);
-	if (!get_zspage_inuse(zspage)) {
-		/*
-		 * Set "offset" to end of the page so that every loops
-		 * skips unnecessary object scanning.
-		 */
-		offset = PAGE_SIZE;
-	}
+	/* the migrate_write_lock protects zpage access via zs_map_object */
+	migrate_write_lock(zspage);
 
-	pos = offset;
+	offset = get_first_obj_offset(page);
 	s_addr = kmap_atomic(page);
-	while (pos < PAGE_SIZE) {
-		if (obj_allocated(page, s_addr + pos, &handle)) {
-			if (!trypin_tag(handle))
-				goto unpin_objects;
-		}
-		pos += class->size;
-	}
 
 	/*
 	 * Here, any user cannot access all objects in the zspage so let's move.
@@ -1907,25 +1882,30 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	memcpy(d_addr, s_addr, PAGE_SIZE);
 	kunmap_atomic(d_addr);
 
-	for (addr = s_addr + offset; addr < s_addr + pos;
+	for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE;
 					addr += class->size) {
 		if (obj_allocated(page, addr, &handle)) {
-			BUG_ON(!testpin_tag(handle));
 
 			old_obj = handle_to_obj(handle);
 			obj_to_location(old_obj, &dummy, &obj_idx);
 			new_obj = (unsigned long)location_to_obj(newpage,
 								obj_idx);
-			new_obj |= BIT(HANDLE_PIN_BIT);
 			record_obj(handle, new_obj);
 		}
 	}
+	kunmap_atomic(s_addr);
 
 	replace_sub_page(class, zspage, newpage, page);
-	get_page(newpage);
-
+	/*
+	 * Since we complete the data copy and set up new zspage structure,
+	 * it's okay to release migration_lock.
+	 */
+	write_unlock(&pool->migrate_lock);
+	spin_unlock(&class->lock);
 	dec_zspage_isolation(zspage);
+	migrate_write_unlock(zspage);
 
+	get_page(newpage);
 	if (page_zone(newpage) != page_zone(page)) {
 		dec_zone_page_state(page, NR_ZSPAGES);
 		inc_zone_page_state(newpage, NR_ZSPAGES);
@@ -1933,22 +1913,8 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	reset_page(page);
 	put_page(page);
-	page = newpage;
-
-	ret = MIGRATEPAGE_SUCCESS;
-unpin_objects:
-	for (addr = s_addr + offset; addr < s_addr + pos;
-						addr += class->size) {
-		if (obj_allocated(page, addr, &handle)) {
-			BUG_ON(!testpin_tag(handle));
-			unpin_tag(handle);
-		}
-	}
-	kunmap_atomic(s_addr);
-	spin_unlock(&class->lock);
-	migrate_write_unlock(zspage);
 
-	return ret;
+	return MIGRATEPAGE_SUCCESS;
 }
 
 static void zs_page_putback(struct page *page)
@@ -2077,8 +2043,13 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	struct zspage *dst_zspage = NULL;
 	unsigned long pages_freed = 0;
 
+	/* protect the race between zpage migration and zs_free */
+	write_lock(&pool->migrate_lock);
+	/* protect zpage allocation/free */
 	spin_lock(&class->lock);
 	while ((src_zspage = isolate_zspage(class, true))) {
+		/* protect someone accessing the zspage(i.e., zs_map_object) */
+		migrate_write_lock(src_zspage);
 
 		if (!zs_can_compact(class))
 			break;
@@ -2087,6 +2058,8 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		cc.s_page = get_first_page(src_zspage);
 
 		while ((dst_zspage = isolate_zspage(class, false))) {
+			migrate_write_lock_nested(dst_zspage);
+
 			cc.d_page = get_first_page(dst_zspage);
 			/*
 			 * If there is no more space in dst_page, resched
@@ -2096,6 +2069,10 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 				break;
 
 			putback_zspage(class, dst_zspage);
+			migrate_write_unlock(dst_zspage);
+			dst_zspage = NULL;
+			if (rwlock_is_contended(&pool->migrate_lock))
+				break;
 		}
 
 		/* Stop if we couldn't find slot */
@@ -2103,19 +2080,28 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			break;
 
 		putback_zspage(class, dst_zspage);
+		migrate_write_unlock(dst_zspage);
+
 		if (putback_zspage(class, src_zspage) == ZS_EMPTY) {
+			migrate_write_unlock(src_zspage);
 			free_zspage(pool, class, src_zspage);
 			pages_freed += class->pages_per_zspage;
-		}
+		} else
+			migrate_write_unlock(src_zspage);
 		spin_unlock(&class->lock);
+		write_unlock(&pool->migrate_lock);
 		cond_resched();
+		write_lock(&pool->migrate_lock);
 		spin_lock(&class->lock);
 	}
 
-	if (src_zspage)
+	if (src_zspage) {
 		putback_zspage(class, src_zspage);
+		migrate_write_unlock(src_zspage);
+	}
 
 	spin_unlock(&class->lock);
+	write_unlock(&pool->migrate_lock);
 
 	return pages_freed;
 }
@@ -2221,6 +2207,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		return NULL;
 
 	init_deferred_free(pool);
+	rwlock_init(&pool->migrate_lock);
 
 	pool->name = kstrdup(name, GFP_KERNEL);
 	if (!pool->name)
-- 
2.34.0.rc1.387.gb447b232ab-goog


^ permalink raw reply related	[flat|nested] 15+ messages in thread
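
The hunks above establish a fixed lock ordering in the migration and compaction
paths. The sketch below is distilled from those hunks for readability; it is a
summary of what the diff already does, not an excerpt of the final code:

/*
 * Lock ordering after this patch, outermost first, as taken in
 * zs_page_migrate() and __zs_compact() above:
 *
 *   write_lock(&pool->migrate_lock)   pool level: migration vs. zs_free
 *     spin_lock(&class->lock)         zpage alloc/free within the class
 *       migrate_write_lock(zspage)    per zspage: vs. zs_map_object
 *
 * The destination zspage in __zs_compact() is locked with
 * migrate_write_lock_nested(), since two zspage locks of the same
 * lockdep class are held at once.
 */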

* [PATCH v2 9/9] zsmalloc: replace get_cpu_var with local_lock
  2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
                   ` (7 preceding siblings ...)
  2021-11-15 18:59 ` [PATCH v2 8/9] zsmalloc: replace per zpage lock with pool->migrate_lock Minchan Kim
@ 2021-11-15 18:59 ` Minchan Kim
  8 siblings, 0 replies; 15+ messages in thread
From: Minchan Kim @ 2021-11-15 18:59 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim, Mike Galbraith,
	Thomas Gleixner, Sebastian Andrzej Siewior

From: Mike Galbraith <umgwanakikbuti@gmail.com>

The usage of get_cpu_var() in zs_map_object() is problematic because
it disables preemption and makes it impossible to acquire any sleeping
lock on PREEMPT_RT, such as a spinlock_t.
Replace the get_cpu_var() usage with a local_lock_t which is embedded
in struct mapping_area. This ensures that access to the struct is
synchronized against all users on the same CPU.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[minchan: remove the bit_spin_lock part and change the title]
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5d4c4d254679..7e03cc9363bb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -65,6 +65,7 @@
 #include <linux/wait.h>
 #include <linux/pagemap.h>
 #include <linux/fs.h>
+#include <linux/local_lock.h>
 
 #define ZSPAGE_MAGIC	0x58
 
@@ -276,6 +277,7 @@ struct zspage {
 };
 
 struct mapping_area {
+	local_lock_t lock;
 	char *vm_buf; /* copy buffer for objects that span pages */
 	char *vm_addr; /* address of kmap_atomic()'ed pages */
 	enum zs_mapmode vm_mm; /* mapping mode */
@@ -451,7 +453,9 @@ MODULE_ALIAS("zpool-zsmalloc");
 #endif /* CONFIG_ZPOOL */
 
 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
-static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
+static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = {
+	.lock	= INIT_LOCAL_LOCK(lock),
+};
 
 static __maybe_unused int is_first_page(struct page *page)
 {
@@ -1269,7 +1273,8 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
-	area = &get_cpu_var(zs_map_area);
+	local_lock(&zs_map_area.lock);
+	area = this_cpu_ptr(&zs_map_area);
 	area->vm_mm = mm;
 	if (off + class->size <= PAGE_SIZE) {
 		/* this object is contained entirely within a page */
@@ -1320,7 +1325,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 
 		__zs_unmap_object(area, pages, off, class->size);
 	}
-	put_cpu_var(zs_map_area);
+	local_unlock(&zs_map_area.lock);
 
 	migrate_read_unlock(zspage);
 }
-- 
2.34.0.rc1.387.gb447b232ab-goog


^ permalink raw reply related	[flat|nested] 15+ messages in thread
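
The conversion above follows the standard local_lock pattern. A minimal
self-contained sketch of that pattern is shown below; my_area and use_buffer
are made-up names for illustration, not zsmalloc code. Roughly speaking, on
!PREEMPT_RT local_lock() only disables preemption, while on PREEMPT_RT it
takes a per-CPU sleeping lock, which is what makes the access RT-safe:

#include <linux/local_lock.h>
#include <linux/percpu.h>

struct my_area {
	local_lock_t lock;
	char *buf;		/* per-CPU scratch buffer */
};

static DEFINE_PER_CPU(struct my_area, my_area) = {
	.lock	= INIT_LOCAL_LOCK(lock),
};

static void use_buffer(void)
{
	struct my_area *area;

	/* serializes all users of this CPU's my_area */
	local_lock(&my_area.lock);
	area = this_cpu_ptr(&my_area);
	/* ... work on area->buf ... */
	local_unlock(&my_area.lock);
}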

* Re: [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested
  2021-11-15 18:59 ` [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested Minchan Kim
@ 2021-11-16 10:27   ` Peter Zijlstra
  2021-11-19 10:35   ` Sebastian Andrzej Siewior
  2021-11-20  3:50   ` kernel test robot
  2 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2021-11-16 10:27 UTC (permalink / raw)
  To: Minchan Kim; +Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, LKML

On Mon, Nov 15, 2021 at 10:59:07AM -0800, Minchan Kim wrote:
> In preparation for converting bit_spin_lock to rwlock in zsmalloc,
> multiple writers of zspages need to be able to run at the same time;
> those writers always operate on different zspage instances, so it is
> not a deadlock. This patch adds write_lock_nested to support that
> case for LOCKDEP.
> 
> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  include/linux/rwlock.h          |  6 ++++++
>  include/linux/rwlock_api_smp.h  |  9 +++++++++
>  include/linux/rwlock_rt.h       |  6 ++++++
>  include/linux/spinlock_api_up.h |  1 +
>  kernel/locking/spinlock.c       |  6 ++++++
>  kernel/locking/spinlock_rt.c    | 12 ++++++++++++
>  6 files changed, 40 insertions(+)

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

^ permalink raw reply	[flat|nested] 15+ messages in thread
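
The situation the new API addresses can be sketched as follows (obj and
obj_move are illustrative names, not zsmalloc code): locks initialized at the
same rwlock_init() call site share one lockdep class, so write-locking two
such instances at once needs a nesting annotation on the second acquisition
even though the code can never deadlock:

struct obj {
	rwlock_t lock;
	/* ... */
};

static void obj_init(struct obj *o)
{
	/* every obj->lock initialized here shares one lockdep class */
	rwlock_init(&o->lock);
}

static void obj_move(struct obj *src, struct obj *dst)
{
	write_lock(&src->lock);
	/* second instance of the same class: annotate for LOCKDEP */
	write_lock_nested(&dst->lock, SINGLE_DEPTH_NESTING);
	/* ... migrate contents from src to dst ... */
	write_unlock(&dst->lock);
	write_unlock(&src->lock);
}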

* Re: [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested
  2021-11-15 18:59 ` [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested Minchan Kim
  2021-11-16 10:27   ` Peter Zijlstra
@ 2021-11-19 10:35   ` Sebastian Andrzej Siewior
  2021-11-19 18:21     ` Minchan Kim
  2021-11-20  3:50   ` kernel test robot
  2 siblings, 1 reply; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-11-19 10:35 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, LKML,
	Peter Zijlstra, Thomas Gleixner

On 2021-11-15 10:59:07 [-0800], Minchan Kim wrote:
> diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
> index 49c1f3842ed5..efd6da62c893 100644
> --- a/include/linux/rwlock_rt.h
> +++ b/include/linux/rwlock_rt.h
> @@ -28,6 +28,7 @@ extern void rt_read_lock(rwlock_t *rwlock);
>  extern int rt_read_trylock(rwlock_t *rwlock);
>  extern void rt_read_unlock(rwlock_t *rwlock);
>  extern void rt_write_lock(rwlock_t *rwlock);
> +extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass);
>  extern int rt_write_trylock(rwlock_t *rwlock);
>  extern void rt_write_unlock(rwlock_t *rwlock);
>
> @@ -83,6 +84,11 @@ static __always_inline void write_lock(rwlock_t *rwlock)
>  	rt_write_lock(rwlock);
>  }
>  
> +static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass)
> +{
> +	rt_write_lock_nested(rwlock, subclass);
> +}
> +

These two hunks as-is don't work. You need a CONFIG_DEBUG_LOCK_ALLOC block and
in the !CONFIG_DEBUG_LOCK_ALLOC case you need

#define rt_write_lock_nested(lock, subclass)     rt_write_lock(((void)(subclass), (lock)))

>  static __always_inline void write_lock_bh(rwlock_t *rwlock)
>  {
>  	local_bh_disable();
> diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
> index b2e553f9255b..b82d346f1e00 100644
> --- a/kernel/locking/spinlock_rt.c
> +++ b/kernel/locking/spinlock_rt.c
> @@ -239,6 +239,18 @@ void __sched rt_write_lock(rwlock_t *rwlock)
>  }
>  EXPORT_SYMBOL(rt_write_lock);
>  
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +void __sched rt_write_lock_nested(rwlock_t *rwlock, int subclass)
> +{
> +	___might_sleep(__FILE__, __LINE__, 0);

This _must_ be rtlock_might_resched() like it is done in rt_write_lock()
above.

> +	rwlock_acquire(&rwlock->dep_map, subclass, 0, _RET_IP_);
> +	rwbase_write_lock(&rwlock->rwbase, TASK_RTLOCK_WAIT);
> +	rcu_read_lock();
> +	migrate_disable();
> +}
> +EXPORT_SYMBOL(rt_write_lock_nested);
> +#endif
> +
>  void __sched rt_read_unlock(rwlock_t *rwlock)
>  {
>  	rwlock_release(&rwlock->dep_map, _RET_IP_);

Sebastian

^ permalink raw reply	[flat|nested] 15+ messages in thread
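
The macro form suggested above relies on the comma operator: the subclass
argument is evaluated and discarded via the (void) cast, and only the lock
expression is passed on, so the nested variant collapses to the plain lock
when lockdep is disabled. A generic illustration with a made-up lock name:

/* Illustration only: evaluate and discard "subclass", then yield "lock". */
#define my_write_lock_nested(lock, subclass)	\
	my_write_lock(((void)(subclass), (lock)))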

* Re: [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested
  2021-11-19 10:35   ` Sebastian Andrzej Siewior
@ 2021-11-19 18:21     ` Minchan Kim
  2021-11-20 15:38       ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 15+ messages in thread
From: Minchan Kim @ 2021-11-19 18:21 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, LKML,
	Peter Zijlstra, Thomas Gleixner

On Fri, Nov 19, 2021 at 11:35:16AM +0100, Sebastian Andrzej Siewior wrote:
> On 2021-11-15 10:59:07 [-0800], Minchan Kim wrote:
> > diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
> > index 49c1f3842ed5..efd6da62c893 100644
> > --- a/include/linux/rwlock_rt.h
> > +++ b/include/linux/rwlock_rt.h
> > @@ -28,6 +28,7 @@ extern void rt_read_lock(rwlock_t *rwlock);
> >  extern int rt_read_trylock(rwlock_t *rwlock);
> >  extern void rt_read_unlock(rwlock_t *rwlock);
> >  extern void rt_write_lock(rwlock_t *rwlock);
> > +extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass);
> >  extern int rt_write_trylock(rwlock_t *rwlock);
> >  extern void rt_write_unlock(rwlock_t *rwlock);
> >
> > @@ -83,6 +84,11 @@ static __always_inline void write_lock(rwlock_t *rwlock)
> >  	rt_write_lock(rwlock);
> >  }
> >  
> > +static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass)
> > +{
> > +	rt_write_lock_nested(rwlock, subclass);
> > +}
> > +
> 
> These two hunks as-is don't work. You need a CONFIG_DEBUG_LOCK_ALLOC block and
> in the !CONFIG_DEBUG_LOCK_ALLOC case you need
> 
> #define rt_write_lock_nested(lock, subclass)     rt_write_lock(((void)(subclass), (lock)))

Guess you meant #define write_lock_nested.

> 
> >  static __always_inline void write_lock_bh(rwlock_t *rwlock)
> >  {
> >  	local_bh_disable();
> > diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
> > index b2e553f9255b..b82d346f1e00 100644
> > --- a/kernel/locking/spinlock_rt.c
> > +++ b/kernel/locking/spinlock_rt.c
> > @@ -239,6 +239,18 @@ void __sched rt_write_lock(rwlock_t *rwlock)
> >  }
> >  EXPORT_SYMBOL(rt_write_lock);
> >  
> > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> > +void __sched rt_write_lock_nested(rwlock_t *rwlock, int subclass)
> > +{
> > +	___might_sleep(__FILE__, __LINE__, 0);
> 
> This _must_ be rtlock_might_resched() like it is done in rt_write_lock()
> above.

I should have Cced you. Thanks for the catch.
If it's fine, Andrew, could you fold it?

Thank you.

From 81f8721bc76d5f8c94770e53c6ad2e41aec8ab21 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Fri, 19 Nov 2021 10:15:00 -0800
Subject: [PATCH] locking/rwlocks: fix write_lock_nested for RT

Fix build break of write_lock_nested for RT.

Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/rwlock_rt.h    | 4 ++++
 kernel/locking/spinlock_rt.c | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
index efd6da62c893..8544ff05e594 100644
--- a/include/linux/rwlock_rt.h
+++ b/include/linux/rwlock_rt.h
@@ -84,10 +84,14 @@ static __always_inline void write_lock(rwlock_t *rwlock)
 	rt_write_lock(rwlock);
 }
 
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
 static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass)
 {
 	rt_write_lock_nested(rwlock, subclass);
 }
+#else
+#define write_lock_nested(lock, subclass)	rt_write_lock(((void)(subclass), (lock)))
+#endif
 
 static __always_inline void write_lock_bh(rwlock_t *rwlock)
 {
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index b82d346f1e00..b501aef820d5 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -242,7 +242,7 @@ EXPORT_SYMBOL(rt_write_lock);
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 void __sched rt_write_lock_nested(rwlock_t *rwlock, int subclass)
 {
-	___might_sleep(__FILE__, __LINE__, 0);
+	rtlock_might_resched();
 	rwlock_acquire(&rwlock->dep_map, subclass, 0, _RET_IP_);
 	rwbase_write_lock(&rwlock->rwbase, TASK_RTLOCK_WAIT);
 	rcu_read_lock();
-- 
2.34.0.rc2.393.gf8c9666880-goog


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested
  2021-11-15 18:59 ` [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested Minchan Kim
  2021-11-16 10:27   ` Peter Zijlstra
  2021-11-19 10:35   ` Sebastian Andrzej Siewior
@ 2021-11-20  3:50   ` kernel test robot
  2 siblings, 0 replies; 15+ messages in thread
From: kernel test robot @ 2021-11-20  3:50 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: kbuild-all, Linux Memory Management List, Sergey Senozhatsky,
	LKML, Minchan Kim, Peter Zijlstra

[-- Attachment #1: Type: text/plain, Size: 6980 bytes --]

Hi Minchan,

I love your patch! Yet something to improve:

[auto build test ERROR on tip/master]
[also build test ERROR on linux/master linus/master v5.16-rc1]
[cannot apply to hnaz-mm/master tip/locking/core next-20211118]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Minchan-Kim/zsmalloc-remove-bit_spin_lock/20211116-030720
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 8ab774587903771821b59471cc723bba6d893942
config: nds32-randconfig-r003-20211115 (attached as .config)
compiler: nds32le-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/c24db750268d85953fe12742e6e4a7b8baf16623
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Minchan-Kim/zsmalloc-remove-bit_spin_lock/20211116-030720
        git checkout c24db750268d85953fe12742e6e4a7b8baf16623
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=nds32 

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   kernel/locking/spinlock.c:68:17: warning: no previous prototype for '__raw_spin_lock' [-Wmissing-prototypes]
      68 | void __lockfunc __raw_##op##_lock(locktype##_t *lock)                   \
         |                 ^~~~~~
   kernel/locking/spinlock.c:126:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     126 | BUILD_LOCK_OPS(spin, raw_spinlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:80:26: warning: no previous prototype for '__raw_spin_lock_irqsave' [-Wmissing-prototypes]
      80 | unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)  \
         |                          ^~~~~~
   kernel/locking/spinlock.c:126:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     126 | BUILD_LOCK_OPS(spin, raw_spinlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:98:17: warning: no previous prototype for '__raw_spin_lock_irq' [-Wmissing-prototypes]
      98 | void __lockfunc __raw_##op##_lock_irq(locktype##_t *lock)               \
         |                 ^~~~~~
   kernel/locking/spinlock.c:126:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     126 | BUILD_LOCK_OPS(spin, raw_spinlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:103:17: warning: no previous prototype for '__raw_spin_lock_bh' [-Wmissing-prototypes]
     103 | void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock)                \
         |                 ^~~~~~
   kernel/locking/spinlock.c:126:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     126 | BUILD_LOCK_OPS(spin, raw_spinlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:68:17: warning: no previous prototype for '__raw_read_lock' [-Wmissing-prototypes]
      68 | void __lockfunc __raw_##op##_lock(locktype##_t *lock)                   \
         |                 ^~~~~~
   kernel/locking/spinlock.c:129:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     129 | BUILD_LOCK_OPS(read, rwlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:80:26: warning: no previous prototype for '__raw_read_lock_irqsave' [-Wmissing-prototypes]
      80 | unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)  \
         |                          ^~~~~~
   kernel/locking/spinlock.c:129:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     129 | BUILD_LOCK_OPS(read, rwlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:98:17: warning: no previous prototype for '__raw_read_lock_irq' [-Wmissing-prototypes]
      98 | void __lockfunc __raw_##op##_lock_irq(locktype##_t *lock)               \
         |                 ^~~~~~
   kernel/locking/spinlock.c:129:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     129 | BUILD_LOCK_OPS(read, rwlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:103:17: warning: no previous prototype for '__raw_read_lock_bh' [-Wmissing-prototypes]
     103 | void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock)                \
         |                 ^~~~~~
   kernel/locking/spinlock.c:129:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     129 | BUILD_LOCK_OPS(read, rwlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:68:17: warning: no previous prototype for '__raw_write_lock' [-Wmissing-prototypes]
      68 | void __lockfunc __raw_##op##_lock(locktype##_t *lock)                   \
         |                 ^~~~~~
   kernel/locking/spinlock.c:130:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     130 | BUILD_LOCK_OPS(write, rwlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:80:26: warning: no previous prototype for '__raw_write_lock_irqsave' [-Wmissing-prototypes]
      80 | unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)  \
         |                          ^~~~~~
   kernel/locking/spinlock.c:130:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     130 | BUILD_LOCK_OPS(write, rwlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:98:17: warning: no previous prototype for '__raw_write_lock_irq' [-Wmissing-prototypes]
      98 | void __lockfunc __raw_##op##_lock_irq(locktype##_t *lock)               \
         |                 ^~~~~~
   kernel/locking/spinlock.c:130:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     130 | BUILD_LOCK_OPS(write, rwlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c:103:17: warning: no previous prototype for '__raw_write_lock_bh' [-Wmissing-prototypes]
     103 | void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock)                \
         |                 ^~~~~~
   kernel/locking/spinlock.c:130:1: note: in expansion of macro 'BUILD_LOCK_OPS'
     130 | BUILD_LOCK_OPS(write, rwlock);
         | ^~~~~~~~~~~~~~
   kernel/locking/spinlock.c: In function '_raw_write_lock_nested':
>> kernel/locking/spinlock.c:306:9: error: implicit declaration of function '__raw_write_lock_nested'; did you mean '_raw_write_lock_nested'? [-Werror=implicit-function-declaration]
     306 |         __raw_write_lock_nested(lock, subclass);
         |         ^~~~~~~~~~~~~~~~~~~~~~~
         |         _raw_write_lock_nested
   cc1: some warnings being treated as errors


vim +306 kernel/locking/spinlock.c

   303	
   304	void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass)
   305	{
 > 306		__raw_write_lock_nested(lock, subclass);
   307	}
   308	EXPORT_SYMBOL(_raw_write_lock_nested);
   309	#endif
   310	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 35183 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested
  2021-11-19 18:21     ` Minchan Kim
@ 2021-11-20 15:38       ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 15+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-11-20 15:38 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Sergey Senozhatsky, linux-mm, LKML,
	Peter Zijlstra, Thomas Gleixner

On 2021-11-19 10:21:37 [-0800], Minchan Kim wrote:
> > #define rt_write_lock_nested(lock, subclass)     rt_write_lock(((void)(subclass), (lock)))
> 
> Guess you meant #define write_lock_nested.

indeed, yes.

> I should have Cced you. Thanks for the catch.
> If it's fine, Andrew, could you fold it?

You are welcome. I tested the series in my RT queue and it works.

Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

> Thank you.

Sebastian

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2021-11-20 15:39 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-15 18:59 [PATCH v2 0/9] zsmalloc: remove bit_spin_lock Minchan Kim
2021-11-15 18:59 ` [PATCH v2 1/9] zsmalloc: introduce some helper functions Minchan Kim
2021-11-15 18:59 ` [PATCH v2 2/9] zsmalloc: rename zs_stat_type to class_stat_type Minchan Kim
2021-11-15 18:59 ` [PATCH v2 3/9] zsmalloc: decouple class actions from zspage works Minchan Kim
2021-11-15 18:59 ` [PATCH v2 4/9] zsmalloc: introduce obj_allocated Minchan Kim
2021-11-15 18:59 ` [PATCH v2 5/9] zsmalloc: move huge compressed obj from page to zspage Minchan Kim
2021-11-15 18:59 ` [PATCH v2 6/9] zsmalloc: remove zspage isolation for migration Minchan Kim
2021-11-15 18:59 ` [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested Minchan Kim
2021-11-16 10:27   ` Peter Zijlstra
2021-11-19 10:35   ` Sebastian Andrzej Siewior
2021-11-19 18:21     ` Minchan Kim
2021-11-20 15:38       ` Sebastian Andrzej Siewior
2021-11-20  3:50   ` kernel test robot
2021-11-15 18:59 ` [PATCH v2 8/9] zsmalloc: replace per zpage lock with pool->migrate_lock Minchan Kim
2021-11-15 18:59 ` [PATCH v2 9/9] zsmalloc: replace get_cpu_var with local_lock Minchan Kim
