linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] mm/zsmalloc: fix and optimize objects/page migration
@ 2024-02-19 13:33 Chengming Zhou
  2024-02-19 13:33 ` [PATCH 1/3] mm/zsmalloc: fix migrate_write_lock() when !CONFIG_COMPACTION Chengming Zhou
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Chengming Zhou @ 2024-02-19 13:33 UTC (permalink / raw)
  To: nphamcs, yosryahmed, Sergey Senozhatsky, Minchan Kim,
	Andrew Morton, hannes
  Cc: linux-mm, Chengming Zhou, linux-kernel

Hello,

This series fixes and optimizes zsmalloc object/page migration.

Patch 01 fixes the empty migrate_write_lock() stubs used when !CONFIG_COMPACTION.
Patch 02 removes migrate_write_lock_nested() from object migration.
Patch 03 removes the unused zspage->isolated counter from page migration.

Thanks for your review and comments!

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
Chengming Zhou (3):
      mm/zsmalloc: fix migrate_write_lock() when !CONFIG_COMPACTION
      mm/zsmalloc: remove migrate_write_lock_nested()
      mm/zsmalloc: remove unused zspage->isolated

 mm/zsmalloc.c | 63 ++++++++---------------------------------------------------
 1 file changed, 8 insertions(+), 55 deletions(-)
---
base-commit: 9951769060d8f5eb001acaca67c1439d2cfe1c6b
change-id: 20240219-b4-szmalloc-migrate-92971221bb01

Best regards,
-- 
Chengming Zhou <zhouchengming@bytedance.com>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/3] mm/zsmalloc: fix migrate_write_lock() when !CONFIG_COMPACTION
  2024-02-19 13:33 [PATCH 0/3] mm/zsmalloc: fix and optimize objects/page migration Chengming Zhou
@ 2024-02-19 13:33 ` Chengming Zhou
  2024-02-19 13:33 ` [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested() Chengming Zhou
  2024-02-19 13:33 ` [PATCH 3/3] mm/zsmalloc: remove unused zspage->isolated Chengming Zhou
  2 siblings, 0 replies; 9+ messages in thread
From: Chengming Zhou @ 2024-02-19 13:33 UTC (permalink / raw)
  To: nphamcs, yosryahmed, Sergey Senozhatsky, Minchan Kim,
	Andrew Morton, hannes
  Cc: linux-mm, Chengming Zhou, linux-kernel

migrate_write_lock() is an empty function when !CONFIG_COMPACTION, yet
in that case zs_compact() can still be triggered from the shrinker
reclaim context. (Maybe it would be better to rename it to zs_shrink()?)

zspage object-map users rely on migrate_read_lock() to guarantee that a
mapped object won't be migrated elsewhere, which only holds if the
write side actually takes the lock.

Fix it by always implementing the migrate_write_lock() related
functions, regardless of CONFIG_COMPACTION.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 mm/zsmalloc.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c937635e0ad1..64d5533fa5d8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -278,18 +278,15 @@ static bool ZsHugePage(struct zspage *zspage)
 static void migrate_lock_init(struct zspage *zspage);
 static void migrate_read_lock(struct zspage *zspage);
 static void migrate_read_unlock(struct zspage *zspage);
-
-#ifdef CONFIG_COMPACTION
 static void migrate_write_lock(struct zspage *zspage);
 static void migrate_write_lock_nested(struct zspage *zspage);
 static void migrate_write_unlock(struct zspage *zspage);
+
+#ifdef CONFIG_COMPACTION
 static void kick_deferred_free(struct zs_pool *pool);
 static void init_deferred_free(struct zs_pool *pool);
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage);
 #else
-static void migrate_write_lock(struct zspage *zspage) {}
-static void migrate_write_lock_nested(struct zspage *zspage) {}
-static void migrate_write_unlock(struct zspage *zspage) {}
 static void kick_deferred_free(struct zs_pool *pool) {}
 static void init_deferred_free(struct zs_pool *pool) {}
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {}
@@ -1725,7 +1722,6 @@ static void migrate_read_unlock(struct zspage *zspage) __releases(&zspage->lock)
 	read_unlock(&zspage->lock);
 }
 
-#ifdef CONFIG_COMPACTION
 static void migrate_write_lock(struct zspage *zspage)
 {
 	write_lock(&zspage->lock);
@@ -1741,6 +1737,7 @@ static void migrate_write_unlock(struct zspage *zspage)
 	write_unlock(&zspage->lock);
 }
 
+#ifdef CONFIG_COMPACTION
 /* Number of isolated subpage for *page migration* in this zspage */
 static void inc_zspage_isolation(struct zspage *zspage)
 {

-- 
b4 0.10.1


* [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()
  2024-02-19 13:33 [PATCH 0/3] mm/zsmalloc: fix and optimize objects/page migration Chengming Zhou
  2024-02-19 13:33 ` [PATCH 1/3] mm/zsmalloc: fix migrate_write_lock() when !CONFIG_COMPACTION Chengming Zhou
@ 2024-02-19 13:33 ` Chengming Zhou
  2024-02-20  4:48   ` Sergey Senozhatsky
  2024-02-19 13:33 ` [PATCH 3/3] mm/zsmalloc: remove unused zspage->isolated Chengming Zhou
  2 siblings, 1 reply; 9+ messages in thread
From: Chengming Zhou @ 2024-02-19 13:33 UTC (permalink / raw)
  To: nphamcs, yosryahmed, Sergey Senozhatsky, Minchan Kim,
	Andrew Morton, hannes
  Cc: linux-mm, Chengming Zhou, linux-kernel

The migrate write lock protects against races between zspage migration
and the map users of zspage objects.

We only need to lock out the map users of the src zspage, not the dst
zspage, which users can safely map concurrently, since we only do
obj_malloc() from the dst zspage.

So we can remove the migrate_write_lock_nested() use case.

While at it, clean up __zs_compact() by moving putback_zspage() outside
of the migrate write lock: since we still hold the pool lock, no malloc
or free users can come in.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 mm/zsmalloc.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 64d5533fa5d8..f2ae7d4c6f21 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -279,7 +279,6 @@ static void migrate_lock_init(struct zspage *zspage);
 static void migrate_read_lock(struct zspage *zspage);
 static void migrate_read_unlock(struct zspage *zspage);
 static void migrate_write_lock(struct zspage *zspage);
-static void migrate_write_lock_nested(struct zspage *zspage);
 static void migrate_write_unlock(struct zspage *zspage);
 
 #ifdef CONFIG_COMPACTION
@@ -1727,11 +1726,6 @@ static void migrate_write_lock(struct zspage *zspage)
 	write_lock(&zspage->lock);
 }
 
-static void migrate_write_lock_nested(struct zspage *zspage)
-{
-	write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);
-}
-
 static void migrate_write_unlock(struct zspage *zspage)
 {
 	write_unlock(&zspage->lock);
@@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			dst_zspage = isolate_dst_zspage(class);
 			if (!dst_zspage)
 				break;
-			migrate_write_lock(dst_zspage);
 		}
 
 		src_zspage = isolate_src_zspage(class);
 		if (!src_zspage)
 			break;
 
-		migrate_write_lock_nested(src_zspage);
-
+		migrate_write_lock(src_zspage);
 		migrate_zspage(pool, src_zspage, dst_zspage);
-		fg = putback_zspage(class, src_zspage);
 		migrate_write_unlock(src_zspage);
 
+		fg = putback_zspage(class, src_zspage);
 		if (fg == ZS_INUSE_RATIO_0) {
 			free_zspage(pool, class, src_zspage);
 			pages_freed += class->pages_per_zspage;
@@ -2025,7 +2017,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
 		    || spin_is_contended(&pool->lock)) {
 			putback_zspage(class, dst_zspage);
-			migrate_write_unlock(dst_zspage);
 			dst_zspage = NULL;
 
 			spin_unlock(&pool->lock);
@@ -2034,15 +2025,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		}
 	}
 
-	if (src_zspage) {
+	if (src_zspage)
 		putback_zspage(class, src_zspage);
-		migrate_write_unlock(src_zspage);
-	}
 
-	if (dst_zspage) {
+	if (dst_zspage)
 		putback_zspage(class, dst_zspage);
-		migrate_write_unlock(dst_zspage);
-	}
+
 	spin_unlock(&pool->lock);
 
 	return pages_freed;

-- 
b4 0.10.1


* [PATCH 3/3] mm/zsmalloc: remove unused zspage->isolated
  2024-02-19 13:33 [PATCH 0/3] mm/zsmalloc: fix and optimize objects/page migration Chengming Zhou
  2024-02-19 13:33 ` [PATCH 1/3] mm/zsmalloc: fix migrate_write_lock() when !CONFIG_COMPACTION Chengming Zhou
  2024-02-19 13:33 ` [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested() Chengming Zhou
@ 2024-02-19 13:33 ` Chengming Zhou
  2 siblings, 0 replies; 9+ messages in thread
From: Chengming Zhou @ 2024-02-19 13:33 UTC (permalink / raw)
  To: nphamcs, yosryahmed, Sergey Senozhatsky, Minchan Kim,
	Andrew Morton, hannes
  Cc: linux-mm, Chengming Zhou, linux-kernel

zspage->isolated is not used anywhere, so there is no need to maintain
it, especially since updating it requires taking the heavy pool lock.
Just remove it.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 mm/zsmalloc.c | 32 --------------------------------
 1 file changed, 32 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f2ae7d4c6f21..a48f4651d143 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -116,7 +116,6 @@
 #define HUGE_BITS	1
 #define FULLNESS_BITS	4
 #define CLASS_BITS	8
-#define ISOLATED_BITS	5
 #define MAGIC_VAL_BITS	8
 
 #define MAX(a, b) ((a) >= (b) ? (a) : (b))
@@ -246,7 +245,6 @@ struct zspage {
 		unsigned int huge:HUGE_BITS;
 		unsigned int fullness:FULLNESS_BITS;
 		unsigned int class:CLASS_BITS + 1;
-		unsigned int isolated:ISOLATED_BITS;
 		unsigned int magic:MAGIC_VAL_BITS;
 	};
 	unsigned int inuse;
@@ -1732,17 +1730,6 @@ static void migrate_write_unlock(struct zspage *zspage)
 }
 
 #ifdef CONFIG_COMPACTION
-/* Number of isolated subpage for *page migration* in this zspage */
-static void inc_zspage_isolation(struct zspage *zspage)
-{
-	zspage->isolated++;
-}
-
-static void dec_zspage_isolation(struct zspage *zspage)
-{
-	VM_BUG_ON(zspage->isolated == 0);
-	zspage->isolated--;
-}
 
 static const struct movable_operations zsmalloc_mops;
 
@@ -1771,21 +1758,12 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
-	struct zs_pool *pool;
-	struct zspage *zspage;
-
 	/*
 	 * Page is locked so zspage couldn't be destroyed. For detail, look at
 	 * lock_zspage in free_zspage.
 	 */
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 
-	zspage = get_zspage(page);
-	pool = zspage->pool;
-	spin_lock(&pool->lock);
-	inc_zspage_isolation(zspage);
-	spin_unlock(&pool->lock);
-
 	return true;
 }
 
@@ -1850,7 +1828,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	kunmap_atomic(s_addr);
 
 	replace_sub_page(class, zspage, newpage, page);
-	dec_zspage_isolation(zspage);
 	/*
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release the pool's lock.
@@ -1872,16 +1849,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 static void zs_page_putback(struct page *page)
 {
-	struct zs_pool *pool;
-	struct zspage *zspage;
-
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
-
-	zspage = get_zspage(page);
-	pool = zspage->pool;
-	spin_lock(&pool->lock);
-	dec_zspage_isolation(zspage);
-	spin_unlock(&pool->lock);
 }
 
 static const struct movable_operations zsmalloc_mops = {

-- 
b4 0.10.1


* Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()
  2024-02-19 13:33 ` [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested() Chengming Zhou
@ 2024-02-20  4:48   ` Sergey Senozhatsky
  2024-02-20  4:51     ` Chengming Zhou
  0 siblings, 1 reply; 9+ messages in thread
From: Sergey Senozhatsky @ 2024-02-20  4:48 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: nphamcs, yosryahmed, Sergey Senozhatsky, Minchan Kim,
	Andrew Morton, hannes, linux-mm, linux-kernel

On (24/02/19 13:33), Chengming Zhou wrote:
>  static void migrate_write_unlock(struct zspage *zspage)
>  {
>  	write_unlock(&zspage->lock);
> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  			dst_zspage = isolate_dst_zspage(class);
>  			if (!dst_zspage)
>  				break;
> -			migrate_write_lock(dst_zspage);
>  		}
>  
>  		src_zspage = isolate_src_zspage(class);
>  		if (!src_zspage)
>  			break;
>  
> -		migrate_write_lock_nested(src_zspage);
> -
> +		migrate_write_lock(src_zspage);
>  		migrate_zspage(pool, src_zspage, dst_zspage);
> -		fg = putback_zspage(class, src_zspage);
>  		migrate_write_unlock(src_zspage);
>  
> +		fg = putback_zspage(class, src_zspage);

Hmm. Lockless putback doesn't look right to me. We modify critical
zspage fields in putback_zspage().

>  		if (fg == ZS_INUSE_RATIO_0) {
>  			free_zspage(pool, class, src_zspage);
>  			pages_freed += class->pages_per_zspage;
> @@ -2025,7 +2017,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
>  		    || spin_is_contended(&pool->lock)) {
>  			putback_zspage(class, dst_zspage);
> -			migrate_write_unlock(dst_zspage);
>  			dst_zspage = NULL;
>  
>  			spin_unlock(&pool->lock);
> @@ -2034,15 +2025,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		}
>  	}
>  
> -	if (src_zspage) {
> +	if (src_zspage)
>  		putback_zspage(class, src_zspage);
> -		migrate_write_unlock(src_zspage);
> -	}
>  
> -	if (dst_zspage) {
> +	if (dst_zspage)
>  		putback_zspage(class, dst_zspage);
> -		migrate_write_unlock(dst_zspage);
> -	}


* Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()
  2024-02-20  4:48   ` Sergey Senozhatsky
@ 2024-02-20  4:51     ` Chengming Zhou
  2024-02-20  4:53       ` Sergey Senozhatsky
  0 siblings, 1 reply; 9+ messages in thread
From: Chengming Zhou @ 2024-02-20  4:51 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: nphamcs, yosryahmed, Minchan Kim, Andrew Morton, hannes,
	linux-mm, linux-kernel

On 2024/2/20 12:48, Sergey Senozhatsky wrote:
> On (24/02/19 13:33), Chengming Zhou wrote:
>>  static void migrate_write_unlock(struct zspage *zspage)
>>  {
>>  	write_unlock(&zspage->lock);
>> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>  			dst_zspage = isolate_dst_zspage(class);
>>  			if (!dst_zspage)
>>  				break;
>> -			migrate_write_lock(dst_zspage);
>>  		}
>>  
>>  		src_zspage = isolate_src_zspage(class);
>>  		if (!src_zspage)
>>  			break;
>>  
>> -		migrate_write_lock_nested(src_zspage);
>> -
>> +		migrate_write_lock(src_zspage);
>>  		migrate_zspage(pool, src_zspage, dst_zspage);
>> -		fg = putback_zspage(class, src_zspage);
>>  		migrate_write_unlock(src_zspage);
>>  
>> +		fg = putback_zspage(class, src_zspage);
> 
> Hmm. Lockless putback doesn't look right to me. We modify critical
> zspage fields in putback_zspage().

Which I think is protected by pool->lock, right? We already hold it.

> 
>>  		if (fg == ZS_INUSE_RATIO_0) {
>>  			free_zspage(pool, class, src_zspage);
>>  			pages_freed += class->pages_per_zspage;
>> @@ -2025,7 +2017,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>  		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
>>  		    || spin_is_contended(&pool->lock)) {
>>  			putback_zspage(class, dst_zspage);
>> -			migrate_write_unlock(dst_zspage);
>>  			dst_zspage = NULL;
>>  
>>  			spin_unlock(&pool->lock);
>> @@ -2034,15 +2025,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>  		}
>>  	}
>>  
>> -	if (src_zspage) {
>> +	if (src_zspage)
>>  		putback_zspage(class, src_zspage);
>> -		migrate_write_unlock(src_zspage);
>> -	}
>>  
>> -	if (dst_zspage) {
>> +	if (dst_zspage)
>>  		putback_zspage(class, dst_zspage);
>> -		migrate_write_unlock(dst_zspage);
>> -	}


* Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()
  2024-02-20  4:51     ` Chengming Zhou
@ 2024-02-20  4:53       ` Sergey Senozhatsky
  2024-02-20  4:59         ` Chengming Zhou
  0 siblings, 1 reply; 9+ messages in thread
From: Sergey Senozhatsky @ 2024-02-20  4:53 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: Sergey Senozhatsky, nphamcs, yosryahmed, Minchan Kim,
	Andrew Morton, hannes, linux-mm, linux-kernel

On (24/02/20 12:51), Chengming Zhou wrote:
> On 2024/2/20 12:48, Sergey Senozhatsky wrote:
> > On (24/02/19 13:33), Chengming Zhou wrote:
> >>  static void migrate_write_unlock(struct zspage *zspage)
> >>  {
> >>  	write_unlock(&zspage->lock);
> >> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >>  			dst_zspage = isolate_dst_zspage(class);
> >>  			if (!dst_zspage)
> >>  				break;
> >> -			migrate_write_lock(dst_zspage);
> >>  		}
> >>  
> >>  		src_zspage = isolate_src_zspage(class);
> >>  		if (!src_zspage)
> >>  			break;
> >>  
> >> -		migrate_write_lock_nested(src_zspage);
> >> -
> >> +		migrate_write_lock(src_zspage);
> >>  		migrate_zspage(pool, src_zspage, dst_zspage);
> >> -		fg = putback_zspage(class, src_zspage);
> >>  		migrate_write_unlock(src_zspage);
> >>  
> >> +		fg = putback_zspage(class, src_zspage);
> > 
> > Hmm. Lockless putback doesn't look right to me. We modify critical
> > zspage fields in putback_zspage().
> 
> Which I think is protected by pool->lock, right? We already hold it.

Not really. We have, for example, the following patterns:

	get_zspage_mapping()
	spin_lock(&pool->lock)


* Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()
  2024-02-20  4:53       ` Sergey Senozhatsky
@ 2024-02-20  4:59         ` Chengming Zhou
  2024-02-20  5:02           ` Sergey Senozhatsky
  0 siblings, 1 reply; 9+ messages in thread
From: Chengming Zhou @ 2024-02-20  4:59 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: nphamcs, yosryahmed, Minchan Kim, Andrew Morton, hannes,
	linux-mm, linux-kernel

On 2024/2/20 12:53, Sergey Senozhatsky wrote:
> On (24/02/20 12:51), Chengming Zhou wrote:
>> On 2024/2/20 12:48, Sergey Senozhatsky wrote:
>>> On (24/02/19 13:33), Chengming Zhou wrote:
>>>>  static void migrate_write_unlock(struct zspage *zspage)
>>>>  {
>>>>  	write_unlock(&zspage->lock);
>>>> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>>>  			dst_zspage = isolate_dst_zspage(class);
>>>>  			if (!dst_zspage)
>>>>  				break;
>>>> -			migrate_write_lock(dst_zspage);
>>>>  		}
>>>>  
>>>>  		src_zspage = isolate_src_zspage(class);
>>>>  		if (!src_zspage)
>>>>  			break;
>>>>  
>>>> -		migrate_write_lock_nested(src_zspage);
>>>> -
>>>> +		migrate_write_lock(src_zspage);
>>>>  		migrate_zspage(pool, src_zspage, dst_zspage);
>>>> -		fg = putback_zspage(class, src_zspage);
>>>>  		migrate_write_unlock(src_zspage);
>>>>  
>>>> +		fg = putback_zspage(class, src_zspage);
>>>
>>> Hmm. Lockless putback doesn't look right to me. We modify critical
>>> zspage fields in putback_zspage().
>>
>> Which I think is protected by pool->lock, right? We already hold it.
> 
> Not really. We have, for example, the following patterns:
> 
> 	get_zspage_mapping()
> 	spin_lock(&pool->lock)

Right, this pattern is actually not safe, since we can't get a stable
fullness value for a zspage outside pool->lock.

But this pattern is only used in the free_zspage() path, so it should
be ok: we don't use the fullness value returned from
get_zspage_mapping() there, only the class value to look up the class.

Anyway, this pattern is confusing; I think we should clean that up?

Thanks.


* Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()
  2024-02-20  4:59         ` Chengming Zhou
@ 2024-02-20  5:02           ` Sergey Senozhatsky
  0 siblings, 0 replies; 9+ messages in thread
From: Sergey Senozhatsky @ 2024-02-20  5:02 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: Sergey Senozhatsky, nphamcs, yosryahmed, Minchan Kim,
	Andrew Morton, hannes, linux-mm, linux-kernel

On (24/02/20 12:59), Chengming Zhou wrote:
> On 2024/2/20 12:53, Sergey Senozhatsky wrote:
> > On (24/02/20 12:51), Chengming Zhou wrote:
> >> On 2024/2/20 12:48, Sergey Senozhatsky wrote:
> >>> On (24/02/19 13:33), Chengming Zhou wrote:
> >>> [..]
> > 
> > Not really. We have, for example, the following patterns:
> > 
> > 	get_zspage_mapping()
> > 	spin_lock(&pool->lock)
> 
> Right, this pattern is not safe actually, since we can't get stable fullness
> value of zspage outside pool->lock.
> 
> But this pattern usage is only used in free_zspage path, so should be ok.
> Actually we don't use the fullness value returned from get_zspage_mapping()
> in the free_zspage() path, only use the class value to get the class.
> 
> Anyway, this pattern is confusing, I think we should clean up that?

Right, looks so.


Thread overview: 9+ messages (thread ended 2024-02-20  5:02 UTC)
2024-02-19 13:33 [PATCH 0/3] mm/zsmalloc: fix and optimize objects/page migration Chengming Zhou
2024-02-19 13:33 ` [PATCH 1/3] mm/zsmalloc: fix migrate_write_lock() when !CONFIG_COMPACTION Chengming Zhou
2024-02-19 13:33 ` [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested() Chengming Zhou
2024-02-20  4:48   ` Sergey Senozhatsky
2024-02-20  4:51     ` Chengming Zhou
2024-02-20  4:53       ` Sergey Senozhatsky
2024-02-20  4:59         ` Chengming Zhou
2024-02-20  5:02           ` Sergey Senozhatsky
2024-02-19 13:33 ` [PATCH 3/3] mm/zsmalloc: remove unused zspage->isolated Chengming Zhou
