* [PATCH 0/2] zsmalloc: small compaction improvements
@ 2023-06-23  4:40 Sergey Senozhatsky
From: Sergey Senozhatsky @ 2023-06-23  4:40 UTC
  To: Minchan Kim, Andrew Morton; +Cc: linux-mm, linux-kernel, Sergey Senozhatsky

Hi,
	A tiny series that can reduce the number of
find_alloced_obj() invocations (which perform a linear
scan of a sub-page) during compaction. Inspired by Alexey
Romanov's findings.
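
As a rough illustration of the saved work, here is a toy
userspace model (not kernel code; every name in it, such as
toy_zspage and toy_find_alloced, is made up for illustration)
that counts how many scan steps a find_alloced_obj()-style
helper performs with and without the early exit from patch 1:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define OBJS_PER_ZSPAGE	8

struct toy_zspage {
	bool used[OBJS_PER_ZSPAGE];
	int inuse;
};

static int scan_steps;	/* stand-in for the linear scan cost */

/* Linear scan for the next allocated slot at or after *idx; -1 if none. */
static int toy_find_alloced(struct toy_zspage *zs, int *idx)
{
	for (; *idx < OBJS_PER_ZSPAGE; (*idx)++) {
		scan_steps++;
		if (zs->used[*idx])
			return *idx;
	}
	return -1;
}

static void toy_migrate(struct toy_zspage *src, bool early_exit)
{
	int idx = 0;

	while (1) {
		int obj = toy_find_alloced(src, &idx);

		if (obj < 0)
			break;
		src->used[obj] = false;	/* "move" the object away */
		src->inuse--;
		idx++;
		/* patch 1: stop once the source zspage is empty */
		if (early_exit && src->inuse == 0)
			break;
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 2; i++) {
		struct toy_zspage zs;

		memset(&zs, 0, sizeof(zs));
		zs.used[1] = zs.used[2] = true;	/* sparsely used zspage */
		zs.inuse = 2;
		scan_steps = 0;
		toy_migrate(&zs, i);
		printf("early_exit=%d: %d scan steps\n", i, scan_steps);
	}
	return 0;
}

For this particular layout the model does 8 scan steps without
the early exit and 3 with it; the actual savings depend on how
sparsely the source zspage is populated.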

Sergey Senozhatsky (2):
  zsmalloc: do not scan for allocated objects in empty zspage
  zsmalloc: move migration destination zspage inuse check

 mm/zsmalloc.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

-- 
2.41.0.162.gfafddb0af9-goog



* [PATCH 1/2] zsmalloc: do not scan for allocated objects in empty zspage
From: Sergey Senozhatsky @ 2023-06-23  4:40 UTC
  To: Minchan Kim, Andrew Morton
  Cc: linux-mm, linux-kernel, Sergey Senozhatsky, Alexey Romanov

zspage migration can terminate as soon as it moves the last
allocated object from the source zspage.  Add a simple helper
zspage_empty() that tests zspage ->inuse on each migration
iteration.

Suggested-by: Alexey Romanov <AVRomanov@sberdevices.ru>
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3f057970504e..5d60eaedc3b7 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1147,6 +1147,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
 	return get_zspage_inuse(zspage) == class->objs_per_zspage;
 }
 
+static bool zspage_empty(struct zspage *zspage)
+{
+	return get_zspage_inuse(zspage) == 0;
+}
+
 /**
  * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
  * that hold objects of the provided size.
@@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		obj_idx++;
 		record_obj(handle, free_obj);
 		obj_free(class->size, used_obj);
+
+		/* Stop if there are no more objects to migrate */
+		if (zspage_empty(get_zspage(s_page)))
+			break;
 	}
 
 	/* Remember last position in this iteration */
-- 
2.41.0.162.gfafddb0af9-goog



* [PATCH 2/2] zsmalloc: move migration destination zspage inuse check
From: Sergey Senozhatsky @ 2023-06-23  4:40 UTC
  To: Minchan Kim, Andrew Morton; +Cc: linux-mm, linux-kernel, Sergey Senozhatsky

The destination zspage fullness check needs to be done after
zs_object_copy() because that's where the source and
destination zspages' fullness changes.  Checking destination
zspage fullness before zs_object_copy() may cause migration
to loop through source zspage sub-pages scanning for
allocated objects just to find out at the end that the
destination zspage is full.
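
For reference, here is a condensed sketch of migrate_zspage()'s
main loop as it looks with both patches in this series applied
(pieced together from the diffs; the sub-page advance on a
missing handle and all locking are omitted):

	while (1) {
		handle = find_alloced_obj(class, s_page, &obj_idx);
		if (!handle)
			break;	/* simplified; the real code advances to the next sub-page */

		used_obj = handle_to_obj(handle);
		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
		zs_object_copy(class, free_obj, used_obj);
		obj_idx++;
		record_obj(handle, free_obj);
		obj_free(class->size, used_obj);

		/* fullness changes only in the copy above, so check it here */
		if (zspage_full(class, get_zspage(d_page)))
			break;

		/* and stop as soon as the source zspage is drained */
		if (zspage_empty(get_zspage(s_page)))
			break;
	}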

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5d60eaedc3b7..4a84f7877669 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1620,10 +1620,6 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
 			continue;
 		}
 
-		/* Stop if there is no more space */
-		if (zspage_full(class, get_zspage(d_page)))
-			break;
-
 		used_obj = handle_to_obj(handle);
 		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
@@ -1631,6 +1627,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		record_obj(handle, free_obj);
 		obj_free(class->size, used_obj);
 
+		/* Stop if there is no more space */
+		if (zspage_full(class, get_zspage(d_page)))
+			break;
+
 		/* Stop if there are no more objects to migrate */
 		if (zspage_empty(get_zspage(s_page)))
 			break;
-- 
2.41.0.162.gfafddb0af9-goog



* Re: [PATCH 1/2] zsmalloc: do not scan for allocated objects in empty zspage
From: Alexey Romanov @ 2023-06-23 10:49 UTC
  To: Sergey Senozhatsky; +Cc: Minchan Kim, Andrew Morton, linux-mm, linux-kernel

Hello!

On Fri, Jun 23, 2023 at 01:40:01PM +0900, Sergey Senozhatsky wrote:
> zspage migration can terminate as soon as it moves the last
> allocated object from the source zspage.  Add a simple helper
> zspage_empty() that tests zspage ->inuse on each migration
> iteration.
> 
> Suggested-by: Alexey Romanov <AVRomanov@sberdevices.ru>
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> ---
>  mm/zsmalloc.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 3f057970504e..5d60eaedc3b7 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1147,6 +1147,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
>  	return get_zspage_inuse(zspage) == class->objs_per_zspage;
>  }
>  
> +static bool zspage_empty(struct zspage *zspage)
> +{
> +	return get_zspage_inuse(zspage) == 0;
> +}
> +
>  /**
>   * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
>   * that hold objects of the provided size.
> @@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
>  		obj_idx++;
>  		record_obj(handle, free_obj);
>  		obj_free(class->size, used_obj);
> +
> +		/* Stop if there are no more objects to migrate */
> +		if (zspage_empty(get_zspage(s_page)))
> +			break;
>  	}
>  
>  	/* Remember last position in this iteration */
> -- 
> 2.41.0.162.gfafddb0af9-goog
> 

I think we can add a similar check in the zs_reclaim_page()
function.  There we also scan the zspage to find allocated objects.

-- 
Thank you,
Alexey


* Re: [PATCH 0/2] zsmalloc: small compaction improvements
From: Minchan Kim @ 2023-06-23 17:03 UTC
  To: Sergey Senozhatsky; +Cc: Andrew Morton, linux-mm, linux-kernel

On Fri, Jun 23, 2023 at 01:40:00PM +0900, Sergey Senozhatsky wrote:
> Hi,
> 	A tiny series that can reduce the number of
> find_alloced_obj() invocations (which perform a linear
> scan of a sub-page) during compaction. Inspired by Alexey
> Romanov's findings.
> 

Both patches look good to me.

While at it, can we have a little more cleanup after these two patches?

From 3cfd4bb0bf395e271f270ee16cb08964ff785a3a Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Fri, 23 Jun 2023 09:45:33 -0700
Subject: [PATCH] zsmalloc: remove zs_compact_control

__zs_compact() always puts src_zspage back into the class list
after migrate_zspage(). Thus, we no longer need to remember the
last position in src_zspage. Let's remove it.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 37 +++++++++----------------------------
 1 file changed, 9 insertions(+), 28 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 4a84f7877669..84beadc088b8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1590,25 +1590,14 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	return find_tagged_obj(class, page, obj_idx, OBJ_ALLOCATED_TAG);
 }
 
-struct zs_compact_control {
-	/* Source spage for migration which could be a subpage of zspage */
-	struct page *s_page;
-	/* Destination page for migration which should be a first page
-	 * of zspage. */
-	struct page *d_page;
-	 /* Starting object index within @s_page which used for live object
-	  * in the subpage. */
-	int obj_idx;
-};
-
-static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
-			   struct zs_compact_control *cc)
+static void migrate_zspage(struct zs_pool *pool, struct zspage *src_zspage,
+			   struct zspage *dst_zspage)
 {
 	unsigned long used_obj, free_obj;
 	unsigned long handle;
-	struct page *s_page = cc->s_page;
-	struct page *d_page = cc->d_page;
-	int obj_idx = cc->obj_idx;
+	int obj_idx = 0;
+	struct page *s_page = get_first_page(src_zspage);
+	struct size_class *class = pool->size_class[src_zspage->class];
 
 	while (1) {
 		handle = find_alloced_obj(class, s_page, &obj_idx);
@@ -1621,24 +1610,20 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		}
 
 		used_obj = handle_to_obj(handle);
-		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
+		free_obj = obj_malloc(pool, dst_zspage, handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
 		record_obj(handle, free_obj);
 		obj_free(class->size, used_obj);
 
 		/* Stop if there is no more space */
-		if (zspage_full(class, get_zspage(d_page)))
+		if (zspage_full(class, dst_zspage))
 			break;
 
 		/* Stop if there are no more objects to migrate */
-		if (zspage_empty(get_zspage(s_page)))
+		if (zspage_empty(src_zspage))
 			break;
 	}
-
-	/* Remember last position in this iteration */
-	cc->s_page = s_page;
-	cc->obj_idx = obj_idx;
 }
 
 static struct zspage *isolate_src_zspage(struct size_class *class)
@@ -2013,7 +1998,6 @@ static unsigned long zs_can_compact(struct size_class *class)
 static unsigned long __zs_compact(struct zs_pool *pool,
 				  struct size_class *class)
 {
-	struct zs_compact_control cc;
 	struct zspage *src_zspage = NULL;
 	struct zspage *dst_zspage = NULL;
 	unsigned long pages_freed = 0;
@@ -2031,7 +2015,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			if (!dst_zspage)
 				break;
 			migrate_write_lock(dst_zspage);
-			cc.d_page = get_first_page(dst_zspage);
 		}
 
 		src_zspage = isolate_src_zspage(class);
@@ -2040,9 +2023,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 
 		migrate_write_lock_nested(src_zspage);
 
-		cc.obj_idx = 0;
-		cc.s_page = get_first_page(src_zspage);
-		migrate_zspage(pool, class, &cc);
+		migrate_zspage(pool, src_zspage, dst_zspage);
 		fg = putback_zspage(class, src_zspage);
 		migrate_write_unlock(src_zspage);
 
-- 
2.41.0.178.g377b9f9a00-goog



* Re: [PATCH 1/2] zsmalloc: do not scan for allocated objects in empty zspage
From: Sergey Senozhatsky @ 2023-06-24  2:29 UTC
  To: Alexey Romanov
  Cc: Sergey Senozhatsky, Minchan Kim, Andrew Morton, linux-mm, linux-kernel

On (23/06/23 10:49), Alexey Romanov wrote:
> > +static bool zspage_empty(struct zspage *zspage)
> > +{
> > +	return get_zspage_inuse(zspage) == 0;
> > +}
> > +
> >  /**
> >   * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
> >   * that hold objects of the provided size.
> > @@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
> >  		obj_idx++;
> >  		record_obj(handle, free_obj);
> >  		obj_free(class->size, used_obj);
> > +
> > +		/* Stop if there are no more objects to migrate */
> > +		if (zspage_empty(get_zspage(s_page)))
> > +			break;
> >  	}
> >  
> >  	/* Remember last position in this iteration */
> > -- 
> > 2.41.0.162.gfafddb0af9-goog
> > 
> 
> I think we can add a similar check in the zs_reclaim_page()
> function.  There we also scan the zspage to find allocated objects.

The LRU code was moved to zswap, so zs_reclaim_page() no longer
exists (in linux-next).


* Re: [PATCH 0/2] zsmalloc: small compaction improvements
From: Sergey Senozhatsky @ 2023-06-24  5:00 UTC
  To: Minchan Kim; +Cc: Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel

On (23/06/23 10:03), Minchan Kim wrote:
> On Fri, Jun 23, 2023 at 01:40:00PM +0900, Sergey Senozhatsky wrote:
> > Hi,
> > 	A tiny series that can reduce the number of
> > find_alloced_obj() invocations (which perform a linear
> > scan of sub-page) during compaction. Inspired by Alexey
> > Romanov's findings.
> > 
> 
> Both patches look good to me.

Thanks.

> While at it, can we have a little more cleanup after these two patches?

Looks good. I'll pick it up for v2.


* Re: [PATCH 1/2] zsmalloc: do not scan for allocated objects in empty zspage
From: Alexey Romanov @ 2023-06-26 10:54 UTC
  To: Sergey Senozhatsky; +Cc: Minchan Kim, Andrew Morton, linux-mm, linux-kernel

On Sat, Jun 24, 2023 at 11:29:17AM +0900, Sergey Senozhatsky wrote:
> On (23/06/23 10:49), Alexey Romanov wrote:
> > > +static bool zspage_empty(struct zspage *zspage)
> > > +{
> > > +	return get_zspage_inuse(zspage) == 0;
> > > +}
> > > +
> > >  /**
> > >   * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
> > >   * that hold objects of the provided size.
> > > @@ -1625,6 +1630,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
> > >  		obj_idx++;
> > >  		record_obj(handle, free_obj);
> > >  		obj_free(class->size, used_obj);
> > > +
> > > +		/* Stop if there are no more objects to migrate */
> > > +		if (zspage_empty(get_zspage(s_page)))
> > > +			break;
> > >  	}
> > >  
> > >  	/* Remember last position in this iteration */
> > > -- 
> > > 2.41.0.162.gfafddb0af9-goog
> > > 
> > 
> > I think we can add a similar check in the zs_reclaim_page()
> > function.  There we also scan the zspage to find allocated objects.
> 
> The LRU code was moved to zswap, so zs_reclaim_page() no longer
> exists (in linux-next).

Yeah, sorry. I was just looking at the current Linux master.

-- 
Thank you,
Alexey

