linux-kernel.vger.kernel.org archive mirror
* [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
@ 2020-11-20  8:03 Alex Shi
  2020-11-20 23:13 ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Alex Shi @ 2020-11-20  8:03 UTC (permalink / raw)
  Cc: Andrew Morton, Hugh Dickins, Yu Zhao, Vlastimil Babka,
	Michal Hocko, linux-mm, linux-kernel

The function only returns two results, so using a 'switch' on its return
value is unnecessary; simplify it to a bool function as Vlastimil
suggested.

Also remove the 'goto' by reusing list_move().

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/compaction.c |  2 +-
 mm/vmscan.c     | 53 ++++++++++++++++++++++++-------------------------
 2 files changed, 27 insertions(+), 28 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index b68931854253..8d71ffebe6cb 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -988,7 +988,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (unlikely(!get_page_unless_zero(page)))
 			goto isolate_fail;
 
-		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
+		if (!__isolate_lru_page_prepare(page, isolate_mode))
 			goto isolate_fail_put;
 
 		/* Try isolate the page */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c6f94e55c3fe..7f32c1979804 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1540,7 +1540,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
  */
 int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 {
-	int ret = -EBUSY;
+	int ret = false;
 
 	/* Only take pages on the LRU. */
 	if (!PageLRU(page))
@@ -1590,7 +1590,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
 		return ret;
 
-	return 0;
+	return true;
 }
 
 /*
@@ -1674,35 +1674,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		 * only when the page is being freed somewhere else.
 		 */
 		scan += nr_pages;
-		switch (__isolate_lru_page_prepare(page, mode)) {
-		case 0:
+		if (!__isolate_lru_page_prepare(page, mode)) {
+			/* else it is being freed elsewhere */
+			list_move(&page->lru, src);
+			continue;
+		}
+		/*
+		 * Be careful not to clear PageLRU until after we're
+		 * sure the page is not being freed elsewhere -- the
+		 * page release code relies on it.
+		 */
+		if (unlikely(!get_page_unless_zero(page))) {
+			list_move(&page->lru, src);
+			continue;
+		}
+
+		if (!TestClearPageLRU(page)) {
 			/*
-			 * Be careful not to clear PageLRU until after we're
-			 * sure the page is not being freed elsewhere -- the
-			 * page release code relies on it.
+			 * This page may in other isolation path,
+			 * but we still hold lru_lock.
 			 */
-			if (unlikely(!get_page_unless_zero(page)))
-				goto busy;
-
-			if (!TestClearPageLRU(page)) {
-				/*
-				 * This page may in other isolation path,
-				 * but we still hold lru_lock.
-				 */
-				put_page(page);
-				goto busy;
-			}
-
-			nr_taken += nr_pages;
-			nr_zone_taken[page_zonenum(page)] += nr_pages;
-			list_move(&page->lru, dst);
-			break;
-
-		default:
-busy:
-			/* else it is being freed elsewhere */
+			put_page(page);
 			list_move(&page->lru, src);
+			continue;
 		}
+
+		nr_taken += nr_pages;
+		nr_zone_taken[page_zonenum(page)] += nr_pages;
+		list_move(&page->lru, dst);
 	}
 
 	/*
-- 
2.29.GIT


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-20  8:03 [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up Alex Shi
@ 2020-11-20 23:13 ` Andrew Morton
  2020-11-22 12:00   ` Alex Shi
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2020-11-20 23:13 UTC (permalink / raw)
  To: Alex Shi
  Cc: Hugh Dickins, Yu Zhao, Vlastimil Babka, Michal Hocko, linux-mm,
	linux-kernel

On Fri, 20 Nov 2020 16:03:33 +0800 Alex Shi <alex.shi@linux.alibaba.com> wrote:

> The function only returns two results, so using a 'switch' on its return
> value is unnecessary; simplify it to a bool function as Vlastimil
> suggested.
> 
> Also remove the 'goto' by reusing list_move().
> 
> ...
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1540,7 +1540,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>   */
>  int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  {
> -	int ret = -EBUSY;
> +	int ret = false;
>  
>  	/* Only take pages on the LRU. */
>  	if (!PageLRU(page))
> @@ -1590,7 +1590,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
>  		return ret;
>  
> -	return 0;
> +	return true;
>  }

The resulting __isolate_lru_page_prepare() is rather unpleasing.

- Why return an int and not a bool?

- `int ret = false' is a big hint that `ret' should have bool type!

- Why not just remove `ret' and do `return false' in all those `return
  ret' places?

- The __isolate_lru_page_prepare() kerneldoc still says "returns 0 on
  success, -ve errno on failure".  


* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-20 23:13 ` Andrew Morton
@ 2020-11-22 12:00   ` Alex Shi
  2020-11-22 12:35     ` Matthew Wilcox
  0 siblings, 1 reply; 10+ messages in thread
From: Alex Shi @ 2020-11-22 12:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Hugh Dickins, Yu Zhao, Vlastimil Babka, Michal Hocko, linux-mm,
	linux-kernel



On 2020/11/21 7:13 AM, Andrew Morton wrote:
> On Fri, 20 Nov 2020 16:03:33 +0800 Alex Shi <alex.shi@linux.alibaba.com> wrote:
> 
>> The function only returns two results, so using a 'switch' on its return
>> value is unnecessary; simplify it to a bool function as Vlastimil
>> suggested.
>>
>> Also remove the 'goto' by reusing list_move().
>>
>> ...
>>
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1540,7 +1540,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>>   */
>>  int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>>  {
>> -	int ret = -EBUSY;
>> +	int ret = false;
>>  
>>  	/* Only take pages on the LRU. */
>>  	if (!PageLRU(page))
>> @@ -1590,7 +1590,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>>  	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
>>  		return ret;
>>  
>> -	return 0;
>> +	return true;
>>  }
> 
> The resulting __isolate_lru_page_prepare() is rather unpleasing.
> 
> - Why return an int and not a bool?
> 
> - `int ret = false' is a big hint that `ret' should have bool type!
> 
> - Why not just remove `ret' and do `return false' in all those `return
>   ret' places?
> 
> - The __isolate_lru_page_prepare() kerneldoc still says "returns 0 on
>   success, -ve errno on failure".  
> 

Hi Andrew,

Thanks a lot for catching these, and sorry for the bad patch.
It was initially an 'int' version, and I changed it to bool in a hurry over
the weekend. I am sorry.

From 36c4fbda2d55633d3c1a3e79f045cd9877453ab7 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@linux.alibaba.com>
Date: Fri, 20 Nov 2020 14:49:16 +0800
Subject: [PATCH v2 next] mm/vmscan: __isolate_lru_page_prepare clean up

The function only returns two results, so using a 'switch' on its return
value is unnecessary; simplify it to a bool function as Vlastimil
suggested.

Also remove the 'goto' by reusing list_move().

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/compaction.c |  2 +-
 mm/vmscan.c     | 69 +++++++++++++++++++++++--------------------------
 2 files changed, 34 insertions(+), 37 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index b68931854253..8d71ffebe6cb 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -988,7 +988,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (unlikely(!get_page_unless_zero(page)))
 			goto isolate_fail;
 
-		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
+		if (!__isolate_lru_page_prepare(page, isolate_mode))
 			goto isolate_fail_put;
 
 		/* Try isolate the page */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c6f94e55c3fe..ab2fdee0828e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1536,19 +1536,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
  * page:	page to consider
  * mode:	one of the LRU isolation modes defined above
  *
- * returns 0 on success, -ve errno on failure.
+ * returns ture on success, false on failure.
  */
-int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
+bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 {
-	int ret = -EBUSY;
-
 	/* Only take pages on the LRU. */
 	if (!PageLRU(page))
-		return ret;
+		return false;
 
 	/* Compaction should not handle unevictable pages but CMA can do so */
 	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
-		return ret;
+		return false;
 
 	/*
 	 * To minimise LRU disruption, the caller can indicate that it only
@@ -1561,7 +1559,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 	if (mode & ISOLATE_ASYNC_MIGRATE) {
 		/* All the caller can do on PageWriteback is block */
 		if (PageWriteback(page))
-			return ret;
+			return false;
 
 		if (PageDirty(page)) {
 			struct address_space *mapping;
@@ -1577,20 +1575,20 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 			 * from the page cache.
 			 */
 			if (!trylock_page(page))
-				return ret;
+				return false;
 
 			mapping = page_mapping(page);
 			migrate_dirty = !mapping || mapping->a_ops->migratepage;
 			unlock_page(page);
 			if (!migrate_dirty)
-				return ret;
+				return false;
 		}
 	}
 
 	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
-		return ret;
+		return false;
 
-	return 0;
+	return true;
 }
 
 /*
@@ -1674,35 +1672,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		 * only when the page is being freed somewhere else.
 		 */
 		scan += nr_pages;
-		switch (__isolate_lru_page_prepare(page, mode)) {
-		case 0:
+		if (!__isolate_lru_page_prepare(page, mode)) {
+			/* else it is being freed elsewhere */
+			list_move(&page->lru, src);
+			continue;
+		}
+		/*
+		 * Be careful not to clear PageLRU until after we're
+		 * sure the page is not being freed elsewhere -- the
+		 * page release code relies on it.
+		 */
+		if (unlikely(!get_page_unless_zero(page))) {
+			list_move(&page->lru, src);
+			continue;
+		}
+
+		if (!TestClearPageLRU(page)) {
 			/*
-			 * Be careful not to clear PageLRU until after we're
-			 * sure the page is not being freed elsewhere -- the
-			 * page release code relies on it.
+			 * This page may in other isolation path,
+			 * but we still hold lru_lock.
 			 */
-			if (unlikely(!get_page_unless_zero(page)))
-				goto busy;
-
-			if (!TestClearPageLRU(page)) {
-				/*
-				 * This page may in other isolation path,
-				 * but we still hold lru_lock.
-				 */
-				put_page(page);
-				goto busy;
-			}
-
-			nr_taken += nr_pages;
-			nr_zone_taken[page_zonenum(page)] += nr_pages;
-			list_move(&page->lru, dst);
-			break;
-
-		default:
-busy:
-			/* else it is being freed elsewhere */
+			put_page(page);
 			list_move(&page->lru, src);
+			continue;
 		}
+
+		nr_taken += nr_pages;
+		nr_zone_taken[page_zonenum(page)] += nr_pages;
+		list_move(&page->lru, dst);
 	}
 
 	/*
-- 
2.29.GIT



* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-22 12:00   ` Alex Shi
@ 2020-11-22 12:35     ` Matthew Wilcox
  2020-11-22 14:00       ` Alex Shi
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2020-11-22 12:35 UTC (permalink / raw)
  To: Alex Shi
  Cc: Andrew Morton, Hugh Dickins, Yu Zhao, Vlastimil Babka,
	Michal Hocko, linux-mm, linux-kernel

On Sun, Nov 22, 2020 at 08:00:19PM +0800, Alex Shi wrote:
>  mm/compaction.c |  2 +-
>  mm/vmscan.c     | 69 +++++++++++++++++++++++--------------------------
>  2 files changed, 34 insertions(+), 37 deletions(-)

How is it possible you're changing the signature of a function without
touching a header file?  Surely __isolate_lru_page_prepare() must be declared
in mm/internal.h ?

> +++ b/mm/vmscan.c
> @@ -1536,19 +1536,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>   * page:	page to consider
>   * mode:	one of the LRU isolation modes defined above
>   *
> - * returns 0 on success, -ve errno on failure.
> + * returns ture on success, false on failure.

"true".

> @@ -1674,35 +1672,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  		 * only when the page is being freed somewhere else.
>  		 */
>  		scan += nr_pages;
> -		switch (__isolate_lru_page_prepare(page, mode)) {
> -		case 0:
> +		if (!__isolate_lru_page_prepare(page, mode)) {
> +			/* else it is being freed elsewhere */

I don't think the word "else" helps here.  Just
			/* It is being freed elsewhere */

> +		if (!TestClearPageLRU(page)) {
>  			/*
> +			 * This page may in other isolation path,
> +			 * but we still hold lru_lock.
>  			 */

I don't think this comment helps me understand what's going on here.
Maybe:

			/* Another thread is already isolating this page */

> +			put_page(page);
>  			list_move(&page->lru, src);
> +			continue;
>  		}


* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-22 12:35     ` Matthew Wilcox
@ 2020-11-22 14:00       ` Alex Shi
  2020-11-24 11:21         ` Vlastimil Babka
  0 siblings, 1 reply; 10+ messages in thread
From: Alex Shi @ 2020-11-22 14:00 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Hugh Dickins, Yu Zhao, Vlastimil Babka,
	Michal Hocko, linux-mm, linux-kernel



On 2020/11/22 8:35 PM, Matthew Wilcox wrote:
> On Sun, Nov 22, 2020 at 08:00:19PM +0800, Alex Shi wrote:
>>  mm/compaction.c |  2 +-
>>  mm/vmscan.c     | 69 +++++++++++++++++++++++--------------------------
>>  2 files changed, 34 insertions(+), 37 deletions(-)
> 
> How is it possible you're changing the signature of a function without
> touching a header file?  Surely __isolate_lru_page_prepare() must be declared
> in mm/internal.h ?
> 
>> +++ b/mm/vmscan.c
>> @@ -1536,19 +1536,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>>   * page:	page to consider
>>   * mode:	one of the LRU isolation modes defined above
>>   *
>> - * returns 0 on success, -ve errno on failure.
>> + * returns ture on success, false on failure.
> 
> "true".
> 
>> @@ -1674,35 +1672,34 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>>  		 * only when the page is being freed somewhere else.
>>  		 */
>>  		scan += nr_pages;
>> -		switch (__isolate_lru_page_prepare(page, mode)) {
>> -		case 0:
>> +		if (!__isolate_lru_page_prepare(page, mode)) {
>> +			/* else it is being freed elsewhere */
> 
> I don't think the word "else" helps here.  Just
> 			/* It is being freed elsewhere */
> 
>> +		if (!TestClearPageLRU(page)) {
>>  			/*
>> +			 * This page may in other isolation path,
>> +			 * but we still hold lru_lock.
>>  			 */
> 
> I don't think this comment helps me understand what's going on here.
> Maybe:
> 
> 			/* Another thread is already isolating this page */
> 
>> +			put_page(page);
>>  			list_move(&page->lru, src);
>> +			continue;
>>  		}

Hi Matthew,

Thanks a lot for all the comments; I picked them all up and here is v3:

From 167131dd106a96fd08af725df850e0da6ec899af Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@linux.alibaba.com>
Date: Fri, 20 Nov 2020 14:49:16 +0800
Subject: [PATCH v3 next] mm/vmscan: __isolate_lru_page_prepare clean up

The function only returns two results, so using a 'switch' on its return
value is unnecessary; simplify it to a bool function as Vlastimil
suggested.

Also remove the 'goto' by reusing list_move(), and take Matthew Wilcox's
suggestion to update the comments in the function.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/swap.h |  2 +-
 mm/compaction.c      |  2 +-
 mm/vmscan.c          | 68 ++++++++++++++++++++------------------------
 3 files changed, 33 insertions(+), 39 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 596bc2f4d9b0..5bba15ac5a2e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -356,7 +356,7 @@ extern void lru_cache_add_inactive_or_unevictable(struct page *page,
 extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
-extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
+extern bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 						  unsigned long nr_pages,
 						  gfp_t gfp_mask,
diff --git a/mm/compaction.c b/mm/compaction.c
index b68931854253..8d71ffebe6cb 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -988,7 +988,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (unlikely(!get_page_unless_zero(page)))
 			goto isolate_fail;
 
-		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
+		if (!__isolate_lru_page_prepare(page, isolate_mode))
 			goto isolate_fail_put;
 
 		/* Try isolate the page */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c6f94e55c3fe..4d2703c43310 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1536,19 +1536,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
  * page:	page to consider
  * mode:	one of the LRU isolation modes defined above
  *
- * returns 0 on success, -ve errno on failure.
+ * returns true on success, false on failure.
  */
-int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
+bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 {
-	int ret = -EBUSY;
-
 	/* Only take pages on the LRU. */
 	if (!PageLRU(page))
-		return ret;
+		return false;
 
 	/* Compaction should not handle unevictable pages but CMA can do so */
 	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
-		return ret;
+		return false;
 
 	/*
 	 * To minimise LRU disruption, the caller can indicate that it only
@@ -1561,7 +1559,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 	if (mode & ISOLATE_ASYNC_MIGRATE) {
 		/* All the caller can do on PageWriteback is block */
 		if (PageWriteback(page))
-			return ret;
+			return false;
 
 		if (PageDirty(page)) {
 			struct address_space *mapping;
@@ -1577,20 +1575,20 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
 			 * from the page cache.
 			 */
 			if (!trylock_page(page))
-				return ret;
+				return false;
 
 			mapping = page_mapping(page);
 			migrate_dirty = !mapping || mapping->a_ops->migratepage;
 			unlock_page(page);
 			if (!migrate_dirty)
-				return ret;
+				return false;
 		}
 	}
 
 	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
-		return ret;
+		return false;
 
-	return 0;
+	return true;
 }
 
 /*
@@ -1674,35 +1672,31 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		 * only when the page is being freed somewhere else.
 		 */
 		scan += nr_pages;
-		switch (__isolate_lru_page_prepare(page, mode)) {
-		case 0:
-			/*
-			 * Be careful not to clear PageLRU until after we're
-			 * sure the page is not being freed elsewhere -- the
-			 * page release code relies on it.
-			 */
-			if (unlikely(!get_page_unless_zero(page)))
-				goto busy;
-
-			if (!TestClearPageLRU(page)) {
-				/*
-				 * This page may in other isolation path,
-				 * but we still hold lru_lock.
-				 */
-				put_page(page);
-				goto busy;
-			}
-
-			nr_taken += nr_pages;
-			nr_zone_taken[page_zonenum(page)] += nr_pages;
-			list_move(&page->lru, dst);
-			break;
+		if (!__isolate_lru_page_prepare(page, mode)) {
+			/* It is being freed elsewhere */
+			list_move(&page->lru, src);
+			continue;
+		}
+		/*
+		 * Be careful not to clear PageLRU until after we're
+		 * sure the page is not being freed elsewhere -- the
+		 * page release code relies on it.
+		 */
+		if (unlikely(!get_page_unless_zero(page))) {
+			list_move(&page->lru, src);
+			continue;
+		}
 
-		default:
-busy:
-			/* else it is being freed elsewhere */
+		if (!TestClearPageLRU(page)) {
+			/* Another thread is already isolating this page */
+			put_page(page);
 			list_move(&page->lru, src);
+			continue;
 		}
+
+		nr_taken += nr_pages;
+		nr_zone_taken[page_zonenum(page)] += nr_pages;
+		list_move(&page->lru, dst);
 	}
 
 	/*
-- 
2.29.GIT



* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-22 14:00       ` Alex Shi
@ 2020-11-24 11:21         ` Vlastimil Babka
  2020-11-25 23:43           ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Vlastimil Babka @ 2020-11-24 11:21 UTC (permalink / raw)
  To: Alex Shi, Matthew Wilcox
  Cc: Andrew Morton, Hugh Dickins, Yu Zhao, Michal Hocko, linux-mm,
	linux-kernel

On 11/22/20 3:00 PM, Alex Shi wrote:
> Thanks a lot for all the comments; I picked them all up and here is v3:
> 
>  From 167131dd106a96fd08af725df850e0da6ec899af Mon Sep 17 00:00:00 2001
> From: Alex Shi <alex.shi@linux.alibaba.com>
> Date: Fri, 20 Nov 2020 14:49:16 +0800
> Subject: [PATCH v3 next] mm/vmscan: __isolate_lru_page_prepare clean up
> 
> The function only returns two results, so using a 'switch' on its return
> value is unnecessary; simplify it to a bool function as Vlastimil
> suggested.
> 
> Also remove the 'goto' by reusing list_move(), and take Matthew Wilcox's
> suggestion to update the comments in the function.

I wouldn't mind if the goto stayed, but the repetition without it
(list_move() + continue, three times) is not that bad, so...

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Yu Zhao <yuzhao@google.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>   include/linux/swap.h |  2 +-
>   mm/compaction.c      |  2 +-
>   mm/vmscan.c          | 68 ++++++++++++++++++++------------------------
>   3 files changed, 33 insertions(+), 39 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 596bc2f4d9b0..5bba15ac5a2e 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -356,7 +356,7 @@ extern void lru_cache_add_inactive_or_unevictable(struct page *page,
>   extern unsigned long zone_reclaimable_pages(struct zone *zone);
>   extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>   					gfp_t gfp_mask, nodemask_t *mask);
> -extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
> +extern bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
>   extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>   						  unsigned long nr_pages,
>   						  gfp_t gfp_mask,
> diff --git a/mm/compaction.c b/mm/compaction.c
> index b68931854253..8d71ffebe6cb 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -988,7 +988,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>   		if (unlikely(!get_page_unless_zero(page)))
>   			goto isolate_fail;
>   
> -		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
> +		if (!__isolate_lru_page_prepare(page, isolate_mode))
>   			goto isolate_fail_put;
>   
>   		/* Try isolate the page */
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c6f94e55c3fe..4d2703c43310 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1536,19 +1536,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>    * page:	page to consider
>    * mode:	one of the LRU isolation modes defined above
>    *
> - * returns 0 on success, -ve errno on failure.
> + * returns true on success, false on failure.
>    */
> -int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
> +bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>   {
> -	int ret = -EBUSY;
> -
>   	/* Only take pages on the LRU. */
>   	if (!PageLRU(page))
> -		return ret;
> +		return false;
>   
>   	/* Compaction should not handle unevictable pages but CMA can do so */
>   	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
> -		return ret;
> +		return false;
>   
>   	/*
>   	 * To minimise LRU disruption, the caller can indicate that it only
> @@ -1561,7 +1559,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>   	if (mode & ISOLATE_ASYNC_MIGRATE) {
>   		/* All the caller can do on PageWriteback is block */
>   		if (PageWriteback(page))
> -			return ret;
> +			return false;
>   
>   		if (PageDirty(page)) {
>   			struct address_space *mapping;
> @@ -1577,20 +1575,20 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>   			 * from the page cache.
>   			 */
>   			if (!trylock_page(page))
> -				return ret;
> +				return false;
>   
>   			mapping = page_mapping(page);
>   			migrate_dirty = !mapping || mapping->a_ops->migratepage;
>   			unlock_page(page);
>   			if (!migrate_dirty)
> -				return ret;
> +				return false;
>   		}
>   	}
>   
>   	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
> -		return ret;
> +		return false;
>   
> -	return 0;
> +	return true;
>   }
>   
>   /*
> @@ -1674,35 +1672,31 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>   		 * only when the page is being freed somewhere else.
>   		 */
>   		scan += nr_pages;
> -		switch (__isolate_lru_page_prepare(page, mode)) {
> -		case 0:
> -			/*
> -			 * Be careful not to clear PageLRU until after we're
> -			 * sure the page is not being freed elsewhere -- the
> -			 * page release code relies on it.
> -			 */
> -			if (unlikely(!get_page_unless_zero(page)))
> -				goto busy;
> -
> -			if (!TestClearPageLRU(page)) {
> -				/*
> -				 * This page may in other isolation path,
> -				 * but we still hold lru_lock.
> -				 */
> -				put_page(page);
> -				goto busy;
> -			}
> -
> -			nr_taken += nr_pages;
> -			nr_zone_taken[page_zonenum(page)] += nr_pages;
> -			list_move(&page->lru, dst);
> -			break;
> +		if (!__isolate_lru_page_prepare(page, mode)) {
> +			/* It is being freed elsewhere */
> +			list_move(&page->lru, src);
> +			continue;
> +		}
> +		/*
> +		 * Be careful not to clear PageLRU until after we're
> +		 * sure the page is not being freed elsewhere -- the
> +		 * page release code relies on it.
> +		 */
> +		if (unlikely(!get_page_unless_zero(page))) {
> +			list_move(&page->lru, src);
> +			continue;
> +		}
>   
> -		default:
> -busy:
> -			/* else it is being freed elsewhere */
> +		if (!TestClearPageLRU(page)) {
> +			/* Another thread is already isolating this page */
> +			put_page(page);
>   			list_move(&page->lru, src);
> +			continue;
>   		}
> +
> +		nr_taken += nr_pages;
> +		nr_zone_taken[page_zonenum(page)] += nr_pages;
> +		list_move(&page->lru, dst);
>   	}
>   
>   	/*
> 



* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-24 11:21         ` Vlastimil Babka
@ 2020-11-25 23:43           ` Andrew Morton
  2020-11-26  2:25             ` Alex Shi
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2020-11-25 23:43 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Alex Shi, Matthew Wilcox, Hugh Dickins, Yu Zhao, Michal Hocko,
	linux-mm, linux-kernel

On Tue, 24 Nov 2020 12:21:28 +0100 Vlastimil Babka <vbabka@suse.cz> wrote:

> On 11/22/20 3:00 PM, Alex Shi wrote:
> > Thanks a lot for all the comments; I picked them all up and here is v3:
> > 
> >  From 167131dd106a96fd08af725df850e0da6ec899af Mon Sep 17 00:00:00 2001
> > From: Alex Shi <alex.shi@linux.alibaba.com>
> > Date: Fri, 20 Nov 2020 14:49:16 +0800
> > Subject: [PATCH v3 next] mm/vmscan: __isolate_lru_page_prepare clean up
> > 
> > The function only returns two results, so using a 'switch' on its return
> > value is unnecessary; simplify it to a bool function as Vlastimil
> > suggested.
> > 
> > Also remove the 'goto' by reusing list_move(), and take Matthew Wilcox's
> > suggestion to update the comments in the function.
> 
> I wouldn't mind if the goto stayed, but it's not repeating that much 
> without it (list_move() + continue, 3 times) so...

I tried that, and .text became significantly larger, for reasons which
I didn't investigate ;)



* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-25 23:43           ` Andrew Morton
@ 2020-11-26  2:25             ` Alex Shi
  2020-11-26 15:23               ` Vlastimil Babka
  0 siblings, 1 reply; 10+ messages in thread
From: Alex Shi @ 2020-11-26  2:25 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka
  Cc: Matthew Wilcox, Hugh Dickins, Yu Zhao, Michal Hocko, linux-mm,
	linux-kernel



On 2020/11/26 7:43 AM, Andrew Morton wrote:
> On Tue, 24 Nov 2020 12:21:28 +0100 Vlastimil Babka <vbabka@suse.cz> wrote:
> 
>> On 11/22/20 3:00 PM, Alex Shi wrote:
>>> Thanks a lot for all the comments; I picked them all up and here is v3:
>>>
>>>  From 167131dd106a96fd08af725df850e0da6ec899af Mon Sep 17 00:00:00 2001
>>> From: Alex Shi <alex.shi@linux.alibaba.com>
>>> Date: Fri, 20 Nov 2020 14:49:16 +0800
>>> Subject: [PATCH v3 next] mm/vmscan: __isolate_lru_page_prepare clean up
>>>
>>> The function only returns two results, so using a 'switch' on its return
>>> value is unnecessary; simplify it to a bool function as Vlastimil
>>> suggested.
>>>
>>> Also remove the 'goto' by reusing list_move(), and take Matthew Wilcox's
>>> suggestion to update the comments in the function.
>>
>> I wouldn't mind if the goto stayed, but the repetition without it
>> (list_move() + continue, three times) is not that bad, so...
> 
> I tried that, and .text became significantly larger, for reasons which
> I didn't investigate ;)
> 


Uh, BTW, with gcc 8.3.1 on CentOS 7, the goto and continue versions have the
same size on my side, with or without DEBUG_LIST. But this clean-up patch
could still add 10 bytes, again with or without DEBUG_LIST.

Maybe it is related to the different compiler?

Thanks
Alex


* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-26  2:25             ` Alex Shi
@ 2020-11-26 15:23               ` Vlastimil Babka
  2020-11-27  1:56                 ` Alex Shi
  0 siblings, 1 reply; 10+ messages in thread
From: Vlastimil Babka @ 2020-11-26 15:23 UTC (permalink / raw)
  To: Alex Shi, Andrew Morton
  Cc: Matthew Wilcox, Hugh Dickins, Yu Zhao, Michal Hocko, linux-mm,
	linux-kernel

On 11/26/20 3:25 AM, Alex Shi wrote:
> 
> 
> On 2020/11/26 7:43 AM, Andrew Morton wrote:
>> On Tue, 24 Nov 2020 12:21:28 +0100 Vlastimil Babka <vbabka@suse.cz> wrote:
>> 
>>> On 11/22/20 3:00 PM, Alex Shi wrote:
>>>> Thanks a lot for all the comments; I picked them all up and here is v3:
>>>>
>>>>  From 167131dd106a96fd08af725df850e0da6ec899af Mon Sep 17 00:00:00 2001
>>>> From: Alex Shi <alex.shi@linux.alibaba.com>
>>>> Date: Fri, 20 Nov 2020 14:49:16 +0800
>>>> Subject: [PATCH v3 next] mm/vmscan: __isolate_lru_page_prepare clean up
>>>>
>>>> The function returns only two results, so using a 'switch' to handle its
>>>> result is unnecessary; simplify it to a bool function, as Vlastimil
>>>> suggested.
>>>>
>>>> Also remove the 'goto' by reusing list_move(), and take Matthew Wilcox's
>>>> suggestion to update the comments in the function.
>>>
>>> I wouldn't mind if the goto stayed, but it's not repeating that much 
>>> without it (list_move() + continue, 3 times) so...
>> 
>> I tried that, and .text became significantly larger, for reasons which
>> I didn't investigate ;)

I found out that comparing the whole .text often doesn't work, as changes might be lost in
alignment or, once in a while, cross an alignment boundary and become exaggerated.
bloat-o-meter works nicely, though.

> Uh, BTW, with gcc 8.3.1 on CentOS 7, the goto and continue versions have the
> same size on my side, with or without DEBUG_LIST. But this cleanup patch does
> add 10 bytes, again with or without DEBUG_LIST.
> 
> Maybe that is related to the different compilers?

gcc (SUSE Linux) 10.2.1 20201117 [revision 98ba03ffe0b9f37b4916ce6238fad754e00d720b]

./scripts/bloat-o-meter vmscan.o.before mm/vmscan.o
add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-1 (-1)
Function                                     old     new   delta
isolate_lru_pages                           1125    1124      -1
Total: Before=57283, After=57282, chg -0.00%

Not surprising, as I'd expect the compiler to figure out by itself that the list_move +
continue sequence repeats and can be unified. The remaining reason for the goto to stay
would be readability (subjective).

> Thanks
> Alex
> 



* Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
  2020-11-26 15:23               ` Vlastimil Babka
@ 2020-11-27  1:56                 ` Alex Shi
  0 siblings, 0 replies; 10+ messages in thread
From: Alex Shi @ 2020-11-27  1:56 UTC (permalink / raw)
  To: Vlastimil Babka, Andrew Morton
  Cc: Matthew Wilcox, Hugh Dickins, Yu Zhao, Michal Hocko, linux-mm,
	linux-kernel



On 2020/11/26 11:23 PM, Vlastimil Babka wrote:
>>>
>>> I tried that, and .text became significantly larger, for reasons which
>>> I didn't investigate ;)
> 
> I found out that comparing the whole .text often doesn't work, as changes might be lost in
> alignment or, once in a while, cross an alignment boundary and become exaggerated.
> bloat-o-meter works nicely, though.
> 
>> Uh, BTW, with gcc 8.3.1 on CentOS 7, the goto and continue versions have the
>> same size on my side, with or without DEBUG_LIST. But this cleanup patch does
>> add 10 bytes, again with or without DEBUG_LIST.
>>
>> Maybe that is related to the different compilers?
> 
> gcc (SUSE Linux) 10.2.1 20201117 [revision 98ba03ffe0b9f37b4916ce6238fad754e00d720b]
> 
> ./scripts/bloat-o-meter vmscan.o.before mm/vmscan.o
> add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-1 (-1)
> Function                                     old     new   delta
> isolate_lru_pages                           1125    1124      -1
> Total: Before=57283, After=57282, chg -0.00%
> 
> Not surprising, as I'd expect the compiler to figure out by itself that the list_move +
> continue sequence repeats and can be unified. The remaining reason for the goto to stay
> would be readability (subjective).

Hi Vlastimil,

Thanks for sharing the tool! The different gcc versions do give different results.

My data was read from the 'size' tool, and the isolate_lru_pages text size from
'objdump -d'. That is maybe a similar approach to bloat-o-meter. :)

Thanks
Alex


end of thread, other threads:[~2020-11-27  1:56 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-20  8:03 [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up Alex Shi
2020-11-20 23:13 ` Andrew Morton
2020-11-22 12:00   ` Alex Shi
2020-11-22 12:35     ` Matthew Wilcox
2020-11-22 14:00       ` Alex Shi
2020-11-24 11:21         ` Vlastimil Babka
2020-11-25 23:43           ` Andrew Morton
2020-11-26  2:25             ` Alex Shi
2020-11-26 15:23               ` Vlastimil Babka
2020-11-27  1:56                 ` Alex Shi
