* Re: [PATCH 1/2] Revert "page cache: fix page_cache_next/prev_miss off by one"
@ 2023-07-24 12:15 Thomas Backlund
  0 siblings, 0 replies; 5+ messages in thread
From: Thomas Backlund @ 2023-07-24 12:15 UTC (permalink / raw)
  To: Mike Kravetz, linux-kernel, linux-mm, linux-fsdevel
  Cc: Matthew Wilcox, Ackerley Tng, Sidhartha Kumar, Muchun Song,
	Vishal Annapurve, Erdem Aktas, Greg Kroah-Hartman, Andrew Morton,
	kernel test robot

On 2023-06-22 at 00:24, Mike Kravetz wrote:
> This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9
> 
> The reverted commit fixed up routines primarily used by readahead code
> such that they could also be used by hugetlb.  Unfortunately, this
> caused a performance regression as pointed out by the Closes: tag.
> 
> The hugetlb code which uses page_cache_next_miss will be addressed in
> a subsequent patch.
> 
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202306211346.1e9ff03e-oliver.sang@intel.com
> Fixes: 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one")
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>


Shouldn't this one be submitted to the 6.4 stable branch too?


git describe --contains 9425c591e06a
v6.4-rc7~29^2~1

The other patch in this series (hugetlb: revert use of
page_cache_next_miss()) landed in 6.4.2.

Or am I missing something?

--
Thomas

> ---
>   mm/filemap.c | 26 ++++++++++----------------
>   1 file changed, 10 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 3b73101f9f86..9e44a49bbd74 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1728,9 +1728,7 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
>    *
>    * Return: The index of the gap if found, otherwise an index outside the
>    * range specified (in which case 'return - index >= max_scan' will be true).
> - * In the rare case of index wrap-around, 0 will be returned.  0 will also
> - * be returned if index == 0 and there is a gap at the index.  We can not
> - * wrap-around if passed index == 0.
> + * In the rare case of index wrap-around, 0 will be returned.
>    */
>   pgoff_t page_cache_next_miss(struct address_space *mapping,
>   			     pgoff_t index, unsigned long max_scan)
> @@ -1740,13 +1738,12 @@ pgoff_t page_cache_next_miss(struct address_space *mapping,
>   	while (max_scan--) {
>   		void *entry = xas_next(&xas);
>   		if (!entry || xa_is_value(entry))
> -			return xas.xa_index;
> -		if (xas.xa_index == 0 && index != 0)
> -			return xas.xa_index;
> +			break;
> +		if (xas.xa_index == 0)
> +			break;
>   	}
>   
> -	/* No gaps in range and no wrap-around, return index beyond range */
> -	return xas.xa_index + 1;
> +	return xas.xa_index;
>   }
>   EXPORT_SYMBOL(page_cache_next_miss);
>   
> @@ -1767,9 +1764,7 @@ EXPORT_SYMBOL(page_cache_next_miss);
>    *
>    * Return: The index of the gap if found, otherwise an index outside the
>    * range specified (in which case 'index - return >= max_scan' will be true).
> - * In the rare case of wrap-around, ULONG_MAX will be returned.  ULONG_MAX
> - * will also be returned if index == ULONG_MAX and there is a gap at the
> - * index.  We can not wrap-around if passed index == ULONG_MAX.
> + * In the rare case of wrap-around, ULONG_MAX will be returned.
>    */
>   pgoff_t page_cache_prev_miss(struct address_space *mapping,
>   			     pgoff_t index, unsigned long max_scan)
> @@ -1779,13 +1774,12 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
>   	while (max_scan--) {
>   		void *entry = xas_prev(&xas);
>   		if (!entry || xa_is_value(entry))
> -			return xas.xa_index;
> -		if (xas.xa_index == ULONG_MAX && index != ULONG_MAX)
> -			return xas.xa_index;
> +			break;
> +		if (xas.xa_index == ULONG_MAX)
> +			break;
>   	}
>   
> -	/* No gaps in range and no wrap-around, return index beyond range */
> -	return xas.xa_index - 1;
> +	return xas.xa_index;
>   }
>   EXPORT_SYMBOL(page_cache_prev_miss);
>   




* Re: [PATCH 1/2] Revert "page cache: fix page_cache_next/prev_miss off by one"
  2023-06-21 22:18 ` Andrew Morton
@ 2023-06-21 23:01   ` Mike Kravetz
  0 siblings, 0 replies; 5+ messages in thread
From: Mike Kravetz @ 2023-06-21 23:01 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, linux-fsdevel, Matthew Wilcox,
	Ackerley Tng, Sidhartha Kumar, Muchun Song, Vishal Annapurve,
	Erdem Aktas, Greg Kroah-Hartman, kernel test robot

On 06/21/23 15:18, Andrew Morton wrote:
> On Wed, 21 Jun 2023 14:24:02 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:
> 
> > This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9
> > 
> > The reverted commit fixed up routines primarily used by readahead code
> > such that they could also be used by hugetlb.  Unfortunately, this
> > caused a performance regression as pointed out by the Closes: tag.
> > 
> > The hugetlb code which uses page_cache_next_miss will be addressed in
> > a subsequent patch.
> 
> Often these throughput changes are caused by rather random
> alignment/layout changes and the code change itself was innocent.
> 
> Do we have an explanation for this regression, or was it a surprise?

It was not a total surprise.  As mentioned, the primary user of this
interface is the readahead code.  The code in question is in
ondemand_readahead.

		rcu_read_lock();
		start = page_cache_next_miss(ractl->mapping, index + 1,
				max_pages);
		rcu_read_unlock();

		if (!start || start - index > max_pages)
			return;

With the changes being reverted here (commit 9425c591e06a), we take that
quick return when there are no gaps in the range.  Previously we did not.
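
To make the boundary concrete, here is a minimal userspace sketch of that
check.  The 'stop' value below is only an assumed stand-in for wherever a
gap-free scan actually ends (I am not asserting the exact XArray iteration
details); the point is that the one-index difference between returning the
stop index (restored by this revert) and one past it (the reverted commit)
is enough to flip the result.

	/* Toy model of the readahead check above; not kernel code. */
	#include <stdbool.h>
	#include <stdio.h>

	static bool quick_return(unsigned long start, unsigned long index,
				 unsigned long max_pages)
	{
		/* same condition as in ondemand_readahead() quoted above */
		return !start || start - index > max_pages;
	}

	int main(void)
	{
		unsigned long index = 100, max_pages = 32;
		/* assumed index at which a gap-free scan stops */
		unsigned long stop = (index + 1) + max_pages - 1;

		/* convention restored by this revert: return the stop index */
		printf("return stop:     %d\n",
		       quick_return(stop, index, max_pages));
		/* convention of the reverted commit: one past the stop index */
		printf("return stop + 1: %d\n",
		       quick_return(stop + 1, index, max_pages));
		return 0;
	}

With these assumed values, only the 'stop + 1' convention trips the quick
return.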

I am of the belief that page_cache_next_miss's behavior did not match the
function description.  Matthew suggested using page_cache_next_miss in the
hugetlb code, and I can only guess that he also thought it behaved as
documented.

I do not know the readahead code well enough to know exactly what is
expected.  Readahead certainly performs worse with my proposed changes.
Since we can easily 'fix' the hugetlb code in another way, let's do that
and leave the readahead code alone unless someone more knowledgeable can
provide insight.
-- 
Mike Kravetz


* Re: [PATCH 1/2] Revert "page cache: fix page_cache_next/prev_miss off by one"
  2023-06-21 21:24 Mike Kravetz
  2023-06-21 22:15 ` Sidhartha Kumar
@ 2023-06-21 22:18 ` Andrew Morton
  2023-06-21 23:01   ` Mike Kravetz
  1 sibling, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2023-06-21 22:18 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-kernel, linux-mm, linux-fsdevel, Matthew Wilcox,
	Ackerley Tng, Sidhartha Kumar, Muchun Song, Vishal Annapurve,
	Erdem Aktas, Greg Kroah-Hartman, kernel test robot

On Wed, 21 Jun 2023 14:24:02 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:

> This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9
> 
> The reverted commit fixed up routines primarily used by readahead code
> such that they could also be used by hugetlb.  Unfortunately, this
> caused a performance regression as pointed out by the Closes: tag.
> 
> The hugetlb code which uses page_cache_next_miss will be addressed in
> a subsequent patch.

Often these throughput changes are caused by rather random
alignment/layout changes and the code change itself was innocent.

Do we have an explanation for this regression, or was it a surprise?


* Re: [PATCH 1/2] Revert "page cache: fix page_cache_next/prev_miss off by one"
  2023-06-21 21:24 Mike Kravetz
@ 2023-06-21 22:15 ` Sidhartha Kumar
  2023-06-21 22:18 ` Andrew Morton
  1 sibling, 0 replies; 5+ messages in thread
From: Sidhartha Kumar @ 2023-06-21 22:15 UTC (permalink / raw)
  To: Mike Kravetz, linux-kernel, linux-mm, linux-fsdevel
  Cc: Matthew Wilcox, Ackerley Tng, Muchun Song, Vishal Annapurve,
	Erdem Aktas, Greg Kroah-Hartman, Andrew Morton,
	kernel test robot

On 6/21/23 2:24 PM, Mike Kravetz wrote:
> This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9
> 
> The reverted commit fixed up routines primarily used by readahead code
> such that they could also be used by hugetlb.  Unfortunately, this
> caused a performance regression as pointed out by the Closes: tag.
> 
> The hugetlb code which uses page_cache_next_miss will be addressed in
> a subsequent patch.
> 
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202306211346.1e9ff03e-oliver.sang@intel.com
> Fixes: 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one")
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>   mm/filemap.c | 26 ++++++++++----------------
>   1 file changed, 10 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 3b73101f9f86..9e44a49bbd74 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1728,9 +1728,7 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
>    *
>    * Return: The index of the gap if found, otherwise an index outside the
>    * range specified (in which case 'return - index >= max_scan' will be true).
> - * In the rare case of index wrap-around, 0 will be returned.  0 will also
> - * be returned if index == 0 and there is a gap at the index.  We can not
> - * wrap-around if passed index == 0.
> + * In the rare case of index wrap-around, 0 will be returned.
>    */
>   pgoff_t page_cache_next_miss(struct address_space *mapping,
>   			     pgoff_t index, unsigned long max_scan)
> @@ -1740,13 +1738,12 @@ pgoff_t page_cache_next_miss(struct address_space *mapping,
>   	while (max_scan--) {
>   		void *entry = xas_next(&xas);
>   		if (!entry || xa_is_value(entry))
> -			return xas.xa_index;
> -		if (xas.xa_index == 0 && index != 0)
> -			return xas.xa_index;
> +			break;
> +		if (xas.xa_index == 0)
> +			break;
>   	}
>   
> -	/* No gaps in range and no wrap-around, return index beyond range */
> -	return xas.xa_index + 1;
> +	return xas.xa_index;
>   }
>   EXPORT_SYMBOL(page_cache_next_miss);
>   
> @@ -1767,9 +1764,7 @@ EXPORT_SYMBOL(page_cache_next_miss);
>    *
>    * Return: The index of the gap if found, otherwise an index outside the
>    * range specified (in which case 'index - return >= max_scan' will be true).
> - * In the rare case of wrap-around, ULONG_MAX will be returned.  ULONG_MAX
> - * will also be returned if index == ULONG_MAX and there is a gap at the
> - * index.  We can not wrap-around if passed index == ULONG_MAX.
> + * In the rare case of wrap-around, ULONG_MAX will be returned.
>    */
>   pgoff_t page_cache_prev_miss(struct address_space *mapping,
>   			     pgoff_t index, unsigned long max_scan)
> @@ -1779,13 +1774,12 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
>   	while (max_scan--) {
>   		void *entry = xas_prev(&xas);
>   		if (!entry || xa_is_value(entry))
> -			return xas.xa_index;
> -		if (xas.xa_index == ULONG_MAX && index != ULONG_MAX)
> -			return xas.xa_index;
> +			break;
> +		if (xas.xa_index == ULONG_MAX)
> +			break;
>   	}
>   
> -	/* No gaps in range and no wrap-around, return index beyond range */
> -	return xas.xa_index - 1;
> +	return xas.xa_index;
>   }
>   EXPORT_SYMBOL(page_cache_prev_miss);
>   
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>


* [PATCH 1/2] Revert "page cache: fix page_cache_next/prev_miss off by one"
@ 2023-06-21 21:24 Mike Kravetz
  2023-06-21 22:15 ` Sidhartha Kumar
  2023-06-21 22:18 ` Andrew Morton
  0 siblings, 2 replies; 5+ messages in thread
From: Mike Kravetz @ 2023-06-21 21:24 UTC (permalink / raw)
  To: linux-kernel, linux-mm, linux-fsdevel
  Cc: Matthew Wilcox, Ackerley Tng, Sidhartha Kumar, Muchun Song,
	Vishal Annapurve, Erdem Aktas, Greg Kroah-Hartman, Andrew Morton,
	Mike Kravetz, kernel test robot

This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9

The reverted commit fixed up routines primarily used by readahead code
such that they could also be used by hugetlb.  Unfortunately, this
caused a performance regression as pointed out by the Closes: tag.

The hugetlb code which uses page_cache_next_miss will be addressed in
a subsequent patch.

Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202306211346.1e9ff03e-oliver.sang@intel.com
Fixes: 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/filemap.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 3b73101f9f86..9e44a49bbd74 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1728,9 +1728,7 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
  *
  * Return: The index of the gap if found, otherwise an index outside the
  * range specified (in which case 'return - index >= max_scan' will be true).
- * In the rare case of index wrap-around, 0 will be returned.  0 will also
- * be returned if index == 0 and there is a gap at the index.  We can not
- * wrap-around if passed index == 0.
+ * In the rare case of index wrap-around, 0 will be returned.
  */
 pgoff_t page_cache_next_miss(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan)
@@ -1740,13 +1738,12 @@ pgoff_t page_cache_next_miss(struct address_space *mapping,
 	while (max_scan--) {
 		void *entry = xas_next(&xas);
 		if (!entry || xa_is_value(entry))
-			return xas.xa_index;
-		if (xas.xa_index == 0 && index != 0)
-			return xas.xa_index;
+			break;
+		if (xas.xa_index == 0)
+			break;
 	}
 
-	/* No gaps in range and no wrap-around, return index beyond range */
-	return xas.xa_index + 1;
+	return xas.xa_index;
 }
 EXPORT_SYMBOL(page_cache_next_miss);
 
@@ -1767,9 +1764,7 @@ EXPORT_SYMBOL(page_cache_next_miss);
  *
  * Return: The index of the gap if found, otherwise an index outside the
  * range specified (in which case 'index - return >= max_scan' will be true).
- * In the rare case of wrap-around, ULONG_MAX will be returned.  ULONG_MAX
- * will also be returned if index == ULONG_MAX and there is a gap at the
- * index.  We can not wrap-around if passed index == ULONG_MAX.
+ * In the rare case of wrap-around, ULONG_MAX will be returned.
  */
 pgoff_t page_cache_prev_miss(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan)
@@ -1779,13 +1774,12 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 	while (max_scan--) {
 		void *entry = xas_prev(&xas);
 		if (!entry || xa_is_value(entry))
-			return xas.xa_index;
-		if (xas.xa_index == ULONG_MAX && index != ULONG_MAX)
-			return xas.xa_index;
+			break;
+		if (xas.xa_index == ULONG_MAX)
+			break;
 	}
 
-	/* No gaps in range and no wrap-around, return index beyond range */
-	return xas.xa_index - 1;
+	return xas.xa_index;
 }
 EXPORT_SYMBOL(page_cache_prev_miss);
 
-- 
2.41.0


