linux-mm.kvack.org archive mirror
* [RFC][PATCH 0/2] mm: tweak page cache migration
@ 2020-05-01 21:05 Dave Hansen
  2020-05-01 21:05 ` [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check Dave Hansen
  2020-05-01 21:05 ` [RFC][PATCH 2/2] mm/migrate: annotate possible unnecessary xas_load() Dave Hansen
  0 siblings, 2 replies; 4+ messages in thread
From: Dave Hansen @ 2020-05-01 21:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, npiggin, akpm, willy, yang.shi, linux-mm

First of all, I think this little slice of code is a bit
under-documented.  Perhaps this will help clarify things.

I'm pretty confident the page_count() check in the first
patch is right, which is why I removed it outright.  The
xas_load() check is a bit murkier, so I just left a
warning in for it.

Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org



* [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check
  2020-05-01 21:05 [RFC][PATCH 0/2] mm: tweak page cache migration Dave Hansen
@ 2020-05-01 21:05 ` Dave Hansen
  2020-05-05 20:18   ` Yang Shi
  2020-05-01 21:05 ` [RFC][PATCH 2/2] mm/migrate: annotate possible unnecessary xas_load() Dave Hansen
  1 sibling, 1 reply; 4+ messages in thread
From: Dave Hansen @ 2020-05-01 21:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, npiggin, akpm, willy, yang.shi, linux-mm


From: Dave Hansen <dave.hansen@linux.intel.com>

This is not a bug fix.  It was found by inspection, but I believe
that it is confusing as it stands.

First, page_ref_freeze() is implemented internally with:

	atomic_cmpxchg(&page->_refcount, expected, 0) == expected

The "cmp" part of cmpxchg is making sure that _refcount==expected
which means that there's an implicit check here, equivalent to:

	page_count(page) == expected_count

This appears to have originated in "e286781: mm: speculative page
references", which is pretty ancient.  This check is also somewhat
dangerous to have here because it might lead someone to think that
page_ref_freeze() *doesn't* do its own page_count() checking.

Remove the unnecessary check.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---

 b/mm/migrate.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff -puN mm/migrate.c~remove_extra_page_count_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_page_count_check	2020-05-01 14:00:42.331525924 -0700
+++ b/mm/migrate.c	2020-05-01 14:00:42.336525924 -0700
@@ -425,11 +425,12 @@ int migrate_page_move_mapping(struct add
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
-	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+	if (xas_load(&xas) != page) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
+	/* Freezing will fail if page_count() != expected_count */
 	if (!page_ref_freeze(page, expected_count)) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
_



* [RFC][PATCH 2/2] mm/migrate: annotate possible unnecessary xas_load()
  2020-05-01 21:05 [RFC][PATCH 0/2] mm: tweak page cache migration Dave Hansen
  2020-05-01 21:05 ` [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check Dave Hansen
@ 2020-05-01 21:05 ` Dave Hansen
  1 sibling, 0 replies; 4+ messages in thread
From: Dave Hansen @ 2020-05-01 21:05 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, npiggin, akpm, willy, yang.shi, linux-mm


From: Dave Hansen <dave.hansen@linux.intel.com>

The xas_load() in question also originated in "e286781: mm:
speculative page references" as a radix_tree_deref_slot(), the
only one in the tree at the time.

I'm thoroughly confused why it is needed, though.  A page's
slot in the page cache should be stabilized by lock_page()
being held.

So, first of all, add a VM_WARN_ON_ONCE() to make it totally
clear that the page is expected to be locked here.

Second, even if the page were truncated, we would normally
detect that by checking:

	page_mapping(page) != mapping

So the xas_load() check would seem to imply that we
are looking for some kind of state change that can happen
to the xarray slot for a page, but without changing
page->mapping.

I'm at a loss for what that might be.  Stick a WARN_ON_ONCE()
in there to see if we ever actually hit this.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---

 b/mm/migrate.c |    8 ++++++++
 1 file changed, 8 insertions(+)

diff -puN mm/migrate.c~remove_extra_xas_load_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_xas_load_check	2020-05-01 14:00:43.377525921 -0700
+++ b/mm/migrate.c	2020-05-01 14:00:43.381525921 -0700
@@ -407,6 +407,8 @@ int migrate_page_move_mapping(struct add
 	int dirty;
 	int expected_count = expected_page_refs(mapping, page) + extra_count;
 
+	VM_WARN_ON_ONCE(!PageLocked(page));
+
 	if (!mapping) {
 		/* Anonymous page without mapping */
 		if (page_count(page) != expected_count)
@@ -425,7 +427,13 @@ int migrate_page_move_mapping(struct add
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
+	/*
+	 * 'mapping' was established under the page lock, which
+	 * prevents the xarray slot for 'page' from being changed.
+	 * Thus, xas_load() failure here is unexpected.
+	 */
 	if (xas_load(&xas) != page) {
+		WARN_ON_ONCE(1);
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
_



* Re: [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check
  2020-05-01 21:05 ` [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check Dave Hansen
@ 2020-05-05 20:18   ` Yang Shi
  0 siblings, 0 replies; 4+ messages in thread
From: Yang Shi @ 2020-05-05 20:18 UTC (permalink / raw)
  To: Dave Hansen, linux-kernel; +Cc: npiggin, akpm, willy, linux-mm



On 5/1/20 2:05 PM, Dave Hansen wrote:
> From: Dave Hansen <dave.hansen@linux.intel.com>
>
> This is not a bug fix.  It was found by inspection, but I believe
> that it is confusing as it stands.
>
> First, page_ref_freeze() is implemented internally with:
>
> 	atomic_cmpxchg(&page->_refcount, expected, 0) == expected
>
> The "cmp" part of cmpxchg is making sure that _refcount==expected
> which means that there's an implicit check here, equivalent to:
>
> 	page_count(page) == expected_count
>
> This appears to have originated in "e286781: mm: speculative page
> references", which is pretty ancient.  This check is also somewhat
> dangerous to have here because it might lead someone to think that
> page_ref_freeze() *doesn't* do its own page_count() checking.
>
> Remove the unnecessary check.

Makes sense to me.

Acked-by: Yang Shi <yang.shi@linux.alibaba.com>

>
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Yang Shi <yang.shi@linux.alibaba.com>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>
>   b/mm/migrate.c |    3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff -puN mm/migrate.c~remove_extra_page_count_check mm/migrate.c
> --- a/mm/migrate.c~remove_extra_page_count_check	2020-05-01 14:00:42.331525924 -0700
> +++ b/mm/migrate.c	2020-05-01 14:00:42.336525924 -0700
> @@ -425,11 +425,12 @@ int migrate_page_move_mapping(struct add
>   	newzone = page_zone(newpage);
>   
>   	xas_lock_irq(&xas);
> -	if (page_count(page) != expected_count || xas_load(&xas) != page) {
> +	if (xas_load(&xas) != page) {
>   		xas_unlock_irq(&xas);
>   		return -EAGAIN;
>   	}
>   
> +	/* Freezing will fail if page_count() != expected_count */
>   	if (!page_ref_freeze(page, expected_count)) {
>   		xas_unlock_irq(&xas);
>   		return -EAGAIN;
> _



