linux-mm.kvack.org archive mirror
* [PATCH v3] Resolve LRU page-pinning issue for file-backed pages
@ 2021-01-13 21:17 Chris Goldsworthy
  2021-01-13 21:17 ` [PATCH v3] fs/buffer.c: Revoke LRU when trying to drop buffers Chris Goldsworthy
  0 siblings, 1 reply; 4+ messages in thread
From: Chris Goldsworthy @ 2021-01-13 21:17 UTC (permalink / raw)
  To: Alexander Viro
  Cc: Matthew Wilcox, linux-mm, linux-fsdevel, linux-kernel, Chris Goldsworthy

It is possible for file-backed pages to end up in a contiguous memory area
(CMA), such that the relevant page must be migrated using the .migratepage()
callback when its backing physical memory is selected for use in a CMA
allocation (through cma_alloc()).  However, if the set of address space
operations (AOPs) for a file-backed page lacks a migratepage() callback,
fallback_migrate_page() is used instead, which through try_to_release_page()
calls try_to_free_buffers() (either directly, or via a releasepage() callback
that itself ends up in try_to_free_buffers()).  try_to_free_buffers() in turn
calls drop_buffers().

drop_buffers() itself can fail because a buffer_head associated with the page
is busy.  However, that buffer_head may simply be sitting on a per-CPU LRU
list; in that case we can try removing it from that list, so that the page can
be released.  Do this.
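
For reference, the fallback migration path looks roughly as follows (a
paraphrased sketch of mm/migrate.c and mm/filemap.c around this kernel
version, with error handling and unrelated details elided):

static int fallback_migrate_page(struct address_space *mapping,
		struct page *newpage, struct page *page, enum migrate_mode mode)
{
	if (PageDirty(page))
		return writeout(mapping, page);

	/*
	 * Buffers may be managed in a filesystem-specific way; we must
	 * have no buffers, or be able to drop them.
	 */
	if (page_has_private(page) &&
	    !try_to_release_page(page, GFP_KERNEL))
		return mode == MIGRATE_SYNC ? -EAGAIN : -EBUSY;

	return migrate_page(mapping, newpage, page, mode);
}

int try_to_release_page(struct page *page, gfp_t gfp_mask)
{
	struct address_space * const mapping = page->mapping;

	if (PageWriteback(page))
		return 0;

	/* for buffer_head-backed pages, releasepage() generally ends up
	 * in try_to_free_buffers() as well */
	if (mapping && mapping->a_ops->releasepage)
		return mapping->a_ops->releasepage(page, gfp_mask);
	return try_to_free_buffers(page);	/* -> drop_buffers() */
}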

v1: https://lore.kernel.org/lkml/cover.1606194703.git.cgoldswo@codeaurora.org/T/#m3a44b5745054206665455625ccaf27379df8a190
Original version of the patch (with updates made to account for changes in
on_each_cpu_cond()).

v2: https://lore.kernel.org/lkml/cover.1609829465.git.cgoldswo@codeaurora.org/
Follow Matthew Wilcox's suggestion of reducing the number of calls to
on_each_cpu_cond(), by iterating over a page's busy buffer_heads inside of
on_each_cpu_cond(). To copy from his e-mail, we go from:

for_each_buffer
	for_each_cpu
		for_each_lru_entry

to:

for_each_cpu
	for_each_buffer
		for_each_lru_entry

This is done using xarrays, which I found to be the cleanest data structure to
use, though a pre-allocated array of page_size(page) / bh->b_size elements might
be more performant.
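
For illustration only, that pre-allocated array alternative could look
something like the hypothetical sketch below (MAX_BUF_PER_PAGE bounds the
number of buffer_heads a page can carry, so a fixed array avoids the xarray
allocations at the cost of some stack space):

	/* hypothetical replacement for the xarray used in this patch */
	struct buffer_head *busy_bhs[MAX_BUF_PER_PAGE];
	int nr_busy = 0;

	bh = head;
	do {
		if (buffer_busy(bh))
			busy_bhs[nr_busy++] = bh;
		bh = bh->b_this_page;
	} while (bh != head);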

v3: Replace xas_for_each() with xa_for_each() to get the locking right:
xa_for_each() handles the RCU locking internally, whereas xas_for_each()
requires the caller to hold the RCU read lock (or the xa_lock).
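
For illustration, the difference in locking responsibility between the two
iterators is roughly the following (a sketch, not part of the patch; index
and bh are assumed to be declared as in the patch):

	/*
	 * xa_for_each() takes and drops the RCU read lock internally
	 * around each lookup, so the caller needs no extra locking:
	 */
	xa_for_each(&busy_bhs, index, bh) {
		/* examine bh */
	}

	/*
	 * xas_for_each() requires the caller to hold the RCU read lock
	 * (or the xa_lock) across the whole walk:
	 */
	XA_STATE(xas, &busy_bhs, 0);

	rcu_read_lock();
	xas_for_each(&xas, bh, ULONG_MAX) {
		/* examine bh */
	}
	rcu_read_unlock();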

Laura Abbott (1):
  fs/buffer.c: Revoke LRU when trying to drop buffers

 fs/buffer.c | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 76 insertions(+), 5 deletions(-)

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project




* [PATCH v3] fs/buffer.c: Revoke LRU when trying to drop buffers
  2021-01-13 21:17 [PATCH v3] Resolve LRU page-pinning issue for file-backed pages Chris Goldsworthy
@ 2021-01-13 21:17 ` Chris Goldsworthy
  2021-03-15 23:41   ` Andrew Morton
  0 siblings, 1 reply; 4+ messages in thread
From: Chris Goldsworthy @ 2021-01-13 21:17 UTC (permalink / raw)
  To: Alexander Viro
  Cc: Matthew Wilcox, linux-mm, linux-fsdevel, linux-kernel,
	Laura Abbott, Chris Goldsworthy

From: Laura Abbott <lauraa@codeaurora.org>

When a buffer is added to the LRU list, a reference is taken which is
not dropped until the buffer is evicted from the LRU list. This is the
correct behavior, however this LRU reference will prevent the buffer
from being dropped. This means that the buffer can't actually be dropped
until it is selected for eviction. There's no bound on the time spent
on the LRU list, which means that the buffer may be undroppable for
very long periods of time. Given that migration involves dropping
buffers, the associated page is now unmigratable for long periods of
time as well. CMA relies on being able to migrate a specific range
of pages, so these types of failures make CMA significantly
less reliable, especially under high filesystem usage.

Rather than waiting for the LRU algorithm to eventually kick out
the buffer, explicitly remove the buffer from the LRU list when trying
to drop it. There is still the possibility that the buffer
could be added back on the list, but that indicates the buffer is
still in use and would probably have other 'in use' indicators to
prevent dropping.
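
For context, the per-CPU LRU that takes this reference looks roughly like the
following (paraphrased from fs/buffer.c, with details elided):

#define BH_LRU_SIZE	16

struct bh_lru {
	struct buffer_head *bhs[BH_LRU_SIZE];
};

static DEFINE_PER_CPU(struct bh_lru, bh_lrus) = {{ NULL }};

static void bh_lru_install(struct buffer_head *bh)
{
	...
	/*
	 * This reference keeps buffer_busy() true until the entry is
	 * eventually evicted from the per-CPU array (via brelse()).
	 */
	get_bh(bh);
	...
}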

Note: a bug reported by "kernel test robot" led to a switch from
using xas_for_each() to xa_for_each().

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Cc: Matthew Wilcox <willy@infradead.org>
Reported-by: kernel test robot <oliver.sang@intel.com>
---
 fs/buffer.c | 81 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 76 insertions(+), 5 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 96c7604..d2d1237 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -48,6 +48,7 @@
 #include <linux/sched/mm.h>
 #include <trace/events/block.h>
 #include <linux/fscrypt.h>
+#include <linux/xarray.h>
 
 #include "internal.h"
 
@@ -1471,12 +1472,59 @@ static bool has_bh_in_lru(int cpu, void *dummy)
 	return false;
 }
 
+static void __evict_bhs_lru(void *arg)
+{
+	struct bh_lru *b = &get_cpu_var(bh_lrus);
+	struct xarray *busy_bhs = arg;
+	struct buffer_head *bh;
+	unsigned long i, xarray_index;
+
+	xa_for_each(busy_bhs, xarray_index, bh) {
+		for (i = 0; i < BH_LRU_SIZE; i++) {
+			if (b->bhs[i] == bh) {
+				brelse(b->bhs[i]);
+				b->bhs[i] = NULL;
+				break;
+			}
+		}
+
+		bh = bh->b_this_page;
+	}
+
+	put_cpu_var(bh_lrus);
+}
+
+static bool page_has_bhs_in_lru(int cpu, void *arg)
+{
+	struct bh_lru *b = per_cpu_ptr(&bh_lrus, cpu);
+	struct xarray *busy_bhs = arg;
+	struct buffer_head *bh;
+	unsigned long i, xarray_index;
+
+	xa_for_each(busy_bhs, xarray_index, bh) {
+		for (i = 0; i < BH_LRU_SIZE; i++) {
+			if (b->bhs[i] == bh)
+				return true;
+		}
+
+		bh = bh->b_this_page;
+	}
+
+	return false;
+}
+
 void invalidate_bh_lrus(void)
 {
 	on_each_cpu_cond(has_bh_in_lru, invalidate_bh_lru, NULL, 1);
 }
 EXPORT_SYMBOL_GPL(invalidate_bh_lrus);
 
+static void evict_bh_lrus(struct xarray *busy_bhs)
+{
+	on_each_cpu_cond(page_has_bhs_in_lru, __evict_bhs_lru,
+			 busy_bhs, 1);
+}
+
 void set_bh_page(struct buffer_head *bh,
 		struct page *page, unsigned long offset)
 {
@@ -3242,14 +3290,36 @@ drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
 {
 	struct buffer_head *head = page_buffers(page);
 	struct buffer_head *bh;
+	struct xarray busy_bhs;
+	int bh_count = 0;
+	int xa_ret, ret = 0;
+
+	xa_init(&busy_bhs);
 
 	bh = head;
 	do {
-		if (buffer_busy(bh))
-			goto failed;
+		if (buffer_busy(bh)) {
+			xa_ret = xa_err(xa_store(&busy_bhs, bh_count++,
+						 bh, GFP_ATOMIC));
+			if (xa_ret)
+				goto out;
+		}
 		bh = bh->b_this_page;
 	} while (bh != head);
 
+	if (bh_count) {
+		/*
+		 * Check if the busy failure was due to an outstanding
+		 * LRU reference
+		 */
+		evict_bh_lrus(&busy_bhs);
+		do {
+			if (buffer_busy(bh))
+				goto out;
+		} while ((bh = bh->b_this_page) != head);
+	}
+
+	ret = 1;
 	do {
 		struct buffer_head *next = bh->b_this_page;
 
@@ -3259,9 +3329,10 @@ drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
 	} while (bh != head);
 	*buffers_to_free = head;
 	detach_page_private(page);
-	return 1;
-failed:
-	return 0;
+out:
+	xa_destroy(&busy_bhs);
+
+	return ret;
 }
 
 int try_to_free_buffers(struct page *page)
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project




* Re: [PATCH v3] fs/buffer.c: Revoke LRU when trying to drop buffers
  2021-01-13 21:17 ` [PATCH v3] fs/buffer.c: Revoke LRU when trying to drop buffers Chris Goldsworthy
@ 2021-03-15 23:41   ` Andrew Morton
  2021-03-15 23:46     ` Matthew Wilcox
  0 siblings, 1 reply; 4+ messages in thread
From: Andrew Morton @ 2021-03-15 23:41 UTC (permalink / raw)
  To: Chris Goldsworthy
  Cc: Alexander Viro, Matthew Wilcox, linux-mm, linux-fsdevel,
	linux-kernel, Laura Abbott

On Wed, 13 Jan 2021 13:17:30 -0800 Chris Goldsworthy <cgoldswo@codeaurora.org> wrote:

> From: Laura Abbott <lauraa@codeaurora.org>
> 
> When a buffer is added to the LRU list, a reference is taken which is
> not dropped until the buffer is evicted from the LRU list. This is the
> correct behavior, however this LRU reference will prevent the buffer
> from being dropped. This means that the buffer can't actually be dropped
> until it is selected for eviction. There's no bound on the time spent
> on the LRU list, which means that the buffer may be undroppable for
> very long periods of time. Given that migration involves dropping
> buffers, the associated page is now unmigratable for long periods of
> time as well. CMA relies on being able to migrate a specific range
> of pages, so these types of failures make CMA significantly
> less reliable, especially under high filesystem usage.
> 
> Rather than waiting for the LRU algorithm to eventually kick out
> the buffer, explicitly remove the buffer from the LRU list when trying
> to drop it. There is still the possibility that the buffer
> could be added back on the list, but that indicates the buffer is
> still in use and would probably have other 'in use' indicators to
> prevent dropping.
> 
> Note: a bug reported by "kernel test robot" led to a switch from
> using xas_for_each() to xa_for_each().

(hm, why isn't drop_buffers() static to fs/buffer.c??)

It looks like this patch turns drop_buffers() into a very expensive
operation.  And that expensive operation occurs under the
address_space-wide private_lock, which is more ouch.

How carefully has this been tested for performance?  In pathological
circumstances (which are always someone's common case :()


Just thinking out loud...

If a buffer_head* is sitting in one or more of the LRUs, what is
stopping us from stripping it from the page anyway?  Then
try_to_free_buffers() can mark the bh as buffer_dead(), declare success
and leave the bh sitting in the LRU, with the LRU as the only reference
to that buffer.  Teach lookup_bh_lru() to skip over buffer_dead()
buffers and our now-dead buffer will eventually reach the tail of the
lru and get freed for real.
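
A rough sketch of that idea, for illustration only (buffer_dead() does not
exist; BH_Dead and the helpers below are hypothetical and would have to be
added alongside the existing buffer state bits):

	/* hypothetical new buffer state, alongside BH_Uptodate and friends */
	BUFFER_FNS(Dead, dead)

	/*
	 * In drop_buffers(): rather than failing when the only remaining
	 * references are LRU references, detach the bh from the page and
	 * mark it dead; the LRU then holds the last reference.
	 */
	set_buffer_dead(bh);

	/*
	 * In lookup_bh_lru(): treat dead entries as misses so they age out
	 * of the per-CPU array and are freed for real.
	 */
	if (bh && bh->b_blocknr == block && bh->b_bdev == bdev &&
	    bh->b_size == size && !buffer_dead(bh)) {
		/* existing hit path */
	}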




* Re: [PATCH v3] fs/buffer.c: Revoke LRU when trying to drop buffers
  2021-03-15 23:41   ` Andrew Morton
@ 2021-03-15 23:46     ` Matthew Wilcox
  0 siblings, 0 replies; 4+ messages in thread
From: Matthew Wilcox @ 2021-03-15 23:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Goldsworthy, Alexander Viro, linux-mm, linux-fsdevel,
	linux-kernel, Laura Abbott

On Mon, Mar 15, 2021 at 04:41:38PM -0700, Andrew Morton wrote:
> > When a buffer is added to the LRU list, a reference is taken which is
> > not dropped until the buffer is evicted from the LRU list. This is the
> > correct behavior, however this LRU reference will prevent the buffer
> > from being dropped. This means that the buffer can't actually be dropped
> > until it is selected for eviction. There's no bound on the time spent
> > on the LRU list, which means that the buffer may be undroppable for
> > very long periods of time. Given that migration involves dropping
> > buffers, the associated page is now unmigratable for long periods of
> > time as well. CMA relies on being able to migrate a specific range
> > of pages, so these types of failures make CMA significantly
> > less reliable, especially under high filesystem usage.
>
> It looks like this patch turns drop_buffers() into a very expensive
> operation.  And that expensive operation occurs under the
> address_space-wide private_lock, which is more ouch.

This patch set is obsoleted by Minchan Kim's more recent patch set.



end of thread

Thread overview: 4+ messages
2021-01-13 21:17 [PATCH v3] Resolve LRU page-pinning issue for file-backed pages Chris Goldsworthy
2021-01-13 21:17 ` [PATCH v3] fs/buffer.c: Revoke LRU when trying to drop buffers Chris Goldsworthy
2021-03-15 23:41   ` Andrew Morton
2021-03-15 23:46     ` Matthew Wilcox
