linux-kernel.vger.kernel.org archive mirror
* [PATCH] mm/shmem: make find_get_pages_range() work for huge page
@ 2019-01-10  3:08 Yu Zhao
  2019-01-10 11:43 ` William Kucharski
  0 siblings, 1 reply; 4+ messages in thread
From: Yu Zhao @ 2019-01-10  3:08 UTC
  To: Matthew Wilcox, Andrew Morton, Amir Goldstein, Dave Chinner,
	Darrick J. Wong
  Cc: Johannes Weiner, Souptick Joarder, Hugh Dickins,
	Kirill A. Shutemov, linux-mm, linux-kernel, Yu Zhao

find_get_pages_range() and find_get_pages_range_tag() already
correctly increment the reference count on the head page when they
see a compound page, but they may still use the page index from a
tail page. The page index of a tail page is always zero, so these
functions don't work on huge shmem. This hasn't been a problem
because, AFAIK, nobody calls these functions on (huge) shmem. Fix
them anyway, just in case.
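
To illustrate the breakage, here is a sketch of a caller in the style
of pagevec_lookup_range() (not code from the tree; assume mapping and
end are in scope):

	pgoff_t index = 0;
	struct page *pages[15];
	unsigned nr;

	while ((nr = find_get_pages_range(mapping, &index, end,
					  ARRAY_SIZE(pages), pages))) {
		/* process pages[0..nr-1], then resume from 'index' */
	}

If a batch fills up on a tail page of a huge shmem page, the old code
sets index to page->index + 1, which is 1 for a tail page, so the
next call rescans from the start of the range instead of continuing
past the pages just returned. xas.xa_index holds the offset the
iteration cursor is actually at, so it is correct for head and tail
pages alike.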

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/filemap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 81adec8ee02c..cf5fd773314a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1704,7 +1704,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 
 		pages[ret] = page;
 		if (++ret == nr_pages) {
-			*start = page->index + 1;
+			*start = xas.xa_index + 1;
 			goto out;
 		}
 		continue;
@@ -1850,7 +1850,7 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
 
 		pages[ret] = page;
 		if (++ret == nr_pages) {
-			*index = page->index + 1;
+			*index = xas.xa_index + 1;
 			goto out;
 		}
 		continue;
-- 
2.20.1.97.g81188d93c3-goog


* Re: [PATCH] mm/shmem: make find_get_pages_range() work for huge page
  2019-01-10  3:08 [PATCH] mm/shmem: make find_get_pages_range() work for huge page Yu Zhao
@ 2019-01-10 11:43 ` William Kucharski
  2019-02-12 23:57   ` Yu Zhao
  0 siblings, 1 reply; 4+ messages in thread
From: William Kucharski @ 2019-01-10 11:43 UTC
  To: Yu Zhao
  Cc: Matthew Wilcox, Andrew Morton, Amir Goldstein, Dave Chinner,
	Darrick J. Wong, Johannes Weiner, Souptick Joarder,
	Hugh Dickins, Kirill A. Shutemov, linux-mm, linux-kernel



> On Jan 9, 2019, at 8:08 PM, Yu Zhao <yuzhao@google.com> wrote:
> 
> find_get_pages_range() and find_get_pages_range_tag() already
> correctly increment the reference count on the head page when they
> see a compound page, but they may still use the page index from a
> tail page. The page index of a tail page is always zero, so these
> functions don't work on huge shmem. This hasn't been a problem
> because, AFAIK, nobody calls these functions on (huge) shmem. Fix
> them anyway, just in case.
> 
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
> mm/filemap.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 81adec8ee02c..cf5fd773314a 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1704,7 +1704,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
> 
> 		pages[ret] = page;
> 		if (++ret == nr_pages) {
> -			*start = page->index + 1;
> +			*start = xas.xa_index + 1;
> 			goto out;
> 		}
> 		continue;
> @@ -1850,7 +1850,7 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
> 
> 		pages[ret] = page;
> 		if (++ret == nr_pages) {
> -			*index = page->index + 1;
> +			*index = xas.xa_index + 1;
> 			goto out;
> 		}
> 		continue;
> -- 

While this works, it seems like it would be more readable for future
maintainers if these routines instead squirreled away the value for
*start/*index when ret is zero on the first iteration through the
loop.
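
Something like this is one loose reading of that idea (a rough sketch
only; resume_index is a hypothetical local, and this tracks the
resume value as the loop goes rather than reading xas on the exit
path):

	pgoff_t resume_index = *start;

	xas_for_each(&xas, page, end) {
		/* ... existing retry/refcount/reload checks ... */
		resume_index = xas.xa_index + 1;
		pages[ret] = page;
		if (++ret == nr_pages) {
			*start = resume_index;
			goto out;
		}
	}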

Though xa_index is designed to hold the first index of the entry, it
seems inappropriate to have these routines dereference elements of
xas directly; I guess it depends on how opaque we want to keep xas
and struct xa_state.
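
To make "opaque" concrete, that would mean an accessor along these
lines (hypothetical; no such helper exists in the XArray API today):

	/* Hypothetical accessor, not part of the XArray API. */
	static inline unsigned long xas_index(const struct xa_state *xas)
	{
		return xas->xa_index;
	}

so that callers never reach into struct xa_state themselves.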

Does anyone else have a feeling one way or the other? I could be persuaded either way.

* Re: [PATCH] mm/shmem: make find_get_pages_range() work for huge page
  2019-01-10 11:43 ` William Kucharski
@ 2019-02-12 23:57   ` Yu Zhao
  2019-02-14 11:18     ` William Kucharski
  0 siblings, 1 reply; 4+ messages in thread
From: Yu Zhao @ 2019-02-12 23:57 UTC
  To: William Kucharski
  Cc: Matthew Wilcox, Andrew Morton, Amir Goldstein, Dave Chinner,
	Darrick J. Wong, Johannes Weiner, Souptick Joarder,
	Hugh Dickins, Kirill A. Shutemov, linux-mm, linux-kernel

On Thu, Jan 10, 2019 at 04:43:57AM -0700, William Kucharski wrote:
> 
> 
> > On Jan 9, 2019, at 8:08 PM, Yu Zhao <yuzhao@google.com> wrote:
> > 
> > find_get_pages_range() and find_get_pages_range_tag() already
> > correctly increment the reference count on the head page when they
> > see a compound page, but they may still use the page index from a
> > tail page. The page index of a tail page is always zero, so these
> > functions don't work on huge shmem. This hasn't been a problem
> > because, AFAIK, nobody calls these functions on (huge) shmem. Fix
> > them anyway, just in case.
> > 
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> > mm/filemap.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 81adec8ee02c..cf5fd773314a 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -1704,7 +1704,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
> > 
> > 		pages[ret] = page;
> > 		if (++ret == nr_pages) {
> > -			*start = page->index + 1;
> > +			*start = xas.xa_index + 1;
> > 			goto out;
> > 		}
> > 		continue;
> > @@ -1850,7 +1850,7 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
> > 
> > 		pages[ret] = page;
> > 		if (++ret == nr_pages) {
> > -			*index = page->index + 1;
> > +			*index = xas.xa_index + 1;
> > 			goto out;
> > 		}
> > 		continue;
> > -- 
> 
> While this works, it seems like it would be more readable for future
> maintainers if these routines instead squirreled away the value for
> *start/*index when ret is zero on the first iteration through the
> loop.

I'm not sure how this could be more readable, and it sounds
independent of the problem this patch fixes.

> Though xa_index is designed to hold the first index of the entry, it
> seems inappropriate to have these routines dereference elements of
> xas directly; I guess it depends on how opaque we want to keep xas
> and struct xa_state.

It seems to me it's perfectly fine to use the fields of xas
directly; it's already done this way throughout the file.
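
For example, find_get_entries() in this same file already records the
iteration index like this (quoting from memory, so treat the exact
context as illustrative):

	indices[ret] = xas.xa_index;

The patch just follows that existing convention.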

> Does anyone else have a feeling one way or the other? I could be persuaded either way.

* Re: [PATCH] mm/shmem: make find_get_pages_range() work for huge page
  2019-02-12 23:57   ` Yu Zhao
@ 2019-02-14 11:18     ` William Kucharski
  0 siblings, 0 replies; 4+ messages in thread
From: William Kucharski @ 2019-02-14 11:18 UTC
  To: Yu Zhao
  Cc: Matthew Wilcox, Andrew Morton, Amir Goldstein, Dave Chinner,
	Darrick J. Wong, Johannes Weiner, Souptick Joarder,
	Hugh Dickins, Kirill A. Shutemov, linux-mm, linux-kernel



> On Feb 12, 2019, at 4:57 PM, Yu Zhao <yuzhao@google.com> wrote:
> 
> It seems to me it's perfectly fine to use the fields of xas
> directly; it's already done this way throughout the file.

Fair enough.

Reviewed-by: William Kucharski <william.kucharski@oracle.com>

