linux-mm.kvack.org archive mirror
* [PATCH 0/3] Large pages in the page cache
@ 2019-09-05 18:23 Matthew Wilcox
  2019-09-05 18:23 ` [PATCH 1/3] mm: Add __page_cache_alloc_order Matthew Wilcox
                   ` (3 more replies)
  0 siblings, 4 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-05 18:23 UTC (permalink / raw)
  To: linux-mm, linux-fsdevel
  Cc: Matthew Wilcox (Oracle),
	Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Michal Hocko's reaction to Bill's implementation of filemap_huge_fault
was "convoluted so much I cannot wrap my head around it".  This spurred me
to finish up something I'd been working on in the background, prompted by
Kirill's desire to be able to allocate large page cache pages in paths
other than the fault handler.

This is in no sense complete as there's nothing in this patch series
which actually uses FGP_PMD.  It should remove a lot of the complexity
from a future filemap_huge_fault() implementation and make it possible
to allocate larger pages in the read/write paths in future.
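
For illustration, a future caller might end up looking roughly like the
sketch below (nothing in this series does this yet; the flag combination,
the alignment and the gfp choice are all assumptions):

	struct page *page;

	page = pagecache_get_page(mapping, round_down(index, HPAGE_PMD_NR),
				  FGP_CREAT | FGP_LOCK | FGP_PMD,
				  mapping_gfp_mask(mapping));
	if (!page)
		return -ENOMEM;	/* or fall back to order-0 pages as today */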

Matthew Wilcox (Oracle) (3):
  mm: Add __page_cache_alloc_order
  mm: Allow large pages to be added to the page cache
  mm: Allow find_get_page to be used for large pages

 include/linux/pagemap.h |  23 ++++++-
 mm/filemap.c            | 132 +++++++++++++++++++++++++++++++++-------
 2 files changed, 130 insertions(+), 25 deletions(-)

-- 
2.23.0.rc1




* [PATCH 1/3] mm: Add __page_cache_alloc_order
  2019-09-05 18:23 [PATCH 0/3] Large pages in the page cache Matthew Wilcox
@ 2019-09-05 18:23 ` Matthew Wilcox
  2019-09-05 18:58   ` Song Liu
  2019-09-05 18:23 ` [PATCH 2/3] mm: Allow large pages to be added to the page cache Matthew Wilcox
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-05 18:23 UTC (permalink / raw)
  To: linux-mm, linux-fsdevel
  Cc: Matthew Wilcox (Oracle),
	Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This new function allows page cache pages to be allocated that are
larger than an order-0 page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 14 +++++++++++---
 mm/filemap.c            | 11 +++++++----
 2 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 103205494ea0..d2147215d415 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -208,14 +208,22 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 }
 
 #ifdef CONFIG_NUMA
-extern struct page *__page_cache_alloc(gfp_t gfp);
+extern struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order);
 #else
-static inline struct page *__page_cache_alloc(gfp_t gfp)
+static inline
+struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
 {
-	return alloc_pages(gfp, 0);
+	if (order > 0)
+		gfp |= __GFP_COMP;
+	return alloc_pages(gfp, order);
 }
 #endif
 
+static inline struct page *__page_cache_alloc(gfp_t gfp)
+{
+	return __page_cache_alloc_order(gfp, 0);
+}
+
 static inline struct page *page_cache_alloc(struct address_space *x)
 {
 	return __page_cache_alloc(mapping_gfp_mask(x));
diff --git a/mm/filemap.c b/mm/filemap.c
index 05a5aa82cd32..041c77c4ca56 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -957,24 +957,27 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
 
 #ifdef CONFIG_NUMA
-struct page *__page_cache_alloc(gfp_t gfp)
+struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
 {
 	int n;
 	struct page *page;
 
+	if (order > 0)
+		gfp |= __GFP_COMP;
+
 	if (cpuset_do_page_mem_spread()) {
 		unsigned int cpuset_mems_cookie;
 		do {
 			cpuset_mems_cookie = read_mems_allowed_begin();
 			n = cpuset_mem_spread_node();
-			page = __alloc_pages_node(n, gfp, 0);
+			page = __alloc_pages_node(n, gfp, order);
 		} while (!page && read_mems_allowed_retry(cpuset_mems_cookie));
 
 		return page;
 	}
-	return alloc_pages(gfp, 0);
+	return alloc_pages(gfp, order);
 }
-EXPORT_SYMBOL(__page_cache_alloc);
+EXPORT_SYMBOL(__page_cache_alloc_order);
 #endif
 
 /*
-- 
2.23.0.rc1




* [PATCH 2/3] mm: Allow large pages to be added to the page cache
  2019-09-05 18:23 [PATCH 0/3] Large pages in the page cache Matthew Wilcox
  2019-09-05 18:23 ` [PATCH 1/3] mm: Add __page_cache_alloc_order Matthew Wilcox
@ 2019-09-05 18:23 ` Matthew Wilcox
  2019-09-05 18:28   ` Matthew Wilcox
                     ` (2 more replies)
  2019-09-05 18:23 ` [PATCH 3/3] mm: Allow find_get_page to be used for large pages Matthew Wilcox
  2019-09-06 15:59 ` [PATCH 4/3] Prepare transhuge pages properly Matthew Wilcox
  3 siblings, 3 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-05 18:23 UTC (permalink / raw)
  To: linux-mm, linux-fsdevel
  Cc: Matthew Wilcox (Oracle),
	Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

We return -EEXIST if there are any non-shadow entries in the page
cache in the range covered by the large page.  If there are multiple
shadow entries in the range, we set *shadowp to one of them (currently
the one at the highest index).  If that turns out to be the wrong
answer, we can implement something more complex.  This is mostly
modelled after the equivalent function in the shmem code.
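
(For illustration only; nothing in this series adds large pages yet, the
allocator comes from patch 1/3, and @index is assumed to be aligned to the
page's order.  The intended caller side is roughly:)

	page = __page_cache_alloc_order(gfp, HPAGE_PMD_ORDER);
	if (page && add_to_page_cache_lru(page, mapping, index, gfp)) {
		/* e.g. -EEXIST: a non-shadow entry already covers the range */
		put_page(page);
		page = NULL;
	}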

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 39 ++++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 041c77c4ca56..ae3c0a70a8e9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -850,6 +850,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
+	unsigned int nr = 1;
 	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -861,31 +862,47 @@ static int __add_to_page_cache_locked(struct page *page,
 					      gfp_mask, &memcg, false);
 		if (error)
 			return error;
+		xas_set_order(&xas, offset, compound_order(page));
+		nr = compound_nr(page);
 	}
 
-	get_page(page);
+	page_ref_add(page, nr);
 	page->mapping = mapping;
 	page->index = offset;
 
 	do {
+		unsigned long exceptional = 0;
+		unsigned int i = 0;
+
 		xas_lock_irq(&xas);
-		old = xas_load(&xas);
-		if (old && !xa_is_value(old))
+		xas_for_each_conflict(&xas, old) {
+			if (!xa_is_value(old))
+				break;
+			exceptional++;
+			if (shadowp)
+				*shadowp = old;
+		}
+		if (old) {
 			xas_set_err(&xas, -EEXIST);
-		xas_store(&xas, page);
+			break;
+		}
+		xas_create_range(&xas);
 		if (xas_error(&xas))
 			goto unlock;
 
-		if (xa_is_value(old)) {
-			mapping->nrexceptional--;
-			if (shadowp)
-				*shadowp = old;
+next:
+		xas_store(&xas, page);
+		if (++i < nr) {
+			xas_next(&xas);
+			goto next;
 		}
-		mapping->nrpages++;
+		mapping->nrexceptional -= exceptional;
+		mapping->nrpages += nr;
 
 		/* hugetlb pages do not participate in page cache accounting */
 		if (!huge)
-			__inc_node_page_state(page, NR_FILE_PAGES);
+			__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES,
+						nr);
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
@@ -902,7 +919,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	/* Leave page->index set: truncation relies upon it */
 	if (!huge)
 		mem_cgroup_cancel_charge(page, memcg, false);
-	put_page(page);
+	page_ref_sub(page, nr);
 	return xas_error(&xas);
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
-- 
2.23.0.rc1




* [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-05 18:23 [PATCH 0/3] Large pages in the page cache Matthew Wilcox
  2019-09-05 18:23 ` [PATCH 1/3] mm: Add __page_cache_alloc_order Matthew Wilcox
  2019-09-05 18:23 ` [PATCH 2/3] mm: Allow large pages to be added to the page cache Matthew Wilcox
@ 2019-09-05 18:23 ` Matthew Wilcox
  2019-09-05 21:41   ` kbuild test robot
                     ` (2 more replies)
  2019-09-06 15:59 ` [PATCH 4/3] Prepare transhuge pages properly Matthew Wilcox
  3 siblings, 3 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-05 18:23 UTC (permalink / raw)
  To: linux-mm, linux-fsdevel
  Cc: Matthew Wilcox (Oracle),
	Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Add FGP_PMD to indicate that we're trying to find-or-create a page that
is at least PMD_ORDER in size.  The internal 'conflict' entry usage
is modelled after that in DAX, but the implementations are different
due to DAX using multi-order entries and the page cache using multiple
order-0 entries.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h |  9 +++++
 mm/filemap.c            | 82 +++++++++++++++++++++++++++++++++++++----
 2 files changed, 84 insertions(+), 7 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d2147215d415..72101811524c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -248,6 +248,15 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 #define FGP_NOFS		0x00000010
 #define FGP_NOWAIT		0x00000020
 #define FGP_FOR_MMAP		0x00000040
+/*
+ * If you add more flags, increment FGP_ORDER_SHIFT (no further than 25).
+ * Do not insert flags above the FGP order bits.
+ */
+#define FGP_ORDER_SHIFT		7
+#define FGP_PMD			((PMD_SHIFT - PAGE_SHIFT) << FGP_ORDER_SHIFT)
+#define FGP_PUD			((PUD_SHIFT - PAGE_SHIFT) << FGP_ORDER_SHIFT)
+
+#define fgp_order(fgp)		((fgp) >> FGP_ORDER_SHIFT)
 
 struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
 		int fgp_flags, gfp_t cache_gfp_mask);
diff --git a/mm/filemap.c b/mm/filemap.c
index ae3c0a70a8e9..904dfabbea52 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1572,7 +1572,71 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 
 	return page;
 }
-EXPORT_SYMBOL(find_get_entry);
+
+static bool pagecache_is_conflict(struct page *page)
+{
+	return page == XA_RETRY_ENTRY;
+}
+
+/**
+ * __find_get_page - Find and get a page cache entry.
+ * @mapping: The address_space to search.
+ * @offset: The page cache index.
+ * @order: The minimum order of the entry to return.
+ *
+ * Looks up the page cache entries at @mapping between @offset and
+ * @offset + 2^@order.  If there is a page cache page, it is returned with
+ * an increased refcount unless it is smaller than @order.
+ *
+ * If the slot holds a shadow entry of a previously evicted page, or a
+ * swap entry from shmem/tmpfs, it is returned.
+ *
+ * Return: the found page, a value indicating a conflicting page or %NULL if
+ * there are no pages in this range.
+ */
+static struct page *__find_get_page(struct address_space *mapping,
+		unsigned long offset, unsigned int order)
+{
+	XA_STATE(xas, &mapping->i_pages, offset);
+	struct page *page;
+
+	rcu_read_lock();
+repeat:
+	xas_reset(&xas);
+	page = xas_find(&xas, offset | ((1UL << order) - 1));
+	if (xas_retry(&xas, page))
+		goto repeat;
+	/*
+	 * A shadow entry of a recently evicted page, or a swap entry from
+	 * shmem/tmpfs.  Skip it; keep looking for pages.
+	 */
+	if (xa_is_value(page))
+		goto repeat;
+	if (!page)
+		goto out;
+	if (compound_order(page) < order) {
+		page = XA_RETRY_ENTRY;
+		goto out;
+	}
+
+	if (!page_cache_get_speculative(page))
+		goto repeat;
+
+	/*
+	 * Has the page moved or been split?
+	 * This is part of the lockless pagecache protocol. See
+	 * include/linux/pagemap.h for details.
+	 */
+	if (unlikely(page != xas_reload(&xas))) {
+		put_page(page);
+		goto repeat;
+	}
+	page = find_subpage(page, offset);
+out:
+	rcu_read_unlock();
+
+	return page;
+}
 
 /**
  * find_lock_entry - locate, pin and lock a page cache entry
@@ -1614,12 +1678,12 @@ EXPORT_SYMBOL(find_lock_entry);
  * pagecache_get_page - find and get a page reference
  * @mapping: the address_space to search
  * @offset: the page index
- * @fgp_flags: PCG flags
+ * @fgp_flags: FGP flags
  * @gfp_mask: gfp mask to use for the page cache data page allocation
  *
  * Looks up the page cache slot at @mapping & @offset.
  *
- * PCG flags modify how the page is returned.
+ * FGP flags modify how the page is returned.
  *
  * @fgp_flags can be:
  *
@@ -1632,6 +1696,10 @@ EXPORT_SYMBOL(find_lock_entry);
  * - FGP_FOR_MMAP: Similar to FGP_CREAT, only we want to allow the caller to do
  *   its own locking dance if the page is already in cache, or unlock the page
  *   before returning if we had to add the page to pagecache.
+ * - FGP_PMD: We're only interested in pages at PMD granularity.  If there
+ *   is no page here (and FGP_CREAT is set), we'll create one large enough.
+ *   If there is a smaller page in the cache that overlaps the PMD page, we
+ *   return %NULL and do not attempt to create a page.
  *
  * If FGP_LOCK or FGP_CREAT are specified then the function may sleep even
  * if the GFP flags specified for FGP_CREAT are atomic.
@@ -1646,9 +1714,9 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
 	struct page *page;
 
 repeat:
-	page = find_get_entry(mapping, offset);
-	if (xa_is_value(page))
-		page = NULL;
+	page = __find_get_page(mapping, offset, fgp_order(fgp_flags));
+	if (pagecache_is_conflict(page))
+		return NULL;
 	if (!page)
 		goto no_page;
 
@@ -1682,7 +1750,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
 		if (fgp_flags & FGP_NOFS)
 			gfp_mask &= ~__GFP_FS;
 
-		page = __page_cache_alloc(gfp_mask);
+		page = __page_cache_alloc_order(gfp_mask, fgp_order(fgp_flags));
 		if (!page)
 			return NULL;
 
-- 
2.23.0.rc1




* Re: [PATCH 2/3] mm: Allow large pages to be added to the page cache
  2019-09-05 18:23 ` [PATCH 2/3] mm: Allow large pages to be added to the page cache Matthew Wilcox
@ 2019-09-05 18:28   ` Matthew Wilcox
  2019-09-05 20:56   ` kbuild test robot
  2019-09-06 12:09   ` Kirill A. Shutemov
  2 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-05 18:28 UTC (permalink / raw)
  To: linux-mm, linux-fsdevel
  Cc: Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner

On Thu, Sep 05, 2019 at 11:23:47AM -0700, Matthew Wilcox wrote:
> +		xas_for_each_conflict(&xas, old) {
> +			if (!xa_is_value(old))
> +				break;
> +			exceptional++;
> +			if (shadowp)
> +				*shadowp = old;
> +		}
> +		if (old) {
>  			xas_set_err(&xas, -EEXIST);
> -		xas_store(&xas, page);
> +			break;

Of course, one cannot see one's own bugs until one has posted them
publicly.  This will exit the loop with the lock held.

> +		}
> +		xas_create_range(&xas);
>  		if (xas_error(&xas))
>  			goto unlock;
>  

The stanza should read:

                if (old) 
                        xas_set_err(&xas, -EEXIST);
                xas_create_range(&xas);
                if (xas_error(&xas))
                        goto unlock;

just like the corresponding stanza in mm/shmem.c.

(while the xa_state is in an error condition, the xas_create_range()
function will return without doing anything).



* Re: [PATCH 1/3] mm: Add __page_cache_alloc_order
  2019-09-05 18:23 ` [PATCH 1/3] mm: Add __page_cache_alloc_order Matthew Wilcox
@ 2019-09-05 18:58   ` Song Liu
  2019-09-05 19:02     ` Matthew Wilcox
  0 siblings, 1 reply; 21+ messages in thread
From: Song Liu @ 2019-09-05 18:58 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Linux MM, linux-fsdevel, Kirill Shutemov, William Kucharski,
	Johannes Weiner



> On Sep 5, 2019, at 11:23 AM, Matthew Wilcox <willy@infradead.org> wrote:
> 
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> This new function allows page cache pages to be allocated that are
> larger than an order-0 page.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> include/linux/pagemap.h | 14 +++++++++++---
> mm/filemap.c            | 11 +++++++----
> 2 files changed, 18 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 103205494ea0..d2147215d415 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -208,14 +208,22 @@ static inline int page_cache_add_speculative(struct page *page, int count)
> }
> 
> #ifdef CONFIG_NUMA
> -extern struct page *__page_cache_alloc(gfp_t gfp);
> +extern struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order);

I guess we need __page_cache_alloc(gfp_t gfp) here for CONFIG_NUMA. 


> #else
> -static inline struct page *__page_cache_alloc(gfp_t gfp)
> +static inline
> +struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
> {
> -	return alloc_pages(gfp, 0);
> +	if (order > 0)
> +		gfp |= __GFP_COMP;
> +	return alloc_pages(gfp, order);
> }
> #endif
> 
> +static inline struct page *__page_cache_alloc(gfp_t gfp)
> +{
> +	return __page_cache_alloc_order(gfp, 0);

Maybe "return alloc_pages(gfp, 0);" here to avoid checking "order > 0"?

> +}
> +
> static inline struct page *page_cache_alloc(struct address_space *x)
> {
> 	return __page_cache_alloc(mapping_gfp_mask(x));
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 05a5aa82cd32..041c77c4ca56 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -957,24 +957,27 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
> EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
> 
> #ifdef CONFIG_NUMA
> -struct page *__page_cache_alloc(gfp_t gfp)
> +struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
> {
> 	int n;
> 	struct page *page;
> 
> +	if (order > 0)
> +		gfp |= __GFP_COMP;
> +

I think it would be good to have a separate __page_cache_alloc() for order 0,
so that we avoid checking "order > 0", but that may require too much
duplication. So I am on the fence about this one.

Thanks,
Song

> 	if (cpuset_do_page_mem_spread()) {
> 		unsigned int cpuset_mems_cookie;
> 		do {
> 			cpuset_mems_cookie = read_mems_allowed_begin();
> 			n = cpuset_mem_spread_node();
> -			page = __alloc_pages_node(n, gfp, 0);
> +			page = __alloc_pages_node(n, gfp, order);
> 		} while (!page && read_mems_allowed_retry(cpuset_mems_cookie));
> 
> 		return page;
> 	}
> -	return alloc_pages(gfp, 0);
> +	return alloc_pages(gfp, order);
> }
> -EXPORT_SYMBOL(__page_cache_alloc);
> +EXPORT_SYMBOL(__page_cache_alloc_order);
> #endif
> 
> /*
> -- 
> 2.23.0.rc1
> 




* Re: [PATCH 1/3] mm: Add __page_cache_alloc_order
  2019-09-05 18:58   ` Song Liu
@ 2019-09-05 19:02     ` Matthew Wilcox
  2019-09-05 19:06       ` Song Liu
  0 siblings, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-05 19:02 UTC (permalink / raw)
  To: Song Liu
  Cc: Linux MM, linux-fsdevel, Kirill Shutemov, William Kucharski,
	Johannes Weiner

On Thu, Sep 05, 2019 at 06:58:53PM +0000, Song Liu wrote:
> > On Sep 5, 2019, at 11:23 AM, Matthew Wilcox <willy@infradead.org> wrote:
> > This new function allows page cache pages to be allocated that are
> > larger than an order-0 page.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> > include/linux/pagemap.h | 14 +++++++++++---
> > mm/filemap.c            | 11 +++++++----
> > 2 files changed, 18 insertions(+), 7 deletions(-)
> > 
> > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > index 103205494ea0..d2147215d415 100644
> > --- a/include/linux/pagemap.h
> > +++ b/include/linux/pagemap.h
> > @@ -208,14 +208,22 @@ static inline int page_cache_add_speculative(struct page *page, int count)
> > }
> > 
> > #ifdef CONFIG_NUMA
> > -extern struct page *__page_cache_alloc(gfp_t gfp);
> > +extern struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order);
> 
> I guess we need __page_cache_alloc(gfp_t gfp) here for CONFIG_NUMA. 

... no?  The __page_cache_alloc() below is outside the ifdef/else/endif, so
it's the same for both NUMA and non-NUMA.

> > #else
> > -static inline struct page *__page_cache_alloc(gfp_t gfp)
> > +static inline
> > +struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
> > {
> > -	return alloc_pages(gfp, 0);
> > +	if (order > 0)
> > +		gfp |= __GFP_COMP;
> > +	return alloc_pages(gfp, order);
> > }
> > #endif
> > 
> > +static inline struct page *__page_cache_alloc(gfp_t gfp)
> > +{
> > +	return __page_cache_alloc_order(gfp, 0);
> 
> Maybe "return alloc_pages(gfp, 0);" here to avoid checking "order > 0"?

For non-NUMA cases, the __page_cache_alloc_order() will be inlined into
__page_cache_alloc() and the compiler will eliminate the test.  Or you
need a better compiler ;-)

> > -struct page *__page_cache_alloc(gfp_t gfp)
> > +struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
> > {
> > 	int n;
> > 	struct page *page;
> > 
> > +	if (order > 0)
> > +		gfp |= __GFP_COMP;
> > +
> 
> I think it will be good to have separate __page_cache_alloc() for order 0, 
> so that we avoid checking "order > 0", but that may require too much 
> duplication. So I am on the fence for this one. 

We're about to dive into the page allocator ... two extra instructions
here aren't going to be noticeable.



* Re: [PATCH 1/3] mm: Add __page_cache_alloc_order
  2019-09-05 19:02     ` Matthew Wilcox
@ 2019-09-05 19:06       ` Song Liu
  0 siblings, 0 replies; 21+ messages in thread
From: Song Liu @ 2019-09-05 19:06 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Linux MM, linux-fsdevel, Kirill Shutemov, William Kucharski,
	Johannes Weiner



> On Sep 5, 2019, at 12:02 PM, Matthew Wilcox <willy@infradead.org> wrote:
> 
> On Thu, Sep 05, 2019 at 06:58:53PM +0000, Song Liu wrote:
>>> On Sep 5, 2019, at 11:23 AM, Matthew Wilcox <willy@infradead.org> wrote:
>>> This new function allows page cache pages to be allocated that are
>>> larger than an order-0 page.
>>> 
>>> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>>> ---
>>> include/linux/pagemap.h | 14 +++++++++++---
>>> mm/filemap.c            | 11 +++++++----
>>> 2 files changed, 18 insertions(+), 7 deletions(-)
>>> 
>>> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
>>> index 103205494ea0..d2147215d415 100644
>>> --- a/include/linux/pagemap.h
>>> +++ b/include/linux/pagemap.h
>>> @@ -208,14 +208,22 @@ static inline int page_cache_add_speculative(struct page *page, int count)
>>> }
>>> 
>>> #ifdef CONFIG_NUMA
>>> -extern struct page *__page_cache_alloc(gfp_t gfp);
>>> +extern struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order);
>> 
>> I guess we need __page_cache_alloc(gfp_t gfp) here for CONFIG_NUMA. 
> 
> ... no?  The __page_cache_alloc() below is outside the ifdef/else/endif, so
> it's the same for both NUMA and non-NUMA.

You are right. I misread this one. 

> 
>>> #else
>>> -static inline struct page *__page_cache_alloc(gfp_t gfp)
>>> +static inline
>>> +struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
>>> {
>>> -	return alloc_pages(gfp, 0);
>>> +	if (order > 0)
>>> +		gfp |= __GFP_COMP;
>>> +	return alloc_pages(gfp, order);
>>> }
>>> #endif
>>> 
>>> +static inline struct page *__page_cache_alloc(gfp_t gfp)
>>> +{
>>> +	return __page_cache_alloc_order(gfp, 0);
>> 
>> Maybe "return alloc_pages(gfp, 0);" here to avoid checking "order > 0"?
> 
> For non-NUMA cases, the __page_cache_alloc_order() will be inlined into
> __page_cache_alloc() and the copiler will eliminate the test.  Or you
> need a better compiler ;-)
> 
>>> -struct page *__page_cache_alloc(gfp_t gfp)
>>> +struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
>>> {
>>> 	int n;
>>> 	struct page *page;
>>> 
>>> +	if (order > 0)
>>> +		gfp |= __GFP_COMP;
>>> +
>> 
>> I think it will be good to have separate __page_cache_alloc() for order 0, 
>> so that we avoid checking "order > 0", but that may require too much 
>> duplication. So I am on the fence for this one. 
> 
> We're about to dive into the page allocator ... two extra instructions
> here aren't going to be noticable.

True. Thanks for the explanation. 

Acked-by: Song Liu <songliubraving@fb.com>





* Re: [PATCH 2/3] mm: Allow large pages to be added to the page cache
  2019-09-05 18:23 ` [PATCH 2/3] mm: Allow large pages to be added to the page cache Matthew Wilcox
  2019-09-05 18:28   ` Matthew Wilcox
@ 2019-09-05 20:56   ` kbuild test robot
  2019-09-06 12:09   ` Kirill A. Shutemov
  2 siblings, 0 replies; 21+ messages in thread
From: kbuild test robot @ 2019-09-05 20:56 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: kbuild-all, linux-mm, linux-fsdevel, Matthew Wilcox (Oracle),
	Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner


Hi Matthew,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[cannot apply to v5.3-rc7 next-20190904]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Matthew-Wilcox/mm-Add-__page_cache_alloc_order/20190906-034745
config: i386-tinyconfig (attached as .config)
compiler: gcc-7 (Debian 7.4.0-11) 7.4.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   mm/filemap.c: In function '__add_to_page_cache_locked':
>> mm/filemap.c:863:8: error: implicit declaration of function 'compound_nr'; did you mean 'compound_order'? [-Werror=implicit-function-declaration]
      nr = compound_nr(page);
           ^~~~~~~~~~~
           compound_order
   cc1: some warnings being treated as errors

vim +863 mm/filemap.c

   840	
   841	static int __add_to_page_cache_locked(struct page *page,
   842					      struct address_space *mapping,
   843					      pgoff_t offset, gfp_t gfp_mask,
   844					      void **shadowp)
   845	{
   846		XA_STATE(xas, &mapping->i_pages, offset);
   847		int huge = PageHuge(page);
   848		struct mem_cgroup *memcg;
   849		int error;
   850		unsigned int nr = 1;
   851		void *old;
   852	
   853		VM_BUG_ON_PAGE(!PageLocked(page), page);
   854		VM_BUG_ON_PAGE(PageSwapBacked(page), page);
   855		mapping_set_update(&xas, mapping);
   856	
   857		if (!huge) {
   858			error = mem_cgroup_try_charge(page, current->mm,
   859						      gfp_mask, &memcg, false);
   860			if (error)
   861				return error;
   862			xas_set_order(&xas, offset, compound_order(page));
 > 863			nr = compound_nr(page);
   864		}
   865	
   866		page_ref_add(page, nr);
   867		page->mapping = mapping;
   868		page->index = offset;
   869	
   870		do {
   871			unsigned long exceptional = 0;
   872			unsigned int i = 0;
   873	
   874			xas_lock_irq(&xas);
   875			xas_for_each_conflict(&xas, old) {
   876				if (!xa_is_value(old))
   877					break;
   878				exceptional++;
   879				if (shadowp)
   880					*shadowp = old;
   881			}
   882			if (old) {
   883				xas_set_err(&xas, -EEXIST);
   884				break;
   885			}
   886			xas_create_range(&xas);
   887			if (xas_error(&xas))
   888				goto unlock;
   889	
   890	next:
   891			xas_store(&xas, page);
   892			if (++i < nr) {
   893				xas_next(&xas);
   894				goto next;
   895			}
   896			mapping->nrexceptional -= exceptional;
   897			mapping->nrpages += nr;
   898	
   899			/* hugetlb pages do not participate in page cache accounting */
   900			if (!huge)
   901				__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES,
   902							nr);
   903	unlock:
   904			xas_unlock_irq(&xas);
   905		} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
   906	
   907		if (xas_error(&xas))
   908			goto error;
   909	
   910		if (!huge)
   911			mem_cgroup_commit_charge(page, memcg, false, false);
   912		trace_mm_filemap_add_to_page_cache(page);
   913		return 0;
   914	error:
   915		page->mapping = NULL;
   916		/* Leave page->index set: truncation relies upon it */
   917		if (!huge)
   918			mem_cgroup_cancel_charge(page, memcg, false);
   919		page_ref_sub(page, nr);
   920		return xas_error(&xas);
   921	}
   922	ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
   923	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 7185 bytes --]


* Re: [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-05 18:23 ` [PATCH 3/3] mm: Allow find_get_page to be used for large pages Matthew Wilcox
@ 2019-09-05 21:41   ` kbuild test robot
  2019-09-05 22:04   ` kbuild test robot
  2019-09-06 12:59   ` Kirill A. Shutemov
  2 siblings, 0 replies; 21+ messages in thread
From: kbuild test robot @ 2019-09-05 21:41 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: kbuild-all, linux-mm, linux-fsdevel, Matthew Wilcox (Oracle),
	Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner


Hi Matthew,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[cannot apply to v5.3-rc7 next-20190904]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Matthew-Wilcox/mm-Add-__page_cache_alloc_order/20190906-034745
config: nds32-defconfig (attached as .config)
compiler: nds32le-linux-gcc (GCC) 8.1.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=8.1.0 make.cross ARCH=nds32 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/filemap.c: In function '__add_to_page_cache_locked':
   mm/filemap.c:863:8: error: implicit declaration of function 'compound_nr'; did you mean 'compound_order'? [-Werror=implicit-function-declaration]
      nr = compound_nr(page);
           ^~~~~~~~~~~
           compound_order
   mm/filemap.c: In function '__find_get_page':
   mm/filemap.c:1637:9: error: implicit declaration of function 'find_subpage'; did you mean 'find_get_page'? [-Werror=implicit-function-declaration]
     page = find_subpage(page, offset);
            ^~~~~~~~~~~~
            find_get_page
>> mm/filemap.c:1637:7: warning: assignment to 'struct page *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
     page = find_subpage(page, offset);
          ^
   cc1: some warnings being treated as errors

vim +1637 mm/filemap.c

  1583	
  1584	/**
  1585	 * __find_get_page - Find and get a page cache entry.
  1586	 * @mapping: The address_space to search.
  1587	 * @offset: The page cache index.
  1588	 * @order: The minimum order of the entry to return.
  1589	 *
  1590	 * Looks up the page cache entries at @mapping between @offset and
  1591	 * @offset + 2^@order.  If there is a page cache page, it is returned with
  1592	 * an increased refcount unless it is smaller than @order.
  1593	 *
  1594	 * If the slot holds a shadow entry of a previously evicted page, or a
  1595	 * swap entry from shmem/tmpfs, it is returned.
  1596	 *
  1597	 * Return: the found page, a value indicating a conflicting page or %NULL if
  1598	 * there are no pages in this range.
  1599	 */
  1600	static struct page *__find_get_page(struct address_space *mapping,
  1601			unsigned long offset, unsigned int order)
  1602	{
  1603		XA_STATE(xas, &mapping->i_pages, offset);
  1604		struct page *page;
  1605	
  1606		rcu_read_lock();
  1607	repeat:
  1608		xas_reset(&xas);
  1609		page = xas_find(&xas, offset | ((1UL << order) - 1));
  1610		if (xas_retry(&xas, page))
  1611			goto repeat;
  1612		/*
  1613		 * A shadow entry of a recently evicted page, or a swap entry from
  1614		 * shmem/tmpfs.  Skip it; keep looking for pages.
  1615		 */
  1616		if (xa_is_value(page))
  1617			goto repeat;
  1618		if (!page)
  1619			goto out;
  1620		if (compound_order(page) < order) {
  1621			page = XA_RETRY_ENTRY;
  1622			goto out;
  1623		}
  1624	
  1625		if (!page_cache_get_speculative(page))
  1626			goto repeat;
  1627	
  1628		/*
  1629		 * Has the page moved or been split?
  1630		 * This is part of the lockless pagecache protocol. See
  1631		 * include/linux/pagemap.h for details.
  1632		 */
  1633		if (unlikely(page != xas_reload(&xas))) {
  1634			put_page(page);
  1635			goto repeat;
  1636		}
> 1637		page = find_subpage(page, offset);
  1638	out:
  1639		rcu_read_unlock();
  1640	
  1641		return page;
  1642	}
  1643	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 10587 bytes --]


* Re: [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-05 18:23 ` [PATCH 3/3] mm: Allow find_get_page to be used for large pages Matthew Wilcox
  2019-09-05 21:41   ` kbuild test robot
@ 2019-09-05 22:04   ` kbuild test robot
  2019-09-05 22:12     ` Matthew Wilcox
  2019-09-06 12:59   ` Kirill A. Shutemov
  2 siblings, 1 reply; 21+ messages in thread
From: kbuild test robot @ 2019-09-05 22:04 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: kbuild-all, linux-mm, linux-fsdevel, Matthew Wilcox (Oracle),
	Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner


Hi Matthew,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[cannot apply to v5.3-rc7 next-20190904]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Matthew-Wilcox/mm-Add-__page_cache_alloc_order/20190906-034745
config: ia64-allmodconfig (attached as .config)
compiler: ia64-linux-gcc (GCC) 7.4.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.4.0 make.cross ARCH=ia64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All error/warnings (new ones prefixed by >>):

   mm/filemap.c: In function '__add_to_page_cache_locked':
   mm/filemap.c:863:8: error: implicit declaration of function 'compound_nr'; did you mean 'compound_order'? [-Werror=implicit-function-declaration]
      nr = compound_nr(page);
           ^~~~~~~~~~~
           compound_order
   mm/filemap.c: In function '__find_get_page':
>> mm/filemap.c:1637:9: error: implicit declaration of function 'find_subpage'; did you mean 'find_get_page'? [-Werror=implicit-function-declaration]
     page = find_subpage(page, offset);
            ^~~~~~~~~~~~
            find_get_page
>> mm/filemap.c:1637:7: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
     page = find_subpage(page, offset);
          ^
   cc1: some warnings being treated as errors

vim +1637 mm/filemap.c

  1583	
  1584	/**
  1585	 * __find_get_page - Find and get a page cache entry.
  1586	 * @mapping: The address_space to search.
  1587	 * @offset: The page cache index.
  1588	 * @order: The minimum order of the entry to return.
  1589	 *
  1590	 * Looks up the page cache entries at @mapping between @offset and
  1591	 * @offset + 2^@order.  If there is a page cache page, it is returned with
  1592	 * an increased refcount unless it is smaller than @order.
  1593	 *
  1594	 * If the slot holds a shadow entry of a previously evicted page, or a
  1595	 * swap entry from shmem/tmpfs, it is returned.
  1596	 *
  1597	 * Return: the found page, a value indicating a conflicting page or %NULL if
  1598	 * there are no pages in this range.
  1599	 */
  1600	static struct page *__find_get_page(struct address_space *mapping,
  1601			unsigned long offset, unsigned int order)
  1602	{
  1603		XA_STATE(xas, &mapping->i_pages, offset);
  1604		struct page *page;
  1605	
  1606		rcu_read_lock();
  1607	repeat:
  1608		xas_reset(&xas);
  1609		page = xas_find(&xas, offset | ((1UL << order) - 1));
  1610		if (xas_retry(&xas, page))
  1611			goto repeat;
  1612		/*
  1613		 * A shadow entry of a recently evicted page, or a swap entry from
  1614		 * shmem/tmpfs.  Skip it; keep looking for pages.
  1615		 */
  1616		if (xa_is_value(page))
  1617			goto repeat;
  1618		if (!page)
  1619			goto out;
  1620		if (compound_order(page) < order) {
  1621			page = XA_RETRY_ENTRY;
  1622			goto out;
  1623		}
  1624	
  1625		if (!page_cache_get_speculative(page))
  1626			goto repeat;
  1627	
  1628		/*
  1629		 * Has the page moved or been split?
  1630		 * This is part of the lockless pagecache protocol. See
  1631		 * include/linux/pagemap.h for details.
  1632		 */
  1633		if (unlikely(page != xas_reload(&xas))) {
  1634			put_page(page);
  1635			goto repeat;
  1636		}
> 1637		page = find_subpage(page, offset);
  1638	out:
  1639		rcu_read_unlock();
  1640	
  1641		return page;
  1642	}
  1643	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 54582 bytes --]


* Re: [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-05 22:04   ` kbuild test robot
@ 2019-09-05 22:12     ` Matthew Wilcox
  2019-09-09  0:42       ` [kbuild-all] " Rong Chen
  0 siblings, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-05 22:12 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, linux-mm, linux-fsdevel, Kirill Shutemov, Song Liu,
	William Kucharski, Johannes Weiner

On Fri, Sep 06, 2019 at 06:04:05AM +0800, kbuild test robot wrote:
> Hi Matthew,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on linus/master]
> [cannot apply to v5.3-rc7 next-20190904]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

It looks like you're not applying these to the -mm tree?  I thought that
was included in -next.





* Re: [PATCH 2/3] mm: Allow large pages to be added to the page cache
  2019-09-05 18:23 ` [PATCH 2/3] mm: Allow large pages to be added to the page cache Matthew Wilcox
  2019-09-05 18:28   ` Matthew Wilcox
  2019-09-05 20:56   ` kbuild test robot
@ 2019-09-06 12:09   ` Kirill A. Shutemov
  2019-09-06 13:31     ` Matthew Wilcox
  2 siblings, 1 reply; 21+ messages in thread
From: Kirill A. Shutemov @ 2019-09-06 12:09 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-fsdevel, Song Liu, William Kucharski, Johannes Weiner

On Thu, Sep 05, 2019 at 11:23:47AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> We return -EEXIST if there are any non-shadow entries in the page
> cache in the range covered by the large page.  If there are multiple
> shadow entries in the range, we set *shadowp to one of them (currently
> the one at the highest index).  If that turns out to be the wrong
> answer, we can implement something more complex.  This is mostly
> modelled after the equivalent function in the shmem code.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/filemap.c | 39 ++++++++++++++++++++++++++++-----------
>  1 file changed, 28 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 041c77c4ca56..ae3c0a70a8e9 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -850,6 +850,7 @@ static int __add_to_page_cache_locked(struct page *page,
>  	int huge = PageHuge(page);
>  	struct mem_cgroup *memcg;
>  	int error;
> +	unsigned int nr = 1;
>  	void *old;
>  
>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
> @@ -861,31 +862,47 @@ static int __add_to_page_cache_locked(struct page *page,
>  					      gfp_mask, &memcg, false);
>  		if (error)
>  			return error;
> +		xas_set_order(&xas, offset, compound_order(page));
> +		nr = compound_nr(page);
>  	}
>  
> -	get_page(page);
> +	page_ref_add(page, nr);
>  	page->mapping = mapping;
>  	page->index = offset;
>  
>  	do {
> +		unsigned long exceptional = 0;
> +		unsigned int i = 0;
> +
>  		xas_lock_irq(&xas);
> -		old = xas_load(&xas);
> -		if (old && !xa_is_value(old))
> +		xas_for_each_conflict(&xas, old) {
> +			if (!xa_is_value(old))
> +				break;
> +			exceptional++;
> +			if (shadowp)
> +				*shadowp = old;
> +		}
> +		if (old) {
>  			xas_set_err(&xas, -EEXIST);
> -		xas_store(&xas, page);
> +			break;
> +		}
> +		xas_create_range(&xas);
>  		if (xas_error(&xas))
>  			goto unlock;
>  
> -		if (xa_is_value(old)) {
> -			mapping->nrexceptional--;
> -			if (shadowp)
> -				*shadowp = old;
> +next:
> +		xas_store(&xas, page);
> +		if (++i < nr) {
> +			xas_next(&xas);
> +			goto next;
>  		}

Can we have a proper loop here instead of goto?

		do {
			xas_store(&xas, page);
			/* Do not move xas outside the range */
			if (++i != nr)
				xas_next(&xas);
		} while (i < nr);

> -		mapping->nrpages++;
> +		mapping->nrexceptional -= exceptional;
> +		mapping->nrpages += nr;
>  
>  		/* hugetlb pages do not participate in page cache accounting */
>  		if (!huge)
> -			__inc_node_page_state(page, NR_FILE_PAGES);
> +			__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES,
> +						nr);
>  unlock:
>  		xas_unlock_irq(&xas);
>  	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
> @@ -902,7 +919,7 @@ static int __add_to_page_cache_locked(struct page *page,
>  	/* Leave page->index set: truncation relies upon it */
>  	if (!huge)
>  		mem_cgroup_cancel_charge(page, memcg, false);
> -	put_page(page);
> +	page_ref_sub(page, nr);
>  	return xas_error(&xas);
>  }
>  ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
> -- 
> 2.23.0.rc1
> 

-- 
 Kirill A. Shutemov



* Re: [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-05 18:23 ` [PATCH 3/3] mm: Allow find_get_page to be used for large pages Matthew Wilcox
  2019-09-05 21:41   ` kbuild test robot
  2019-09-05 22:04   ` kbuild test robot
@ 2019-09-06 12:59   ` Kirill A. Shutemov
  2019-09-06 13:41     ` Matthew Wilcox
  2 siblings, 1 reply; 21+ messages in thread
From: Kirill A. Shutemov @ 2019-09-06 12:59 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-fsdevel, Song Liu, William Kucharski, Johannes Weiner

On Thu, Sep 05, 2019 at 11:23:48AM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> Add FGP_PMD to indicate that we're trying to find-or-create a page that
> is at least PMD_ORDER in size.  The internal 'conflict' entry usage
> is modelled after that in DAX, but the implementations are different
> due to DAX using multi-order entries and the page cache using multiple
> order-0 entries.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/pagemap.h |  9 +++++
>  mm/filemap.c            | 82 +++++++++++++++++++++++++++++++++++++----
>  2 files changed, 84 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index d2147215d415..72101811524c 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -248,6 +248,15 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
>  #define FGP_NOFS		0x00000010
>  #define FGP_NOWAIT		0x00000020
>  #define FGP_FOR_MMAP		0x00000040
> +/*
> + * If you add more flags, increment FGP_ORDER_SHIFT (no further than 25).

Maybe some BUILD_BUG_ON()s to ensure FGP_ORDER_SHIFT is sane?

> + * Do not insert flags above the FGP order bits.
> + */
> +#define FGP_ORDER_SHIFT		7
> +#define FGP_PMD			((PMD_SHIFT - PAGE_SHIFT) << FGP_ORDER_SHIFT)
> +#define FGP_PUD			((PUD_SHIFT - PAGE_SHIFT) << FGP_ORDER_SHIFT)
> +
> +#define fgp_order(fgp)		((fgp) >> FGP_ORDER_SHIFT)
>  
>  struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
>  		int fgp_flags, gfp_t cache_gfp_mask);
> diff --git a/mm/filemap.c b/mm/filemap.c
> index ae3c0a70a8e9..904dfabbea52 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1572,7 +1572,71 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
>  
>  	return page;
>  }
> -EXPORT_SYMBOL(find_get_entry);
> +
> +static bool pagecache_is_conflict(struct page *page)
> +{
> +	return page == XA_RETRY_ENTRY;
> +}
> +
> +/**
> + * __find_get_page - Find and get a page cache entry.
> + * @mapping: The address_space to search.
> + * @offset: The page cache index.
> + * @order: The minimum order of the entry to return.
> + *
> + * Looks up the page cache entries at @mapping between @offset and
> + * @offset + 2^@order.  If there is a page cache page, it is returned with

Off by one? :P

> + * an increased refcount unless it is smaller than @order.
> + *
> + * If the slot holds a shadow entry of a previously evicted page, or a
> + * swap entry from shmem/tmpfs, it is returned.
> + *
> + * Return: the found page, a value indicating a conflicting page or %NULL if
> + * there are no pages in this range.
> + */
> +static struct page *__find_get_page(struct address_space *mapping,
> +		unsigned long offset, unsigned int order)
> +{
> +	XA_STATE(xas, &mapping->i_pages, offset);
> +	struct page *page;
> +
> +	rcu_read_lock();
> +repeat:
> +	xas_reset(&xas);
> +	page = xas_find(&xas, offset | ((1UL << order) - 1));

Hm. '|' is confusing. What is the expectation about offset?
Is round_down(offset, 1UL << order) expected to be equal to offset?
If yes, please use '+' instead of '|'.

> +	if (xas_retry(&xas, page))
> +		goto repeat;
> +	/*
> +	 * A shadow entry of a recently evicted page, or a swap entry from
> +	 * shmem/tmpfs.  Skip it; keep looking for pages.
> +	 */
> +	if (xa_is_value(page))
> +		goto repeat;
> +	if (!page)
> +		goto out;
> +	if (compound_order(page) < order) {
> +		page = XA_RETRY_ENTRY;
> +		goto out;
> +	}

compound_order() is not stable if you don't have a pin on the page.
Check it after page_cache_get_speculative().

> +
> +	if (!page_cache_get_speculative(page))
> +		goto repeat;
> +
> +	/*
> +	 * Has the page moved or been split?
> +	 * This is part of the lockless pagecache protocol. See
> +	 * include/linux/pagemap.h for details.
> +	 */
> +	if (unlikely(page != xas_reload(&xas))) {
> +		put_page(page);
> +		goto repeat;
> +	}
> +	page = find_subpage(page, offset);
> +out:
> +	rcu_read_unlock();
> +
> +	return page;
> +}
>  
>  /**
>   * find_lock_entry - locate, pin and lock a page cache entry
> @@ -1614,12 +1678,12 @@ EXPORT_SYMBOL(find_lock_entry);
>   * pagecache_get_page - find and get a page reference
>   * @mapping: the address_space to search
>   * @offset: the page index
> - * @fgp_flags: PCG flags
> + * @fgp_flags: FGP flags
>   * @gfp_mask: gfp mask to use for the page cache data page allocation
>   *
>   * Looks up the page cache slot at @mapping & @offset.
>   *
> - * PCG flags modify how the page is returned.
> + * FGP flags modify how the page is returned.
>   *
>   * @fgp_flags can be:
>   *
> @@ -1632,6 +1696,10 @@ EXPORT_SYMBOL(find_lock_entry);
>   * - FGP_FOR_MMAP: Similar to FGP_CREAT, only we want to allow the caller to do
>   *   its own locking dance if the page is already in cache, or unlock the page
>   *   before returning if we had to add the page to pagecache.
> + * - FGP_PMD: We're only interested in pages at PMD granularity.  If there
> + *   is no page here (and FGP_CREATE is set), we'll create one large enough.
> + *   If there is a smaller page in the cache that overlaps the PMD page, we
> + *   return %NULL and do not attempt to create a page.

Is it really the best interface?

Maybe allow the user to pass a bitmask of allowed orders? For THP, order-0
is fine if order-9 has failed.

>   *
>   * If FGP_LOCK or FGP_CREAT are specified then the function may sleep even
>   * if the GFP flags specified for FGP_CREAT are atomic.
> @@ -1646,9 +1714,9 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
>  	struct page *page;
>  
>  repeat:
> -	page = find_get_entry(mapping, offset);
> -	if (xa_is_value(page))
> -		page = NULL;
> +	page = __find_get_page(mapping, offset, fgp_order(fgp_flags));
> +	if (pagecache_is_conflict(page))
> +		return NULL;
>  	if (!page)
>  		goto no_page;
>  
> @@ -1682,7 +1750,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
>  		if (fgp_flags & FGP_NOFS)
>  			gfp_mask &= ~__GFP_FS;
>  
> -		page = __page_cache_alloc(gfp_mask);
> +		page = __page_cache_alloc_order(gfp_mask, fgp_order(fgp_flags));
>  		if (!page)
>  			return NULL;
>  
> -- 
> 2.23.0.rc1
> 

-- 
 Kirill A. Shutemov



* Re: [PATCH 2/3] mm: Allow large pages to be added to the page cache
  2019-09-06 12:09   ` Kirill A. Shutemov
@ 2019-09-06 13:31     ` Matthew Wilcox
  0 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-06 13:31 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: linux-mm, linux-fsdevel, Song Liu, William Kucharski, Johannes Weiner

On Fri, Sep 06, 2019 at 03:09:44PM +0300, Kirill A. Shutemov wrote:
> On Thu, Sep 05, 2019 at 11:23:47AM -0700, Matthew Wilcox wrote:
> > +next:
> > +		xas_store(&xas, page);
> > +		if (++i < nr) {
> > +			xas_next(&xas);
> > +			goto next;
> >  		}
> 
> Can we have a proper loop here instead of goto?
> 
> 		do {
> 			xas_store(&xas, page);
> 			/* Do not move xas ouside the range */
> 			if (++i != nr)
> 				xas_next(&xas);
> 		} while (i < nr);

We could.  I wanted to keep it as close to the shmem.c code as possible,
and this code is scheduled to go away once we're using a single large
entry in the xarray instead of N consecutive entries.
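
(At that point the whole store loop should collapse to roughly a single
store, since xas_set_order() already describes the range; a sketch, assuming
the multi-order entry support lands:)

		xas_store(&xas, page);	/* one entry covers all 2^order indices */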




* Re: [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-06 12:59   ` Kirill A. Shutemov
@ 2019-09-06 13:41     ` Matthew Wilcox
  2019-09-06 13:52       ` Kirill A. Shutemov
  0 siblings, 1 reply; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-06 13:41 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: linux-mm, linux-fsdevel, Song Liu, William Kucharski, Johannes Weiner

On Fri, Sep 06, 2019 at 03:59:28PM +0300, Kirill A. Shutemov wrote:
> > @@ -248,6 +248,15 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
> >  #define FGP_NOFS		0x00000010
> >  #define FGP_NOWAIT		0x00000020
> >  #define FGP_FOR_MMAP		0x00000040
> > +/*
> > + * If you add more flags, increment FGP_ORDER_SHIFT (no further than 25).
> 
> Maybe some BUILD_BUG_ON()s to ensure FGP_ORDER_SHIFT is sane?

Yeah, probably a good idea.
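
Something along these lines, perhaps (illustrative only; where the checks
live, e.g. at the top of pagecache_get_page(), and which flag is currently
the highest are assumptions):

	/* plain FGP flags must fit below the order bits ... */
	BUILD_BUG_ON(FGP_FOR_MMAP >= (1U << FGP_ORDER_SHIFT));
	/* ... and the order encoding must round-trip */
	BUILD_BUG_ON(fgp_order(FGP_PMD) != PMD_SHIFT - PAGE_SHIFT);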

> > +/**
> > + * __find_get_page - Find and get a page cache entry.
> > + * @mapping: The address_space to search.
> > + * @offset: The page cache index.
> > + * @order: The minimum order of the entry to return.
> > + *
> > + * Looks up the page cache entries at @mapping between @offset and
> > + * @offset + 2^@order.  If there is a page cache page, it is returned with
> 
> Off by one? :P

Hah!  I thought it reasonable to be ambiguous in the English description
...  it's not entirely uncommon to describe something being 'between A
and B' when meaning ">= A and < B".

> > +static struct page *__find_get_page(struct address_space *mapping,
> > +		unsigned long offset, unsigned int order)
> > +{
> > +	XA_STATE(xas, &mapping->i_pages, offset);
> > +	struct page *page;
> > +
> > +	rcu_read_lock();
> > +repeat:
> > +	xas_reset(&xas);
> > +	page = xas_find(&xas, offset | ((1UL << order) - 1));
> 
> Hm. '|' is confusing. What is expectation about offset?
> Is round_down(offset, 1UL << order) expected to be equal offset?
> If yes, please use '+' instead of '|'.

Might make sense to put in ...

	VM_BUG_ON(offset & ((1UL << order) - 1));

> > +	if (xas_retry(&xas, page))
> > +		goto repeat;
> > +	/*
> > +	 * A shadow entry of a recently evicted page, or a swap entry from
> > +	 * shmem/tmpfs.  Skip it; keep looking for pages.
> > +	 */
> > +	if (xa_is_value(page))
> > +		goto repeat;
> > +	if (!page)
> > +		goto out;
> > +	if (compound_order(page) < order) {
> > +		page = XA_RETRY_ENTRY;
> > +		goto out;
> > +	}
> 
> compound_order() is not stable if you don't have pin on the page.
> Check it after page_cache_get_speculative().

Maybe check both before and after?  If we check it before, we don't bother
to bump the refcount on a page which is too small.
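
Roughly like this, then (a sketch against __find_get_page() above; the
'conflict' label is hypothetical and stands for the existing
"page = XA_RETRY_ENTRY; goto out;" handling):

	if (compound_order(page) < order)	/* cheap check, no reference yet */
		goto conflict;
	if (!page_cache_get_speculative(page))
		goto repeat;
	if (compound_order(page) < order) {	/* re-check with a reference held */
		put_page(page);
		goto conflict;
	}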

> > @@ -1632,6 +1696,10 @@ EXPORT_SYMBOL(find_lock_entry);
> >   * - FGP_FOR_MMAP: Similar to FGP_CREAT, only we want to allow the caller to do
> >   *   its own locking dance if the page is already in cache, or unlock the page
> >   *   before returning if we had to add the page to pagecache.
> > + * - FGP_PMD: We're only interested in pages at PMD granularity.  If there
> > + *   is no page here (and FGP_CREATE is set), we'll create one large enough.
> > + *   If there is a smaller page in the cache that overlaps the PMD page, we
> > + *   return %NULL and do not attempt to create a page.
> 
> Is it really the best inteface?
> 
> Maybe allow user to ask bitmask of allowed orders? For THP order-0 is fine
> if order-9 has failed.

That's the semantics that filemap_huge_fault() wants.  If the page isn't
available at order-9, it needs to return VM_FAULT_FALLBACK (and the VM
will call into filemap_fault() to handle the regular sized fault).

Now, maybe there are other users who want to specify "create a page of
this size if you can, but if there's already something there smaller,
return that".  We can add another FGP flag when those show up ;-)


Thanks for the review.



* Re: [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-06 13:41     ` Matthew Wilcox
@ 2019-09-06 13:52       ` Kirill A. Shutemov
  2019-09-06 15:22         ` Matthew Wilcox
  0 siblings, 1 reply; 21+ messages in thread
From: Kirill A. Shutemov @ 2019-09-06 13:52 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-fsdevel, Song Liu, William Kucharski, Johannes Weiner

On Fri, Sep 06, 2019 at 06:41:45AM -0700, Matthew Wilcox wrote:
> On Fri, Sep 06, 2019 at 03:59:28PM +0300, Kirill A. Shutemov wrote:
> > > +/**
> > > + * __find_get_page - Find and get a page cache entry.
> > > + * @mapping: The address_space to search.
> > > + * @offset: The page cache index.
> > > + * @order: The minimum order of the entry to return.
> > > + *
> > > + * Looks up the page cache entries at @mapping between @offset and
> > > + * @offset + 2^@order.  If there is a page cache page, it is returned with
> > 
> > Off by one? :P
> 
> Hah!  I thought it reasonable to be ambiguous in the English description
> ...  it's not entirely uncommon to describe something being 'between A
> and B' when meaning ">= A and < B".

It is reasonable. It was just a nitpick.

> > > +	if (compound_order(page) < order) {
> > > +		page = XA_RETRY_ENTRY;
> > > +		goto out;
> > > +	}
> > 
> > compound_order() is not stable if you don't have pin on the page.
> > Check it after page_cache_get_speculative().
> 
> Maybe check both before and after?  If we check it before, we don't bother
> to bump the refcount on a page which is too small.

Makes sense. False positives should be rare enough to ignore.

> > > @@ -1632,6 +1696,10 @@ EXPORT_SYMBOL(find_lock_entry);
> > >   * - FGP_FOR_MMAP: Similar to FGP_CREAT, only we want to allow the caller to do
> > >   *   its own locking dance if the page is already in cache, or unlock the page
> > >   *   before returning if we had to add the page to pagecache.
> > > + * - FGP_PMD: We're only interested in pages at PMD granularity.  If there
> > > + *   is no page here (and FGP_CREATE is set), we'll create one large enough.
> > > + *   If there is a smaller page in the cache that overlaps the PMD page, we
> > > + *   return %NULL and do not attempt to create a page.
> > 
> > Is it really the best interface?
> > 
> > Maybe allow the user to ask for a bitmask of allowed orders? For THP,
> > order-0 is fine if order-9 has failed.
> 
> That's the semantics that filemap_huge_fault() wants.  If the page isn't
> available at order-9, it needs to return VM_FAULT_FALLBACK (and the VM
> will call into filemap_fault() to handle the regular sized fault).

Ideally, we should not have a division between ->fault and ->huge_fault.
Integrating them would give a shorter fallback loop, and a more flexible
interface here would be a benefit.

But I guess it's out-of-scope of the patchset.

-- 
 Kirill A. Shutemov


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-06 13:52       ` Kirill A. Shutemov
@ 2019-09-06 15:22         ` Matthew Wilcox
  0 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-06 15:22 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: linux-mm, linux-fsdevel, Song Liu, William Kucharski, Johannes Weiner

On Fri, Sep 06, 2019 at 04:52:15PM +0300, Kirill A. Shutemov wrote:
> On Fri, Sep 06, 2019 at 06:41:45AM -0700, Matthew Wilcox wrote:
> > On Fri, Sep 06, 2019 at 03:59:28PM +0300, Kirill A. Shutemov wrote:
> > > > + * - FGP_PMD: We're only interested in pages at PMD granularity.  If there
> > > > + *   is no page here (and FGP_CREAT is set), we'll create one large enough.
> > > > + *   If there is a smaller page in the cache that overlaps the PMD page, we
> > > > + *   return %NULL and do not attempt to create a page.
> > > 
> > > Is it really the best interface?
> > > 
> > > Maybe allow the user to ask for a bitmask of allowed orders? For THP,
> > > order-0 is fine if order-9 has failed.
> > 
> > That's the semantics that filemap_huge_fault() wants.  If the page isn't
> > available at order-9, it needs to return VM_FAULT_FALLBACK (and the VM
> > will call into filemap_fault() to handle the regular sized fault).
> 
> Ideally, we should not have a division between ->fault and ->huge_fault.
> Integrating them would give a shorter fallback loop, and a more flexible
> interface here would be a benefit.
> 
> But I guess it's out-of-scope of the patchset.

Heh, just a little bit ... there are about 150 occurrences of
vm_operations_struct in the kernel, and I don't fancy one bit converting
them all to use ->huge_fault instead of ->fault!
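
For reference, the PMD-level dispatch today is roughly this (simplified from
mm/memory.c of this vintage), so a filesystem only gets the large fault if it
has explicitly wired up ->huge_fault, and VM_FAULT_FALLBACK sends the whole
thing back through handle_pte_fault() and ->fault:

	static vm_fault_t create_huge_pmd(struct vm_fault *vmf)
	{
		if (vma_is_anonymous(vmf->vma))
			return do_huge_pmd_anonymous_page(vmf);
		if (vmf->vma->vm_ops->huge_fault)
			return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
		return VM_FAULT_FALLBACK;
	}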


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 4/3] Prepare transhuge pages properly
  2019-09-05 18:23 [PATCH 0/3] Large pages in the page cache Matthew Wilcox
                   ` (2 preceding siblings ...)
  2019-09-05 18:23 ` [PATCH 3/3] mm: Allow find_get_page to be used for large pages Matthew Wilcox
@ 2019-09-06 15:59 ` Matthew Wilcox
  3 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-06 15:59 UTC (permalink / raw)
  To: linux-mm, linux-fsdevel
  Cc: Kirill Shutemov, Song Liu, William Kucharski, Johannes Weiner


Bill pointed out I'd forgotten to call prep_transhuge_page().  I'll
fold this into some of the other commits, but this is what I'm thinking
of doing in case anyone has a better idea:

Basically, I prefer being able to do this:

-	return alloc_pages(gfp, order);
+	return prep_transhuge_page(alloc_pages(gfp, order));

to this:

+	struct page *page;
-	return alloc_pages(gfp, order);
+	page = alloc_pages(gfp, order);
+	if (page && (gfp & __GFP_COMP))
+		prep_transhuge_page(page);
+	return page;

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 45ede62aa85b..159e63438806 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -153,7 +153,7 @@ extern unsigned long thp_get_unmapped_area(struct file *filp,
 		unsigned long addr, unsigned long len, unsigned long pgoff,
 		unsigned long flags);
 
-extern void prep_transhuge_page(struct page *page);
+extern struct page *prep_transhuge_page(struct page *page);
 extern void free_transhuge_page(struct page *page);
 
 bool can_split_huge_page(struct page *page, int *pextra_pins);
@@ -294,7 +294,10 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
 	return false;
 }
 
-static inline void prep_transhuge_page(struct page *page) {}
+static inline struct page *prep_transhuge_page(struct page *page)
+{
+	return page;
+}
 
 #define transparent_hugepage_flags 0UL
 
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 72101811524c..8b9d672d868c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -215,7 +215,7 @@ struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
 {
 	if (order > 0)
 		gfp |= __GFP_COMP;
-	return alloc_pages(gfp, order);
+	return prep_transhuge_page(alloc_pages(gfp, order));
 }
 #endif
 
diff --git a/mm/filemap.c b/mm/filemap.c
index a7fa3a50f750..c2b11799b968 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -986,11 +986,12 @@ struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
 			cpuset_mems_cookie = read_mems_allowed_begin();
 			n = cpuset_mem_spread_node();
 			page = __alloc_pages_node(n, gfp, order);
+			prep_transhuge_page(page);
 		} while (!page && read_mems_allowed_retry(cpuset_mems_cookie));
 
 		return page;
 	}
-	return alloc_pages(gfp, order);
+	return prep_transhuge_page(alloc_pages(gfp, order));
 }
 EXPORT_SYMBOL(__page_cache_alloc_order);
 #endif
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 483b07b2d6ae..3961af907dd7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -502,15 +502,20 @@ static inline struct list_head *page_deferred_list(struct page *page)
 	return &page[2].deferred_list;
 }
 
-void prep_transhuge_page(struct page *page)
+struct page *prep_transhuge_page(struct page *page)
 {
+	if (!page || compound_order(page) == 0)
+		return page;
 	/*
-	 * we use page->mapping and page->indexlru in second tail page
+	 * we use page->mapping and page->index in second tail page
 	 * as list_head: assuming THP order >= 2
 	 */
+	BUG_ON(compound_order(page) == 1);
 
 	INIT_LIST_HEAD(page_deferred_list(page));
 	set_compound_page_dtor(page, TRANSHUGE_PAGE_DTOR);
+
+	return page;
 }
 
 static unsigned long __thp_get_unmapped_area(struct file *filp, unsigned long len,



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [kbuild-all] [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-05 22:12     ` Matthew Wilcox
@ 2019-09-09  0:42       ` Rong Chen
  2019-09-09  1:12         ` Matthew Wilcox
  0 siblings, 1 reply; 21+ messages in thread
From: Rong Chen @ 2019-09-09  0:42 UTC (permalink / raw)
  To: Matthew Wilcox, kbuild test robot
  Cc: Song Liu, Johannes Weiner, William Kucharski, linux-mm,
	kbuild-all, linux-fsdevel, Kirill Shutemov



On 9/6/19 6:12 AM, Matthew Wilcox wrote:
> On Fri, Sep 06, 2019 at 06:04:05AM +0800, kbuild test robot wrote:
>> Hi Matthew,
>>
>> Thank you for the patch! Yet something to improve:
>>
>> [auto build test ERROR on linus/master]
>> [cannot apply to v5.3-rc7 next-20190904]
>> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> It looks like you're not applying these to the -mm tree?  I thought that
> was included in -next.

Hi,

Sorry for the inconvenience, we'll look into it. 0day-CI now recognizes the
'--base' option of git format-patch, which records base tree info in the
patches. Could you kindly use it so the robot can pick the right base tree?
Please see https://stackoverflow.com/a/37406982
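(For this series that would mean, e.g., passing --base=auto or --base=<commit>
to git format-patch when generating the patches.)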

Best Regards,
Rong Chen

>
>
> _______________________________________________
> kbuild-all mailing list
> kbuild-all@lists.01.org
> https://lists.01.org/mailman/listinfo/kbuild-all



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [kbuild-all] [PATCH 3/3] mm: Allow find_get_page to be used for large pages
  2019-09-09  0:42       ` [kbuild-all] " Rong Chen
@ 2019-09-09  1:12         ` Matthew Wilcox
  0 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2019-09-09  1:12 UTC (permalink / raw)
  To: Rong Chen
  Cc: kbuild test robot, Song Liu, Johannes Weiner, William Kucharski,
	linux-mm, kbuild-all, linux-fsdevel, Kirill Shutemov

On Mon, Sep 09, 2019 at 08:42:03AM +0800, Rong Chen wrote:
> 
> 
> On 9/6/19 6:12 AM, Matthew Wilcox wrote:
> > On Fri, Sep 06, 2019 at 06:04:05AM +0800, kbuild test robot wrote:
> > > Hi Matthew,
> > > 
> > > Thank you for the patch! Yet something to improve:
> > > 
> > > [auto build test ERROR on linus/master]
> > > [cannot apply to v5.3-rc7 next-20190904]
> > > [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> > It looks like you're not applying these to the -mm tree?  I thought that
> > was included in -next.
> 
> Hi,
> 
> Sorry for the inconvenience, we'll look into it. 0day-CI now recognizes the
> '--base' option of git format-patch, which records base tree info in the
> patches. Could you kindly use it so the robot can pick the right base tree?
> Please see https://stackoverflow.com/a/37406982

There isn't a stable git base tree to work from with mmotm:

https://www.ozlabs.org/~akpm/mmotm/mmotm-readme.txt


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2019-09-09  1:13 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-05 18:23 [PATCH 0/3] Large pages in the page cache Matthew Wilcox
2019-09-05 18:23 ` [PATCH 1/3] mm: Add __page_cache_alloc_order Matthew Wilcox
2019-09-05 18:58   ` Song Liu
2019-09-05 19:02     ` Matthew Wilcox
2019-09-05 19:06       ` Song Liu
2019-09-05 18:23 ` [PATCH 2/3] mm: Allow large pages to be added to the page cache Matthew Wilcox
2019-09-05 18:28   ` Matthew Wilcox
2019-09-05 20:56   ` kbuild test robot
2019-09-06 12:09   ` Kirill A. Shutemov
2019-09-06 13:31     ` Matthew Wilcox
2019-09-05 18:23 ` [PATCH 3/3] mm: Allow find_get_page to be used for large pages Matthew Wilcox
2019-09-05 21:41   ` kbuild test robot
2019-09-05 22:04   ` kbuild test robot
2019-09-05 22:12     ` Matthew Wilcox
2019-09-09  0:42       ` [kbuild-all] " Rong Chen
2019-09-09  1:12         ` Matthew Wilcox
2019-09-06 12:59   ` Kirill A. Shutemov
2019-09-06 13:41     ` Matthew Wilcox
2019-09-06 13:52       ` Kirill A. Shutemov
2019-09-06 15:22         ` Matthew Wilcox
2019-09-06 15:59 ` [PATCH 4/3] Prepare transhuge pages properly Matthew Wilcox

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).