Linux-XFS Archive on lore.kernel.org
* [RFC v2 0/9] Replacing the readpages a_op
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This is an attempt to add a ->readahead op to replace ->readpages.  I've
converted two users, iomap/xfs and cifs.  The cifs conversion is lacking
fscache support, and that's just because I didn't want to do that work;
I don't believe there's anything fundamental to it.  But I wanted to do
iomap because it is The Infrastructure Of The Future and cifs because it
is the sole remaining user of add_to_page_cache_locked(), which enables
the last two patches in the series.  By the way, that gives CIFS access
to the workingset shadow infrastructure, which it had to ignore before
because it couldn't put pages onto the lru list at the right time.

v2: Chris asked me to show what this would look like if we just have
the implementation look up the pages in the page cache, and I managed
to figure out some things I'd done wrong last time.  It's even simpler
than v1 (net 104 lines deleted).
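
For a quick picture of the change, here are the two operations side
by side (signatures copied from patch 5; the summary sentence below
is mine):

	int (*readpages)(struct file *filp, struct address_space *mapping,
			struct list_head *pages, unsigned nr_pages);
	unsigned (*readahead)(struct file *, struct address_space *,
			pgoff_t start, unsigned nr_pages);

With ->readahead, the pages are already locked and in the page cache
when the filesystem is called, and the return value is the number of
pages the filesystem did not attempt I/O on.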

Matthew Wilcox (Oracle) (9):
  mm: Fix the return type of __do_page_cache_readahead
  readahead: Ignore return value of ->readpages
  XArray: Add xarray_for_each_range
  readahead: Put pages in cache earlier
  mm: Add readahead address space operation
  iomap,xfs: Convert from readpages to readahead
  cifs: Convert from readpages to readahead
  mm: Remove add_to_page_cache_locked
  mm: Unify all add_to_page_cache variants

 Documentation/core-api/xarray.rst     |  10 +-
 Documentation/filesystems/locking.rst |   7 +-
 Documentation/filesystems/vfs.rst     |  11 ++
 fs/cifs/file.c                        | 143 +++++---------------------
 fs/iomap/buffered-io.c                |  72 +++----------
 fs/iomap/trace.h                      |   2 +-
 fs/xfs/xfs_aops.c                     |  10 +-
 include/linux/fs.h                    |   2 +
 include/linux/iomap.h                 |   2 +-
 include/linux/pagemap.h               |  25 ++---
 include/linux/xarray.h                |  30 ++++++
 mm/filemap.c                          |  72 ++++---------
 mm/internal.h                         |   2 +-
 mm/readahead.c                        |  76 +++++++++-----
 14 files changed, 180 insertions(+), 284 deletions(-)

-- 
2.24.1



* [PATCH v2 1/9] mm: Fix the return type of __do_page_cache_readahead
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

ra_submit(), which is a wrapper around __do_page_cache_readahead(),
already returns an unsigned long, and the 'nr_to_read' parameter is an
unsigned long, so fix __do_page_cache_readahead() to return an unsigned
long as well, even though I'm pretty sure we're never going to read
ahead more than 2^32 pages.
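
For context, the wrapper in question looks like this (quoted from
memory out of mm/internal.h, so treat it as a sketch):

	static inline unsigned long ra_submit(struct file_ra_state *ra,
			struct address_space *mapping, struct file *filp)
	{
		return __do_page_cache_readahead(mapping, filp,
				ra->start, ra->size, ra->async_size);
	}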

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h  | 2 +-
 mm/readahead.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 3cf20ab3ca01..41b93c4b3ab7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,7 +49,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 
-extern unsigned int __do_page_cache_readahead(struct address_space *mapping,
+extern unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size);
 
diff --git a/mm/readahead.c b/mm/readahead.c
index 2fe72cd29b47..6bf73ef33b7e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -152,7 +152,7 @@ static int read_pages(struct address_space *mapping, struct file *filp,
  *
  * Returns the number of pages requested, or the maximum amount of I/O allowed.
  */
-unsigned int __do_page_cache_readahead(struct address_space *mapping,
+unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size)
 {
@@ -161,7 +161,7 @@ unsigned int __do_page_cache_readahead(struct address_space *mapping,
 	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	int page_idx;
-	unsigned int nr_pages = 0;
+	unsigned long nr_pages = 0;
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
 
-- 
2.24.1



* [PATCH v2 2/9] readahead: Ignore return value of ->readpages
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

We used to assign the return value to a variable, which we then ignored.
Remove the pretence of caring.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/readahead.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 6bf73ef33b7e..fc77d13af556 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -113,17 +113,16 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
 
 EXPORT_SYMBOL(read_cache_pages);
 
-static int read_pages(struct address_space *mapping, struct file *filp,
+static void read_pages(struct address_space *mapping, struct file *filp,
 		struct list_head *pages, unsigned int nr_pages, gfp_t gfp)
 {
 	struct blk_plug plug;
 	unsigned page_idx;
-	int ret;
 
 	blk_start_plug(&plug);
 
 	if (mapping->a_ops->readpages) {
-		ret = mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
+		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
 		goto out;
@@ -136,12 +135,9 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 			mapping->a_ops->readpage(filp, page);
 		put_page(page);
 	}
-	ret = 0;
 
 out:
 	blk_finish_plug(&plug);
-
-	return ret;
 }
 
 /*
-- 
2.24.1



* [PATCH v2 3/9] XArray: Add xarray_for_each_range
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This macro supports iterating over a range of an XArray.  Also add
documentation links for xa_for_each_start().
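
For illustration, a sketch of a caller (dump_range() and the pr_info()
format are made up; 'xa' is any populated XArray):

	static void dump_range(struct xarray *xa)
	{
		unsigned long index;
		void *entry;

		/* visit only the present entries at indices 16..31 */
		xa_for_each_range(xa, index, entry, 16, 31)
			pr_info("index %lu: entry %px\n", index, entry);
	}

This behaves like xa_for_each_start(xa, index, entry, 16), except the
iteration also stops once index passes 31.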

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/core-api/xarray.rst | 10 ++++++----
 include/linux/xarray.h            | 30 ++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index fcedc5349ace..95a732b5c848 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -94,10 +94,10 @@ calling xa_clear_mark().  You can ask whether any entry in the
 XArray has a particular mark set by calling xa_marked().
 
 You can copy entries out of the XArray into a plain array by calling
-xa_extract().  Or you can iterate over the present entries in
-the XArray by calling xa_for_each().  You may prefer to use
-xa_find() or xa_find_after() to move to the next present
-entry in the XArray.
+xa_extract().  Or you can iterate over the present entries in the XArray
+by calling xa_for_each(), xa_for_each_start() or xa_for_each_range().
+You may prefer to use xa_find() or xa_find_after() to move to the next
+present entry in the XArray.
 
 Calling xa_store_range() stores the same entry in a range
 of indices.  If you do this, some of the other operations will behave
@@ -180,6 +180,8 @@ No lock needed:
 Takes RCU read lock:
  * xa_load()
  * xa_for_each()
+ * xa_for_each_start()
+ * xa_for_each_range()
  * xa_find()
  * xa_find_after()
  * xa_extract()
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 86eecbd98e84..a06cb555fe23 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -445,6 +445,36 @@ static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark)
 	     entry;							\
 	     entry = xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT))
 
+/**
+ * xa_for_each_range() - Iterate over a portion of an XArray.
+ * @xa: XArray.
+ * @index: Index of @entry.
+ * @entry: Entry retrieved from array.
+ * @start: First index to retrieve from array.
+ * @last: Last index to retrieve from array.
+ *
+ * During the iteration, @entry will have the value of the entry stored
+ * in @xa at @index.  You may modify @index during the iteration if you
+ * want to skip or reprocess indices.  It is safe to modify the array
+ * during the iteration.  At the end of the iteration, @entry will be set
+ * to NULL and @index will have a value less than or equal to @last.
+ *
+ * xa_for_each_range() is O(n.log(n)) while xas_for_each() is O(n).  You have
+ * to handle your own locking with xas_for_each(), and if you have to unlock
+ * after each iteration, it will also end up being O(n.log(n)).
+ * xa_for_each_range() will spin if it hits a retry entry; if you intend to
+ * see retry entries, you should use the xas_for_each() iterator instead.
+ * The xas_for_each() iterator will expand into more inline code than
+ * xa_for_each_range().
+ *
+ * Context: Any context.  Takes and releases the RCU lock.
+ */
+#define xa_for_each_range(xa, index, entry, start, last)		\
+	for (index = start,						\
+	     entry = xa_find(xa, &index, last, XA_PRESENT);		\
+	     entry;							\
+	     entry = xa_find_after(xa, &index, last, XA_PRESENT))
+
 /**
  * xa_for_each() - Iterate over present entries in an XArray.
  * @xa: XArray.
-- 
2.24.1



* [PATCH v2 4/9] readahead: Put pages in cache earlier
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

At allocation time, put the pages in the cache unless we're using
->readpages.  This lets a new ->readahead operation (added by the next
patch) find the pages in the page cache by index instead of on a list.
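
The core of the change, pulled out of the diff below for readability
(use_list is true when the filesystem still provides ->readpages):

	page = __page_cache_alloc(gfp_mask);
	...
	if (use_list) {
		/* ->readpages() path: batch on a list, as before */
		page->index = page_offset;
		list_add(&page->lru, &page_pool);
	} else if (add_to_page_cache_lru(page, mapping, page_offset,
				gfp_mask)) {
		/* somebody else added it: drop ours, flush the batch */
	}

read_pages() then finds the already-cached pages with
xa_for_each_range() instead of walking a list.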

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/readahead.c | 51 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 19 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index fc77d13af556..5a6676640f20 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -114,10 +114,10 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
 EXPORT_SYMBOL(read_cache_pages);
 
 static void read_pages(struct address_space *mapping, struct file *filp,
-		struct list_head *pages, unsigned int nr_pages, gfp_t gfp)
+		struct list_head *pages, pgoff_t start,
+		unsigned int nr_pages)
 {
 	struct blk_plug plug;
-	unsigned page_idx;
 
 	blk_start_plug(&plug);
 
@@ -125,18 +125,17 @@ static void read_pages(struct address_space *mapping, struct file *filp,
 		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
-		goto out;
-	}
+	} else {
+		struct page *page;
+		unsigned long index;
 
-	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
-		struct page *page = lru_to_page(pages);
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, mapping, page->index, gfp))
+		xa_for_each_range(&mapping->i_pages, index, page, start,
+				start + nr_pages - 1) {
 			mapping->a_ops->readpage(filp, page);
-		put_page(page);
+			put_page(page);
+		}
 	}
 
-out:
 	blk_finish_plug(&plug);
 }
 
@@ -157,9 +156,11 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	int page_idx;
+	pgoff_t page_offset;
 	unsigned long nr_pages = 0;
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
+	bool use_list = mapping->a_ops->readpages;
 
 	if (isize == 0)
 		goto out;
@@ -170,7 +171,7 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	 * Preallocate as many pages as we will need.
 	 */
 	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
-		pgoff_t page_offset = offset + page_idx;
+		page_offset = offset + page_idx;
 
 		if (page_offset > end_index)
 			break;
@@ -178,13 +179,14 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		page = xa_load(&mapping->i_pages, page_offset);
 		if (page && !xa_is_value(page)) {
 			/*
-			 * Page already present?  Kick off the current batch of
-			 * contiguous pages before continuing with the next
-			 * batch.
+			 * Page already present?  Kick off the current batch
+			 * of contiguous pages before continuing with the
+			 * next batch.
 			 */
 			if (nr_pages)
-				read_pages(mapping, filp, &page_pool, nr_pages,
-						gfp_mask);
+				read_pages(mapping, filp, &page_pool,
+						page_offset - nr_pages,
+						nr_pages);
 			nr_pages = 0;
 			continue;
 		}
@@ -192,8 +194,18 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		page = __page_cache_alloc(gfp_mask);
 		if (!page)
 			break;
-		page->index = page_offset;
-		list_add(&page->lru, &page_pool);
+		if (use_list) {
+			page->index = page_offset;
+			list_add(&page->lru, &page_pool);
+		} else if (add_to_page_cache_lru(page, mapping, page_offset,
+					gfp_mask)) {
+			put_page(page);
+			if (nr_pages)
+				read_pages(mapping, filp, &page_pool,
+						page_offset - nr_pages, nr_pages);
+			nr_pages = 0;
+			continue;
+		}
 		if (page_idx == nr_to_read - lookahead_size)
 			SetPageReadahead(page);
 		nr_pages++;
@@ -205,7 +217,8 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	 * will then handle the error.
 	 */
 	if (nr_pages)
-		read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask);
+		read_pages(mapping, filp, &page_pool, page_offset - nr_pages,
+				nr_pages);
 	BUG_ON(!list_empty(&page_pool));
 out:
 	return nr_pages;
-- 
2.24.1



* [PATCH v2 5/9] mm: Add readahead address space operation
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This replaces ->readpages with a saner interface:
 - Return the number of pages not read instead of an ignored error code.
 - Pages are already in the page cache when ->readahead is called.
 - Implementation looks up the pages in the page cache instead of
   having them passed in a linked list.
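
As an illustration only, a minimal conforming implementation for a
hypothetical filesystem might look like this (example_readahead() and
example_submit_read() are made up; note that readahead_page(), as
added below, takes a byte offset in this version):

	static unsigned example_readahead(struct file *file,
			struct address_space *mapping, pgoff_t start,
			unsigned nr_pages)
	{
		unsigned i;

		for (i = 0; i < nr_pages; i++) {
			struct page *page = readahead_page(mapping,
					(loff_t)(start + i) * PAGE_SIZE);

			/* the I/O completion handler unlocks the page */
			example_submit_read(page);
			put_page(page);
		}

		return 0;	/* I/O was attempted on every page */
	}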

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/filesystems/locking.rst |  7 ++++++-
 Documentation/filesystems/vfs.rst     | 11 +++++++++++
 include/linux/fs.h                    |  2 ++
 include/linux/pagemap.h               | 12 ++++++++++++
 mm/readahead.c                        | 13 ++++++++++++-
 5 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 5057e4d9dcd1..d8a5dde914b5 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -239,6 +239,8 @@ prototypes::
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	int (*set_page_dirty)(struct page *page);
+	unsigned (*readahead)(struct file *, struct address_space *,
+				 pgoff_t start, unsigned nr_pages);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
@@ -271,7 +273,8 @@ writepage:		yes, unlocks (see below)
 readpage:		yes, unlocks
 writepages:
 set_page_dirty		no
-readpages:
+readahead:              yes, unlocks
+readpages:              no
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
@@ -295,6 +298,8 @@ the request handler (/dev/loop).
 ->readpage() unlocks the page, either synchronously or via I/O
 completion.
 
+->readahead() unlocks the pages like ->readpage() does.
+
 ->readpages() populates the pagecache with the passed pages and starts
 I/O against them.  They come unlocked upon I/O completion.
 
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index 7d4d09dd5e6d..bb06fb7b120b 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -706,6 +706,8 @@ cache in your filesystem.  The following members are defined:
 		int (*readpage)(struct file *, struct page *);
 		int (*writepages)(struct address_space *, struct writeback_control *);
 		int (*set_page_dirty)(struct page *page);
+		unsigned (*readahead)(struct file *filp, struct address_space *mapping,
+				 pgoff_t start, unsigned nr_pages);
 		int (*readpages)(struct file *filp, struct address_space *mapping,
 				 struct list_head *pages, unsigned nr_pages);
 		int (*write_begin)(struct file *, struct address_space *mapping,
@@ -781,6 +783,15 @@ cache in your filesystem.  The following members are defined:
 	If defined, it should set the PageDirty flag, and the
 	PAGECACHE_TAG_DIRTY tag in the radix tree.
 
+``readahead``
+	called by the VM to read pages associated with the address_space
+	object.  The pages are consecutive in the page cache and are
+	locked.  The implementation should decrement the page refcount
+	after attempting I/O on each page.  Usually the page will be
+	unlocked by the I/O completion handler.  If the function does not
+	attempt I/O on some pages, return the number of pages which were
+	not read so the common code can unlock the pages for you.
+
 ``readpages``
 	called by the VM to read pages associated with the address_space
 	object.  This is essentially just a vector version of readpage.
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 98e0349adb52..a10f3a72e5ac 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -375,6 +375,8 @@ struct address_space_operations {
 	 */
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
+	unsigned (*readahead)(struct file *, struct address_space *,
+			pgoff_t start, unsigned nr_pages);
 
 	int (*write_begin)(struct file *, struct address_space *mapping,
 				loff_t pos, unsigned len, unsigned flags,
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 37a4d9e32cd3..0821f584c43c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -630,6 +630,18 @@ static inline int add_to_page_cache(struct page *page,
 	return error;
 }
 
+/*
+ * Only call this from a ->readahead implementation.
+ */
+static inline
+struct page *readahead_page(struct address_space *mapping, loff_t pos)
+{
+	struct page *page = xa_load(&mapping->i_pages, pos / PAGE_SIZE);
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+
+	return page;
+}
+
 static inline unsigned long dir_pages(struct inode *inode)
 {
 	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
diff --git a/mm/readahead.c b/mm/readahead.c
index 5a6676640f20..6d65dae6dad0 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -121,7 +121,18 @@ static void read_pages(struct address_space *mapping, struct file *filp,
 
 	blk_start_plug(&plug);
 
-	if (mapping->a_ops->readpages) {
+	if (mapping->a_ops->readahead) {
+		unsigned left = mapping->a_ops->readahead(filp, mapping,
+				start, nr_pages);
+
+		while (left) {
+			struct page *page = readahead_page(mapping,
+				start + nr_pages - left);
+			unlock_page(page);
+			put_page(page);
+			left--;
+		}
+	} else if (mapping->a_ops->readpages) {
 		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
-- 
2.24.1



* [PATCH v2 6/9] iomap,xfs: Convert from readpages to readahead
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in XFS and iomap.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 72 +++++++++---------------------------------
 fs/iomap/trace.h       |  2 +-
 fs/xfs/xfs_aops.c      | 10 +++---
 include/linux/iomap.h  |  2 +-
 4 files changed, 22 insertions(+), 64 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 828444e14d09..a835b99f281f 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -216,7 +216,6 @@ struct iomap_readpage_ctx {
 	bool			cur_page_in_bio;
 	bool			is_readahead;
 	struct bio		*bio;
-	struct list_head	*pages;
 };
 
 static void
@@ -367,36 +366,8 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
 }
 EXPORT_SYMBOL_GPL(iomap_readpage);
 
-static struct page *
-iomap_next_page(struct inode *inode, struct list_head *pages, loff_t pos,
-		loff_t length, loff_t *done)
-{
-	while (!list_empty(pages)) {
-		struct page *page = lru_to_page(pages);
-
-		if (page_offset(page) >= (u64)pos + length)
-			break;
-
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, inode->i_mapping, page->index,
-				GFP_NOFS))
-			return page;
-
-		/*
-		 * If we already have a page in the page cache at index we are
-		 * done.  Upper layers don't care if it is uptodate after the
-		 * readpages call itself as every page gets checked again once
-		 * actually needed.
-		 */
-		*done += PAGE_SIZE;
-		put_page(page);
-	}
-
-	return NULL;
-}
-
 static loff_t
-iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
+iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 		void *data, struct iomap *iomap, struct iomap *srcmap)
 {
 	struct iomap_readpage_ctx *ctx = data;
@@ -410,10 +381,8 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
 			ctx->cur_page = NULL;
 		}
 		if (!ctx->cur_page) {
-			ctx->cur_page = iomap_next_page(inode, ctx->pages,
-					pos, length, &done);
-			if (!ctx->cur_page)
-				break;
+			ctx->cur_page = readahead_page(inode->i_mapping,
+					pos / PAGE_SIZE);
 			ctx->cur_page_in_bio = false;
 		}
 		ret = iomap_readpage_actor(inode, pos + done, length - done,
@@ -423,48 +392,37 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
 	return done;
 }
 
-int
-iomap_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned
+iomap_readahead(struct address_space *mapping, pgoff_t start,
 		unsigned nr_pages, const struct iomap_ops *ops)
 {
 	struct iomap_readpage_ctx ctx = {
-		.pages		= pages,
 		.is_readahead	= true,
 	};
-	loff_t pos = page_offset(list_entry(pages->prev, struct page, lru));
-	loff_t last = page_offset(list_entry(pages->next, struct page, lru));
-	loff_t length = last - pos + PAGE_SIZE, ret = 0;
+	loff_t pos = start * PAGE_SIZE;
+	loff_t length = nr_pages * PAGE_SIZE;
 
-	trace_iomap_readpages(mapping->host, nr_pages);
+	trace_iomap_readahead(mapping->host, nr_pages);
 
 	while (length > 0) {
-		ret = iomap_apply(mapping->host, pos, length, 0, ops,
-				&ctx, iomap_readpages_actor);
+		loff_t ret = iomap_apply(mapping->host, pos, length, 0, ops,
+				&ctx, iomap_readahead_actor);
 		if (ret <= 0) {
 			WARN_ON_ONCE(ret == 0);
-			goto done;
+			break;
 		}
 		pos += ret;
 		length -= ret;
 	}
-	ret = 0;
-done:
+
 	if (ctx.bio)
 		submit_bio(ctx.bio);
-	if (ctx.cur_page) {
-		if (!ctx.cur_page_in_bio)
-			unlock_page(ctx.cur_page);
+	if (ctx.cur_page && ctx.cur_page_in_bio)
 		put_page(ctx.cur_page);
-	}
 
-	/*
-	 * Check that we didn't lose a page due to the arcance calling
-	 * conventions..
-	 */
-	WARN_ON_ONCE(!ret && !list_empty(ctx.pages));
-	return ret;
+	return length / PAGE_SIZE;
 }
-EXPORT_SYMBOL_GPL(iomap_readpages);
+EXPORT_SYMBOL_GPL(iomap_readahead);
 
 /*
  * iomap_is_partially_uptodate checks whether blocks within a page are
diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
index 6dc227b8c47e..d6ba705f938a 100644
--- a/fs/iomap/trace.h
+++ b/fs/iomap/trace.h
@@ -39,7 +39,7 @@ DEFINE_EVENT(iomap_readpage_class, name,	\
 	TP_PROTO(struct inode *inode, int nr_pages), \
 	TP_ARGS(inode, nr_pages))
 DEFINE_READPAGE_EVENT(iomap_readpage);
-DEFINE_READPAGE_EVENT(iomap_readpages);
+DEFINE_READPAGE_EVENT(iomap_readahead);
 
 DECLARE_EVENT_CLASS(iomap_page_class,
 	TP_PROTO(struct inode *inode, struct page *page, unsigned long off,
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 3a688eb5c5ae..4d9da34e759b 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -621,14 +621,14 @@ xfs_vm_readpage(
 	return iomap_readpage(page, &xfs_read_iomap_ops);
 }
 
-STATIC int
-xfs_vm_readpages(
+STATIC unsigned
+xfs_vm_readahead(
 	struct file		*unused,
 	struct address_space	*mapping,
-	struct list_head	*pages,
+	pgoff_t			start,
 	unsigned		nr_pages)
 {
-	return iomap_readpages(mapping, pages, nr_pages, &xfs_read_iomap_ops);
+	return iomap_readahead(mapping, start, nr_pages, &xfs_read_iomap_ops);
 }
 
 static int
@@ -644,7 +644,7 @@ xfs_iomap_swapfile_activate(
 
 const struct address_space_operations xfs_address_space_operations = {
 	.readpage		= xfs_vm_readpage,
-	.readpages		= xfs_vm_readpages,
+	.readahead		= xfs_vm_readahead,
 	.writepage		= xfs_vm_writepage,
 	.writepages		= xfs_vm_writepages,
 	.set_page_dirty		= iomap_set_page_dirty,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 8b09463dae0d..81c6067e9b61 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -155,7 +155,7 @@ loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
 		const struct iomap_ops *ops);
 int iomap_readpage(struct page *page, const struct iomap_ops *ops);
-int iomap_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned iomap_readahead(struct address_space *, pgoff_t start,
 		unsigned nr_pages, const struct iomap_ops *ops);
 int iomap_set_page_dirty(struct page *page);
 int iomap_is_partially_uptodate(struct page *page, unsigned long from,
-- 
2.24.1



* [PATCH v2 7/9] cifs: Convert from readpages to readahead
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in CIFS.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/cifs/file.c | 143 +++++++++----------------------------------------
 1 file changed, 25 insertions(+), 118 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 043288b5c728..c9162380ad22 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -4280,70 +4280,11 @@ cifs_readpages_copy_into_pages(struct TCP_Server_Info *server,
 	return readpages_fill_pages(server, rdata, iter, iter->count);
 }
 
-static int
-readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
-		    unsigned int rsize, struct list_head *tmplist,
-		    unsigned int *nr_pages, loff_t *offset, unsigned int *bytes)
-{
-	struct page *page, *tpage;
-	unsigned int expected_index;
-	int rc;
-	gfp_t gfp = readahead_gfp_mask(mapping);
-
-	INIT_LIST_HEAD(tmplist);
-
-	page = lru_to_page(page_list);
-
-	/*
-	 * Lock the page and put it in the cache. Since no one else
-	 * should have access to this page, we're safe to simply set
-	 * PG_locked without checking it first.
-	 */
-	__SetPageLocked(page);
-	rc = add_to_page_cache_locked(page, mapping,
-				      page->index, gfp);
-
-	/* give up if we can't stick it in the cache */
-	if (rc) {
-		__ClearPageLocked(page);
-		return rc;
-	}
-
-	/* move first page to the tmplist */
-	*offset = (loff_t)page->index << PAGE_SHIFT;
-	*bytes = PAGE_SIZE;
-	*nr_pages = 1;
-	list_move_tail(&page->lru, tmplist);
-
-	/* now try and add more pages onto the request */
-	expected_index = page->index + 1;
-	list_for_each_entry_safe_reverse(page, tpage, page_list, lru) {
-		/* discontinuity ? */
-		if (page->index != expected_index)
-			break;
-
-		/* would this page push the read over the rsize? */
-		if (*bytes + PAGE_SIZE > rsize)
-			break;
-
-		__SetPageLocked(page);
-		if (add_to_page_cache_locked(page, mapping, page->index, gfp)) {
-			__ClearPageLocked(page);
-			break;
-		}
-		list_move_tail(&page->lru, tmplist);
-		(*bytes) += PAGE_SIZE;
-		expected_index++;
-		(*nr_pages)++;
-	}
-	return rc;
-}
-
-static int cifs_readpages(struct file *file, struct address_space *mapping,
-	struct list_head *page_list, unsigned num_pages)
+static unsigned cifs_readahead(struct file *file,
+		struct address_space *mapping, pgoff_t start,
+		unsigned num_pages)
 {
 	int rc;
-	struct list_head tmplist;
 	struct cifsFileInfo *open_file = file->private_data;
 	struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);
 	struct TCP_Server_Info *server;
@@ -4358,11 +4299,10 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	 * After this point, every page in the list might have PG_fscache set,
 	 * so we will need to clean that up off of every page we don't use.
 	 */
-	rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list,
-					 &num_pages);
+	rc = -ENOBUFS;
 	if (rc == 0) {
 		free_xid(xid);
-		return rc;
+		return num_pages;
 	}
 
 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
@@ -4373,24 +4313,11 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	rc = 0;
 	server = tlink_tcon(open_file->tlink)->ses->server;
 
-	cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",
-		 __func__, file, mapping, num_pages);
+	cifs_dbg(FYI, "%s: file=%p mapping=%p start=%lu num_pages=%u\n",
+		 __func__, file, mapping, start, num_pages);
 
-	/*
-	 * Start with the page at end of list and move it to private
-	 * list. Do the same with any following pages until we hit
-	 * the rsize limit, hit an index discontinuity, or run out of
-	 * pages. Issue the async read and then start the loop again
-	 * until the list is empty.
-	 *
-	 * Note that list order is important. The page_list is in
-	 * the order of declining indexes. When we put the pages in
-	 * the rdata->pages, then we want them in increasing order.
-	 */
-	while (!list_empty(page_list)) {
-		unsigned int i, nr_pages, bytes, rsize;
-		loff_t offset;
-		struct page *page, *tpage;
+	while (num_pages) {
+		unsigned int i, nr_pages, rsize;
 		struct cifs_readdata *rdata;
 		struct cifs_credits credits_on_stack;
 		struct cifs_credits *credits = &credits_on_stack;
@@ -4408,21 +4335,14 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		if (rc)
 			break;
 
+		nr_pages = min_t(unsigned, rsize / PAGE_SIZE, num_pages);
 		/*
 		 * Give up immediately if rsize is too small to read an entire
 		 * page. The VFS will fall back to readpage. We should never
-		 * reach this point however since we set ra_pages to 0 when the
-		 * rsize is smaller than a cache page.
+		 * reach this point however since we set ra_pages to 0 when
+		 * the rsize is smaller than a cache page.
 		 */
-		if (unlikely(rsize < PAGE_SIZE)) {
-			add_credits_and_wake_if(server, credits, 0);
-			free_xid(xid);
-			return 0;
-		}
-
-		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
-					 &nr_pages, &offset, &bytes);
-		if (rc) {
+		if (unlikely(nr_pages < 1)) {
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
@@ -4430,21 +4350,15 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);
 		if (!rdata) {
 			/* best to give up if we're out of mem */
-			list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-				list_del(&page->lru);
-				lru_cache_add_file(page);
-				unlock_page(page);
-				put_page(page);
-			}
-			rc = -ENOMEM;
 			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
 
 		rdata->cfile = cifsFileInfo_get(open_file);
 		rdata->mapping = mapping;
-		rdata->offset = offset;
-		rdata->bytes = bytes;
+	rdata->offset = (loff_t)start << PAGE_SHIFT;
+		rdata->nr_pages = nr_pages;
+		rdata->bytes = nr_pages * PAGE_SIZE;
 		rdata->pid = pid;
 		rdata->pagesz = PAGE_SIZE;
 		rdata->tailsz = PAGE_SIZE;
@@ -4452,10 +4366,8 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		rdata->copy_into_pages = cifs_readpages_copy_into_pages;
 		rdata->credits = credits_on_stack;
 
-		list_for_each_entry_safe(page, tpage, &tmplist, lru) {
-			list_del(&page->lru);
-			rdata->pages[rdata->nr_pages++] = page;
-		}
+		for (i = 0; i < nr_pages; i++)
+			rdata->pages[i] = readahead_page(mapping, start++);
 
 		rc = adjust_credits(server, &rdata->credits, rdata->bytes);
 
@@ -4468,27 +4380,22 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 
 		if (rc) {
 			add_credits_and_wake_if(server, &rdata->credits, 0);
-			for (i = 0; i < rdata->nr_pages; i++) {
-				page = rdata->pages[i];
-				lru_cache_add_file(page);
-				unlock_page(page);
-				put_page(page);
-			}
-			/* Fallback to the readpage in error/reconnect cases */
 			kref_put(&rdata->refcount, cifs_readdata_release);
 			break;
 		}
 
 		kref_put(&rdata->refcount, cifs_readdata_release);
+		num_pages -= nr_pages;
 	}
 
 	/* Any pages that have been shown to fscache but didn't get added to
 	 * the pagecache must be uncached before they get returned to the
 	 * allocator.
 	 */
-	cifs_fscache_readpages_cancel(mapping->host, page_list);
+//	cifs_fscache_readpages_cancel(mapping->host, page_list);
 	free_xid(xid);
-	return rc;
+
+	return num_pages;
 }
 
 /*
@@ -4806,7 +4713,7 @@ cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter)
 
 const struct address_space_operations cifs_addr_ops = {
 	.readpage = cifs_readpage,
-	.readpages = cifs_readpages,
+	.readahead = cifs_readahead,
 	.writepage = cifs_writepage,
 	.writepages = cifs_writepages,
 	.write_begin = cifs_write_begin,
@@ -4819,9 +4726,9 @@ const struct address_space_operations cifs_addr_ops = {
 };
 
 /*
- * cifs_readpages requires the server to support a buffer large enough to
+ * cifs_readahead requires the server to support a buffer large enough to
  * contain the header plus one complete page of data.  Otherwise, we need
- * to leave cifs_readpages out of the address space operations.
+ * to leave cifs_readahead out of the address space operations.
  */
 const struct address_space_operations cifs_addr_ops_smallbuf = {
 	.readpage = cifs_readpage,
-- 
2.24.1



* [PATCH v2 8/9] mm: Remove add_to_page_cache_locked
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

The only remaining caller is add_to_page_cache(), and the only
caller of that is hugetlbfs, so move add_to_page_cache() into
filemap.c and call __add_to_page_cache_locked() directly.
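
For the record, the surviving call is the hugetlbfs one, which from
memory (treat as a sketch) boils down to this line in mm/hugetlb.c:

	err = add_to_page_cache(page, mapping, idx, GFP_KERNEL);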

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 20 ++------------------
 mm/filemap.c            | 23 ++++++++---------------
 2 files changed, 10 insertions(+), 33 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 0821f584c43c..75075065dd0b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -604,8 +604,8 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 	return 0;
 }
 
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-				pgoff_t index, gfp_t gfp_mask);
+int add_to_page_cache(struct page *page, struct address_space *mapping,
+				pgoff_t index, gfp_t gfp);
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 				pgoff_t index, gfp_t gfp_mask);
 extern void delete_from_page_cache(struct page *page);
@@ -614,22 +614,6 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
 void delete_from_page_cache_batch(struct address_space *mapping,
 				  struct pagevec *pvec);
 
-/*
- * Like add_to_page_cache_locked, but used to add newly allocated pages:
- * the page is new, so we can just run __SetPageLocked() against it.
- */
-static inline int add_to_page_cache(struct page *page,
-		struct address_space *mapping, pgoff_t offset, gfp_t gfp_mask)
-{
-	int error;
-
-	__SetPageLocked(page);
-	error = add_to_page_cache_locked(page, mapping, offset, gfp_mask);
-	if (unlikely(error))
-		__ClearPageLocked(page);
-	return error;
-}
-
 /*
  * Only call this from a ->readahead implementation.
  */
diff --git a/mm/filemap.c b/mm/filemap.c
index bf6aa30be58d..fb87f5fa75e6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -913,25 +913,18 @@ static int __add_to_page_cache_locked(struct page *page,
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
 
-/**
- * add_to_page_cache_locked - add a locked page to the pagecache
- * @page:	page to add
- * @mapping:	the page's address_space
- * @offset:	page index
- * @gfp_mask:	page allocation mode
- *
- * This function is used to add a page to the pagecache. It must be locked.
- * This function does not add the page to the LRU.  The caller must do that.
- *
- * Return: %0 on success, negative error code otherwise.
- */
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
+int add_to_page_cache(struct page *page, struct address_space *mapping,
 		pgoff_t offset, gfp_t gfp_mask)
 {
-	return __add_to_page_cache_locked(page, mapping, offset,
+	int err;
+
+	__SetPageLocked(page);
+	err = __add_to_page_cache_locked(page, mapping, offset,
 					  gfp_mask, NULL);
+	if (unlikely(err))
+		__ClearPageLocked(page);
+	return err;
 }
-EXPORT_SYMBOL(add_to_page_cache_locked);
 
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 				pgoff_t offset, gfp_t gfp_mask)
-- 
2.24.1



* [PATCH v2 9/9] mm: Unify all add_to_page_cache variants
From: Matthew Wilcox @ 2020-01-15  2:38 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Matthew Wilcox (Oracle), Jeff Layton, Christoph Hellwig, Chris Mason

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

We already have various bits of add_to_page_cache() executed conditionally
on !PageHuge(page); add the add_to_page_cache_lru() pieces as some
more code which isn't executed for huge pages.  This lets us remove
the old add_to_page_cache() and rename __add_to_page_cache_locked() to
add_to_page_cache().  Include a compatibility define so we don't have
to change all 20+ callers of add_to_page_cache_lru().
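
In other words, a caller like this one (taken from read_pages() as it
looked before patch 4) keeps compiling unchanged, with the define
expanding it to the unified add_to_page_cache():

	if (!add_to_page_cache_lru(page, mapping, page->index, gfp))
		mapping->a_ops->readpage(filp, page);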

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h |  5 ++--
 mm/filemap.c            | 65 ++++++++++++-----------------------------
 2 files changed, 21 insertions(+), 49 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 75075065dd0b..637770fa283f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -606,14 +606,15 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 
 int add_to_page_cache(struct page *page, struct address_space *mapping,
 				pgoff_t index, gfp_t gfp);
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-				pgoff_t index, gfp_t gfp_mask);
 extern void delete_from_page_cache(struct page *page);
 extern void __delete_from_page_cache(struct page *page, void *shadow);
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
 void delete_from_page_cache_batch(struct address_space *mapping,
 				  struct pagevec *pvec);
 
+#define add_to_page_cache_lru(page, mapping, index, gfp) \
+	add_to_page_cache(page, mapping, index, gfp)
+
 /*
  * Only call this from a ->readahead implementation.
  */
diff --git a/mm/filemap.c b/mm/filemap.c
index fb87f5fa75e6..83f45f31a00a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -847,19 +847,18 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
-static int __add_to_page_cache_locked(struct page *page,
-				      struct address_space *mapping,
-				      pgoff_t offset, gfp_t gfp_mask,
-				      void **shadowp)
+int add_to_page_cache(struct page *page, struct address_space *mapping,
+		pgoff_t offset, gfp_t gfp_mask)
 {
 	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
-	void *old;
+	void *old, *shadow = NULL;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	__SetPageLocked(page);
 	mapping_set_update(&xas, mapping);
 
 	if (!huge) {
@@ -884,8 +883,7 @@ static int __add_to_page_cache_locked(struct page *page,
 
 		if (xa_is_value(old)) {
 			mapping->nrexceptional--;
-			if (shadowp)
-				*shadowp = old;
+			shadow = old;
 		}
 		mapping->nrpages++;
 
@@ -899,45 +897,8 @@ static int __add_to_page_cache_locked(struct page *page,
 	if (xas_error(&xas))
 		goto error;
 
-	if (!huge)
+	if (!huge) {
 		mem_cgroup_commit_charge(page, memcg, false, false);
-	trace_mm_filemap_add_to_page_cache(page);
-	return 0;
-error:
-	page->mapping = NULL;
-	/* Leave page->index set: truncation relies upon it */
-	if (!huge)
-		mem_cgroup_cancel_charge(page, memcg, false);
-	put_page(page);
-	return xas_error(&xas);
-}
-ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
-
-int add_to_page_cache(struct page *page, struct address_space *mapping,
-		pgoff_t offset, gfp_t gfp_mask)
-{
-	int err;
-
-	__SetPageLocked(page);
-	err = __add_to_page_cache_locked(page, mapping, offset,
-					  gfp_mask, NULL);
-	if (unlikely(err))
-		__ClearPageLocked(page);
-	return err;
-}
-
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-				pgoff_t offset, gfp_t gfp_mask)
-{
-	void *shadow = NULL;
-	int ret;
-
-	__SetPageLocked(page);
-	ret = __add_to_page_cache_locked(page, mapping, offset,
-					 gfp_mask, &shadow);
-	if (unlikely(ret))
-		__ClearPageLocked(page);
-	else {
 		/*
 		 * The page might have been evicted from cache only
 		 * recently, in which case it should be activated like
@@ -951,9 +912,19 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 			workingset_refault(page, shadow);
 		lru_cache_add(page);
 	}
-	return ret;
+	trace_mm_filemap_add_to_page_cache(page);
+	return 0;
+error:
+	page->mapping = NULL;
+	/* Leave page->index set: truncation relies upon it */
+	if (!huge)
+		mem_cgroup_cancel_charge(page, memcg, false);
+	put_page(page);
+	__ClearPageLocked(page);
+	return xas_error(&xas);
 }
-EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
+ALLOW_ERROR_INJECTION(add_to_page_cache, ERRNO);
+EXPORT_SYMBOL_GPL(add_to_page_cache);
 
 #ifdef CONFIG_NUMA
 struct page *__page_cache_alloc(gfp_t gfp)
-- 
2.24.1



* Re: [PATCH v2 6/9] iomap,xfs: Convert from readpages to readahead
From: Christoph Hellwig @ 2020-01-15  7:16 UTC
  To: Matthew Wilcox
  Cc: linux-xfs, linux-fsdevel, linux-mm, Jeff Layton,
	Christoph Hellwig, Chris Mason

On Tue, Jan 14, 2020 at 06:38:40PM -0800, Matthew Wilcox wrote:
>  static loff_t
> +iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
>  		void *data, struct iomap *iomap, struct iomap *srcmap)
>  {
>  	struct iomap_readpage_ctx *ctx = data;
> @@ -410,10 +381,8 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
>  			ctx->cur_page = NULL;
>  		}
>  		if (!ctx->cur_page) {
> -			ctx->cur_page = iomap_next_page(inode, ctx->pages,
> -					pos, length, &done);
> -			if (!ctx->cur_page)
> -				break;
> +			ctx->cur_page = readahead_page(inode->i_mapping,
> +					pos / PAGE_SIZE);

Don't we at least need a sanity check for a NULL cur_page here?
Also the readahead_page version in your previous patch seems to expect
a byte offset, so the division above would not be required. (and should
probably be replaced with a right shift anyway no matter where it ends
up)

> +unsigned
> +iomap_readahead(struct address_space *mapping, pgoff_t start,
>  		unsigned nr_pages, const struct iomap_ops *ops)
>  {
>  	struct iomap_readpage_ctx ctx = {
> -		.pages		= pages,
>  		.is_readahead	= true,
>  	};
> -	loff_t pos = page_offset(list_entry(pages->prev, struct page, lru));
> -	loff_t last = page_offset(list_entry(pages->next, struct page, lru));
> -	loff_t length = last - pos + PAGE_SIZE, ret = 0;
> +	loff_t pos = start * PAGE_SIZE;
> +	loff_t length = nr_pages * PAGE_SIZE;

Any good reason not to pass byte offsets for start and length?

> +	return length / PAGE_SIZE;

Same for the return value?

For the file systems that would usually be a more natural interface than
a page index and number of pages.


* Re: [PATCH v2 9/9] mm: Unify all add_to_page_cache variants
From: Christoph Hellwig @ 2020-01-15  7:20 UTC
  To: Matthew Wilcox
  Cc: linux-xfs, linux-fsdevel, linux-mm, Jeff Layton,
	Christoph Hellwig, Chris Mason

On Tue, Jan 14, 2020 at 06:38:43PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> We already have various bits of add_to_page_cache() executed conditionally
> on !PageHuge(page); add the add_to_page_cache_lru() pieces as some
> more code which isn't executed for huge pages.  This lets us remove
> the old add_to_page_cache() and rename __add_to_page_cache_locked() to
> add_to_page_cache().  Include a compatibility define so we don't have
> to change all 20+ callers of add_to_page_cache_lru().

I'd rather change them.  20ish isn't that much after all, and not
keeping pointless aliases around keeps the code easier to read.


* Re: [PATCH v2 6/9] iomap,xfs: Convert from readpages to readahead
From: Matthew Wilcox @ 2020-01-15  7:42 UTC
  To: Christoph Hellwig
  Cc: linux-xfs, linux-fsdevel, linux-mm, Jeff Layton, Chris Mason

On Tue, Jan 14, 2020 at 11:16:28PM -0800, Christoph Hellwig wrote:
> On Tue, Jan 14, 2020 at 06:38:40PM -0800, Matthew Wilcox wrote:
> >  static loff_t
> > +iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
> >  		void *data, struct iomap *iomap, struct iomap *srcmap)
> >  {
> >  	struct iomap_readpage_ctx *ctx = data;
> > @@ -410,10 +381,8 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
> >  			ctx->cur_page = NULL;
> >  		}
> >  		if (!ctx->cur_page) {
> > -			ctx->cur_page = iomap_next_page(inode, ctx->pages,
> > -					pos, length, &done);
> > -			if (!ctx->cur_page)
> > -				break;
> > +			ctx->cur_page = readahead_page(inode->i_mapping,
> > +					pos / PAGE_SIZE);
> 
> Don't we at least need a sanity check for a NULL cur_page here?

I don't think so.  The caller has already put the locked page into the
page cache at that index.  If the page has gone away, that's a bug, and
I don't think BUG_ON is all that much better than a NULL pointer dereference.
Indeed, readahead_page() checks PageLocked, so it can't return NULL.

> Also the readahead_page version in your previous patch seems to expect
> a byte offset, so the division above would not be required.

Oops.  I had intended to make readahead_page() look like this:

struct page *readahead_page(struct address_space *mapping, pgoff_t index)
{
        struct page *page = xa_load(&mapping->i_pages, index);
        VM_BUG_ON_PAGE(!PageLocked(page), page);

        return page;
}

If only our tools could warn about these kinds of mistakes.

> (and should
> probably be replaced with a right shift anyway no matter where it ends
> up)

If the compiler can't tell that x / 4096 and x >> 12 are precisely the same
and choose the more efficient of the two, we have big problems.

> > +unsigned
> > +iomap_readahead(struct address_space *mapping, pgoff_t start,
> >  		unsigned nr_pages, const struct iomap_ops *ops)
> >  {
> >  	struct iomap_readpage_ctx ctx = {
> > -		.pages		= pages,
> >  		.is_readahead	= true,
> >  	};
> > -	loff_t pos = page_offset(list_entry(pages->prev, struct page, lru));
> > -	loff_t last = page_offset(list_entry(pages->next, struct page, lru));
> > -	loff_t length = last - pos + PAGE_SIZE, ret = 0;
> > +	loff_t pos = start * PAGE_SIZE;
> > +	loff_t length = nr_pages * PAGE_SIZE;
> 
> Any good reason not to pass byte offsets for start and length?
> 
> > +	return length / PAGE_SIZE;
> 
> Same for the return value?
> 
> For the file systems that would usually be a more natural interface than
> a page index and number of pages.

That seems to depend on the filesystem.  iomap definitely would be happier
with loff_t, but cifs prefers pgoff_t.  I should probably survey a few
more filesystems and see if there's a strong lean in one direction or
the other.



* Re: [PATCH v2 9/9] mm: Unify all add_to_page_cache variants
From: Matthew Wilcox @ 2020-01-15  7:44 UTC
  To: Christoph Hellwig
  Cc: linux-xfs, linux-fsdevel, linux-mm, Jeff Layton, Chris Mason

On Tue, Jan 14, 2020 at 11:20:04PM -0800, Christoph Hellwig wrote:
> On Tue, Jan 14, 2020 at 06:38:43PM -0800, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> > 
> > We already have various bits of add_to_page_cache() executed conditionally
> > on !PageHuge(page); add the add_to_page_cache_lru() pieces as some
> > more code which isn't executed for huge pages.  This lets us remove
> > the old add_to_page_cache() and rename __add_to_page_cache_locked() to
> > add_to_page_cache().  Include a compatibility define so we don't have
> > to change all 20+ callers of add_to_page_cache_lru().
> 
> I'd rather change them.  20ish isn't that much after all, and not
> keeping pointless aliases around keeps the code easier to read.

Almost all of them are called in the ->readpages() function, so they'll
go away as filesystems are converted to ->readahead().  I'd rather not
introduce something that makes patches harder to reorder.


* Re: [RFC v2 0/9] Replacing the readpages a_op
From: Matthew Wilcox @ 2020-01-18 23:13 UTC
  To: linux-xfs, linux-fsdevel, linux-mm
  Cc: Jeff Layton, Christoph Hellwig, Chris Mason

On Tue, Jan 14, 2020 at 06:38:34PM -0800, Matthew Wilcox wrote:
> This is an attempt to add a ->readahead op to replace ->readpages.  I've
> converted two users, iomap/xfs and cifs.  The cifs conversion is lacking
> fscache support, and that's just because I didn't want to do that work;
> I don't believe there's anything fundamental to it.  But I wanted to do
> iomap because it is The Infrastructure Of The Future and cifs because it
> is the sole remaining user of add_to_page_cache_locked(), which enables
> the last two patches in the series.  By the way, that gives CIFS access
> to the workingset shadow infrastructure, which it had to ignore before
> because it couldn't put pages onto the lru list at the right time.
> 
> v2: Chris asked me to show what this would look like if we just have
> the implementation look up the pages in the page cache, and I managed
> to figure out some things I'd done wrong last time.  It's even simpler
> than v1 (net 104 lines deleted).

I want to discuss whether to change the page refcount guarantees while we're
changing the API.  Currently,

page is allocated with a refcount of 1
page is locked, and inserted into page cache and refcount is bumped to 2
->readahead is called
callee is supposed to call put_page() after submitting I/O
I/O completion will unlock the page, leaving the refcount at 1.

So, what if we leave the refcount at 1 throughout the submission process,
saving ourselves two atomic ops per page?  We have to ensure that after
the page is submitted for I/O, the submission path no longer touches
the page.  So the process of converting a filesystem to ->readahead
becomes slightly more complex, but there's a bigger win as a result.
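
In code form, the current dance looks like this (a sketch matching the
list above, not a proposal):

	page = __page_cache_alloc(gfp);			/* refcount 1 */
	add_to_page_cache_lru(page, mapping, index, gfp); /* 2, locked */
	/* ->readahead() submits I/O, then drops its reference: */
	put_page(page);					/* back to 1 */
	/* I/O completion unlocks; the page cache keeps its reference */

Leaving the refcount at 1 throughout would remove the hidden get in
the cache insertion and the matching put in the filesystem.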

Opinions?

