linux-fsdevel.vger.kernel.org archive mirror
* [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
@ 2019-07-30  1:17 Darrick J. Wong
  2019-07-30  1:17 ` [PATCH 1/6] list.h: add list_pop and list_pop_entry helpers Darrick J. Wong
                   ` (7 more replies)
  0 siblings, 8 replies; 23+ messages in thread
From: Darrick J. Wong @ 2019-07-30  1:17 UTC (permalink / raw)
  To: hch, darrick.wong; +Cc: linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

Hi all,

From Christoph:

This series cleans up the xfs writepage code and then lifts it to
fs/iomap/ so that it can be used by other file systems.  I've been
wanting to do this for a while so that I could eventually convert gfs2
over to it, but I never got to it.  Now Damien has a new zonefs file
system for semi-raw access to zoned block devices that would like to
use the iomap code instead of reinventing it, so I finally had to do
the work.

From Darrick:

For v4, I've split the series into smaller pieces.  This first part
builds out the new iomap writeback infrastructure needed for gfs2 and
zonefs.  The second part refactors some of XFS's writeback code to use
the new helpers introduced in the first part, and the third part
converts XFS to use the iomap writeback code.

Changes since v2:
 - rebased to v5.3-rc1
 - folded in a few changes from the gfs2 enablement series

Changes since v1:
 - rebased to the latest xfs for-next tree
 - keep the preallocated transactions for size updates
 - rename list_pop to list_pop_entry and related cleanups
 - better document the nofs context handling
 - document that the iomap tracepoints are not a stable API

If you're going to start using this mess, you probably ought to just
pull from my git trees, which are linked below.

This has been lightly tested with fstests.  Enjoy!
Comments and questions are, as always, welcome.

--D

kernel git tree:
https://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git/log/?h=iomap-writeback

* [PATCH 1/6] list.h: add list_pop and list_pop_entry helpers
  2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
@ 2019-07-30  1:17 ` Darrick J. Wong
  2019-07-30  1:17 ` [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c Darrick J. Wong
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2019-07-30  1:17 UTC (permalink / raw)
  To: hch, darrick.wong
  Cc: Damien.LeMoal, agruenba, Matthew Wilcox (Oracle),
	linux-xfs, linux-fsdevel, Christoph Hellwig

From: Christoph Hellwig <hch@lst.de>

We have a very common pattern where we want to delete the first entry
from a list and return it as the properly typed container structure.

Add two helpers to implement this behavior.

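For illustration, here is a minimal usage sketch (the pending_work
structure and drain_pending() are made up for the example; only
list_pop_entry() itself comes from this patch):

#include <linux/list.h>
#include <linux/slab.h>

struct pending_work {
	struct list_head	list;
	int			id;
};

static void drain_pending(struct list_head *head)
{
	struct pending_work *work;

	/*
	 * list_pop_entry() unlinks the first entry and returns NULL once
	 * the list is empty, so this loop drains the whole list without a
	 * separate list_empty() check.
	 */
	while ((work = list_pop_entry(head, struct pending_work, list)))
		kfree(work);
}
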
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 include/linux/list.h |   33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)


diff --git a/include/linux/list.h b/include/linux/list.h
index 85c92555e31f..d3b00267446a 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -514,6 +514,39 @@ static inline void list_splice_tail_init(struct list_head *list,
 	pos__ != head__ ? list_entry(pos__, type, member) : NULL; \
 })
 
+/**
+ * list_pop - delete the first entry from a list and return it
+ * @list:	the list to take the element from.
+ *
+ * Return the list entry after @list.  If @list is empty return NULL.
+ */
+static inline struct list_head *list_pop(struct list_head *list)
+{
+	struct list_head *pos = READ_ONCE(list->next);
+
+	if (pos == list)
+		return NULL;
+	list_del(pos);
+	return pos;
+}
+
+/**
+ * list_pop_entry - delete the first entry from a list and return the
+ *			containing structure
+ * @list:	the list to take the element from.
+ * @type:	the type of the struct this is embedded in.
+ * @member:	the name of the list_head within the struct.
+ *
+ * Return the containing structure for the list entry after @list.  If @list
+ * is empty return NULL.
+ */
+#define list_pop_entry(list, type, member)			\
+({								\
+	struct list_head *pos__ = list_pop(list);		\
+								\
+	pos__ ? list_entry(pos__, type, member) : NULL;		\
+})
+
 /**
  * list_next_entry - get the next element in list
  * @pos:	the type * to cursor


* [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c
  2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
  2019-07-30  1:17 ` [PATCH 1/6] list.h: add list_pop and list_pop_entry helpers Darrick J. Wong
@ 2019-07-30  1:17 ` Darrick J. Wong
  2019-08-04 14:59   ` Andreas Gruenbacher
  2019-08-05 12:31   ` Andreas Gruenbacher
  2019-07-30  1:17 ` [PATCH 3/6] iomap: add tracing for the address space operations Darrick J. Wong
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 23+ messages in thread
From: Darrick J. Wong @ 2019-07-30  1:17 UTC (permalink / raw)
  To: hch, darrick.wong
  Cc: linux-xfs, linux-fsdevel, Damien.LeMoal, Christoph Hellwig, agruenba

From: Christoph Hellwig <hch@lst.de>

Takes the xfs writeback code and copies it to iomap.c.  A new structure
with three methods is added as the abstraction from the generic
writeback code to the file system.  These methods are used to map
blocks, submit an ioend, and cancel a page that encountered an error
before it was added to an ioend.

Note that we temporarily lose the writepage tracing, but that will
be added back soon.

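As a rough sketch of how a file system would hook into this (the myfs_*
names are invented for illustration; the ops structure and entry points
are the ones added by this patch):

static int
myfs_map_blocks(struct iomap_writepage_ctx *wpc, struct inode *inode,
		loff_t offset)
{
	/* fill wpc->iomap with the extent that covers @offset */
	return 0;
}

static const struct iomap_writeback_ops myfs_writeback_ops = {
	.map_blocks	= myfs_map_blocks,
	/* .submit_ioend and .discard_page are optional */
};

static int
myfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
{
	struct iomap_writepage_ctx wpc = { };

	return iomap_writepages(mapping, wbc, &wpc, &myfs_writeback_ops);
}
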
Signed-off-by: Christoph Hellwig <hch@lst.de>
[darrick: create the new iomap code, we'll delete the xfs code separately]
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/iomap/buffered-io.c |  548 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/iomap.h  |   43 ++++
 2 files changed, 590 insertions(+), 1 deletion(-)


diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index e25901ae3ff4..ff1f7d2b4d7a 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (C) 2010 Red Hat, Inc.
- * Copyright (c) 2016-2018 Christoph Hellwig.
+ * Copyright (c) 2016-2019 Christoph Hellwig.
  */
 #include <linux/module.h>
 #include <linux/compiler.h>
@@ -12,6 +12,7 @@
 #include <linux/buffer_head.h>
 #include <linux/dax.h>
 #include <linux/writeback.h>
+#include <linux/list_sort.h>
 #include <linux/swap.h>
 #include <linux/bio.h>
 #include <linux/sched/signal.h>
@@ -19,6 +20,8 @@
 
 #include "../internal.h"
 
+static struct bio_set iomap_ioend_bioset;
+
 static struct iomap_page *
 iomap_page_create(struct inode *inode, struct page *page)
 {
@@ -1071,3 +1074,546 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
 	return block_page_mkwrite_return(ret);
 }
 EXPORT_SYMBOL_GPL(iomap_page_mkwrite);
+
+static void
+iomap_finish_page_writeback(struct inode *inode, struct bio_vec *bvec,
+		int error)
+{
+	struct iomap_page *iop = to_iomap_page(bvec->bv_page);
+
+	if (error) {
+		SetPageError(bvec->bv_page);
+		mapping_set_error(inode->i_mapping, -EIO);
+	}
+
+	WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
+	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) <= 0);
+
+	if (!iop || atomic_dec_and_test(&iop->write_count))
+		end_page_writeback(bvec->bv_page);
+}
+
+/*
+ * We're now finished for good with this ioend structure.  Update the page
+ * state, release holds on bios, and finally free up memory.  Do not use the
+ * ioend after this.
+ */
+static void
+iomap_finish_ioend(struct iomap_ioend *ioend, int error)
+{
+	struct inode *inode = ioend->io_inode;
+	struct bio *bio = &ioend->io_inline_bio;
+	struct bio *last = ioend->io_bio, *next;
+	u64 start = bio->bi_iter.bi_sector;
+	bool quiet = bio_flagged(bio, BIO_QUIET);
+
+	for (bio = &ioend->io_inline_bio; bio; bio = next) {
+		struct bio_vec	*bvec;
+		struct bvec_iter_all iter_all;
+
+		/*
+		 * For the last bio, bi_private points to the ioend, so we
+		 * need to explicitly end the iteration here.
+		 */
+		if (bio == last)
+			next = NULL;
+		else
+			next = bio->bi_private;
+
+		/* walk each page on bio, ending page IO on them */
+		bio_for_each_segment_all(bvec, bio, iter_all)
+			iomap_finish_page_writeback(inode, bvec, error);
+		bio_put(bio);
+	}
+
+	if (unlikely(error && !quiet)) {
+		printk_ratelimited(KERN_ERR
+			"%s: writeback error on sector %llu",
+			inode->i_sb->s_id, start);
+	}
+}
+
+void
+iomap_finish_ioends(struct iomap_ioend *ioend, int error)
+{
+	struct list_head tmp;
+
+	list_replace_init(&ioend->io_list, &tmp);
+	iomap_finish_ioend(ioend, error);
+	while ((ioend = list_pop_entry(&tmp, struct iomap_ioend, io_list)))
+		iomap_finish_ioend(ioend, error);
+}
+EXPORT_SYMBOL_GPL(iomap_finish_ioends);
+
+/*
+ * We can merge two adjacent ioends if they have the same set of work to do.
+ */
+static bool
+iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next)
+{
+	if (ioend->io_bio->bi_status != next->io_bio->bi_status)
+		return false;
+	if ((ioend->io_flags & IOMAP_F_SHARED) ^
+	    (next->io_flags & IOMAP_F_SHARED))
+		return false;
+	if ((ioend->io_type == IOMAP_UNWRITTEN) ^
+	    (next->io_type == IOMAP_UNWRITTEN))
+		return false;
+	if (ioend->io_offset + ioend->io_size != next->io_offset)
+		return false;
+	return true;
+}
+
+void
+iomap_ioend_try_merge(struct iomap_ioend *ioend, struct list_head *more_ioends,
+		void (*merge_private)(struct iomap_ioend *ioend,
+				struct iomap_ioend *next))
+{
+	struct iomap_ioend *next;
+
+	INIT_LIST_HEAD(&ioend->io_list);
+
+	while ((next = list_first_entry_or_null(more_ioends, struct iomap_ioend,
+			io_list))) {
+		if (!iomap_ioend_can_merge(ioend, next))
+			break;
+		list_move_tail(&next->io_list, &ioend->io_list);
+		ioend->io_size += next->io_size;
+		if (next->io_private && merge_private)
+			merge_private(ioend, next);
+	}
+}
+EXPORT_SYMBOL_GPL(iomap_ioend_try_merge);
+
+static int
+iomap_ioend_compare(void *priv, struct list_head *a, struct list_head *b)
+{
+	struct iomap_ioend *ia, *ib;
+
+	ia = container_of(a, struct iomap_ioend, io_list);
+	ib = container_of(b, struct iomap_ioend, io_list);
+	if (ia->io_offset < ib->io_offset)
+		return -1;
+	else if (ia->io_offset > ib->io_offset)
+		return 1;
+	return 0;
+}
+
+void
+iomap_sort_ioends(struct list_head *ioend_list)
+{
+	list_sort(NULL, ioend_list, iomap_ioend_compare);
+}
+EXPORT_SYMBOL_GPL(iomap_sort_ioends);
+
+static void iomap_writepage_end_bio(struct bio *bio)
+{
+	struct iomap_ioend *ioend = bio->bi_private;
+
+	iomap_finish_ioend(ioend, blk_status_to_errno(bio->bi_status));
+}
+
+/*
+ * Submit the bio for an ioend. We are passed an ioend with a bio attached to
+ * it, and we submit that bio. The ioend may be used for multiple bio
+ * submissions, so we only want to allocate an append transaction for the ioend
+ * once. In the case of multiple bio submission, each bio will take an IO
+ * reference to the ioend to ensure that the ioend completion is only done once
+ * all bios have been submitted and the ioend is really done.
+ *
+ * If @error is non-zero, it means that we have a situation where some part of
+ * the submission process has failed after we have marked pages for writeback
+ * and unlocked them. In this situation, we need to fail the bio and ioend
+ * rather than submit it to IO. This typically only happens on a filesystem
+ * shutdown.
+ */
+static int
+iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
+		int error)
+{
+	ioend->io_bio->bi_private = ioend;
+	ioend->io_bio->bi_end_io = iomap_writepage_end_bio;
+
+	/*
+	 * File systems can perform actions at submit time and/or override
+	 * the end_io handler here for complex operations like copy on write
+	 * extent manipulation or unwritten extent conversions.
+	 */
+	if (wpc->ops->submit_ioend)
+		error = wpc->ops->submit_ioend(ioend, error);
+	if (error) {
+		/*
+		 * If we are failing the IO now, just mark the ioend with an
+		 * error and finish it.  This will run IO completion immediately
+		 * as there is only one reference to the ioend at this point in
+		 * time.
+		 */
+		ioend->io_bio->bi_status = errno_to_blk_status(error);
+		bio_endio(ioend->io_bio);
+		return error;
+	}
+
+	submit_bio(ioend->io_bio);
+	return 0;
+}
+
+static struct iomap_ioend *
+iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
+		loff_t offset, sector_t sector, struct writeback_control *wbc)
+{
+	struct iomap_ioend *ioend;
+	struct bio *bio;
+
+	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, &iomap_ioend_bioset);
+	bio_set_dev(bio, wpc->iomap.bdev);
+	bio->bi_iter.bi_sector = sector;
+	bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
+	bio->bi_write_hint = inode->i_write_hint;
+	wbc_init_bio(wbc, bio);
+
+	ioend = container_of(bio, struct iomap_ioend, io_inline_bio);
+	INIT_LIST_HEAD(&ioend->io_list);
+	ioend->io_type = wpc->iomap.type;
+	ioend->io_flags = wpc->iomap.flags;
+	ioend->io_inode = inode;
+	ioend->io_size = 0;
+	ioend->io_offset = offset;
+	ioend->io_private = NULL;
+	ioend->io_bio = bio;
+	return ioend;
+}
+
+/*
+ * Allocate a new bio, and chain the old bio to the new one.
+ *
+ * Note that we have to perform the chaining in this unintuitive order
+ * so that the bi_private linkage is set up in the right direction for the
+ * traversal in iomap_finish_ioend().
+ */
+static struct bio *
+iomap_chain_bio(struct bio *prev)
+{
+	struct bio *new;
+
+	new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
+	bio_copy_dev(new, prev);/* also copies over blkcg information */
+	new->bi_iter.bi_sector = bio_end_sector(prev);
+	new->bi_opf = prev->bi_opf;
+	new->bi_write_hint = prev->bi_write_hint;
+
+	bio_chain(prev, new);
+	bio_get(prev);		/* for iomap_finish_ioend */
+	submit_bio(prev);
+	return new;
+}
+
+/*
+ * Test to see if we have an existing ioend structure that we could append to
+ * first, otherwise finish off the current ioend and start another.
+ */
+static void
+iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
+		struct iomap_page *iop, struct iomap_writepage_ctx *wpc,
+		struct writeback_control *wbc, struct list_head *iolist)
+{
+	sector_t sector = iomap_sector(&wpc->iomap, offset);
+	unsigned len = i_blocksize(inode);
+	unsigned poff = offset & (PAGE_SIZE - 1);
+	bool merged, same_page = false;
+
+	if (!wpc->ioend ||
+	    (wpc->iomap.flags & IOMAP_F_SHARED) !=
+	    (wpc->ioend->io_flags & IOMAP_F_SHARED) ||
+	    wpc->iomap.type != wpc->ioend->io_type ||
+	    sector != bio_end_sector(wpc->ioend->io_bio) ||
+	    offset != wpc->ioend->io_offset + wpc->ioend->io_size) {
+		if (wpc->ioend)
+			list_add(&wpc->ioend->io_list, iolist);
+		wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
+	}
+
+	merged = __bio_try_merge_page(wpc->ioend->io_bio, page, len, poff,
+			&same_page);
+
+	if (iop && !same_page)
+		atomic_inc(&iop->write_count);
+	if (!merged) {
+		if (bio_full(wpc->ioend->io_bio, len)) {
+			wpc->ioend->io_bio =
+				iomap_chain_bio(wpc->ioend->io_bio);
+		}
+		bio_add_page(wpc->ioend->io_bio, page, len, poff);
+	}
+
+	wpc->ioend->io_size += len;
+	wbc_account_cgroup_owner(wbc, page, len);
+}
+
+/*
+ * We implement an immediate ioend submission policy here to avoid needing to
+ * chain multiple ioends and hence nest mempool allocations which can violate
+ * forward progress guarantees we need to provide. The current ioend we are
+ * adding blocks to is cached on the writepage context, and if the new block
+ * does not append to the cached ioend it will create a new ioend and cache that
+ * instead.
+ *
+ * If a new ioend is created and cached, the old ioend is returned and queued
+ * locally for submission once the entire page is processed or an error has been
+ * detected.  While ioends are submitted immediately after they are completed,
+ * batching optimisations are provided by higher level block plugging.
+ *
+ * At the end of a writeback pass, there will be a cached ioend remaining on the
+ * writepage context that the caller will need to submit.
+ */
+static int
+iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+		struct writeback_control *wbc, struct inode *inode,
+		struct page *page, u64 end_offset)
+{
+	struct iomap_page *iop = to_iomap_page(page);
+	struct iomap_ioend *ioend, *next;
+	unsigned len = i_blocksize(inode);
+	u64 file_offset; /* file offset of page */
+	int error = 0, count = 0, i;
+	LIST_HEAD(submit_list);
+
+	WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
+	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0);
+
+	/*
+	 * Walk through the page to find areas to write back. If we run off the
+	 * end of the current map or find the current map invalid, grab a new
+	 * one.
+	 */
+	for (i = 0, file_offset = page_offset(page);
+	     i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset;
+	     i++, file_offset += len) {
+		if (iop && !test_bit(i, iop->uptodate))
+			continue;
+
+		error = wpc->ops->map_blocks(wpc, inode, file_offset);
+		if (error)
+			break;
+		if (wpc->iomap.type == IOMAP_HOLE)
+			continue;
+		iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
+				 &submit_list);
+		count++;
+	}
+
+	WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
+	WARN_ON_ONCE(!PageLocked(page));
+	WARN_ON_ONCE(PageWriteback(page));
+
+	/*
+	 * On error, we have to fail the ioend here because we may have set
+	 * pages under writeback, we have to make sure we run IO completion to
+	 * mark the error state of the IO appropriately, so we can't cancel the
+	 * ioend directly here.  That means we have to mark this page as under
+	 * writeback if we included any blocks from it in the ioend chain so
+	 * that completion treats it correctly.
+	 *
+	 * If we didn't include the page in the ioend, then on error we can
+	 * simply discard and unlock it as there are no other users of the page
+	 * now.  The caller will still need to trigger submission of outstanding
+	 * ioends on the writepage context so they are treated correctly on
+	 * error.
+	 */
+	if (unlikely(error)) {
+		if (!count) {
+			if (wpc->ops->discard_page)
+				wpc->ops->discard_page(page);
+			ClearPageUptodate(page);
+			unlock_page(page);
+			goto done;
+		}
+
+		/*
+		 * If the page was not fully cleaned, we need to ensure that the
+		 * higher layers come back to it correctly.  That means we need
+		 * to keep the page dirty, and for WB_SYNC_ALL writeback we need
+		 * to ensure the PAGECACHE_TAG_TOWRITE index mark is not removed
+		 * so another attempt to write this page in this writeback sweep
+		 * will be made.
+		 */
+		set_page_writeback_keepwrite(page);
+	} else {
+		clear_page_dirty_for_io(page);
+		set_page_writeback(page);
+	}
+
+	unlock_page(page);
+
+	/*
+	 * Preserve the original error if there was one, otherwise catch
+	 * submission errors here and propagate into subsequent ioend
+	 * submissions.
+	 */
+	list_for_each_entry_safe(ioend, next, &submit_list, io_list) {
+		int error2;
+
+		list_del_init(&ioend->io_list);
+		error2 = iomap_submit_ioend(wpc, ioend, error);
+		if (error2 && !error)
+			error = error2;
+	}
+
+	/*
+	 * We can end up here with no error and nothing to write only if we race
+	 * with a partial page truncate on a sub-page block sized filesystem.
+	 */
+	if (!count)
+		end_page_writeback(page);
+done:
+	mapping_set_error(page->mapping, error);
+	return error;
+}
+
+/*
+ * Write out a dirty page.
+ *
+ * For delalloc space on the page we need to allocate space and flush it.
+ * For unwritten space on the page we need to start the conversion to
+ * regular allocated space.
+ */
+static int
+iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
+{
+	struct iomap_writepage_ctx *wpc = data;
+	struct inode *inode = page->mapping->host;
+	pgoff_t end_index;
+	u64 end_offset;
+	loff_t offset;
+
+	/*
+	 * Refuse to write the page out if we are called from reclaim context.
+	 *
+	 * This avoids stack overflows when called from deeply used stacks in
+	 * random callers for direct reclaim or memcg reclaim.  We explicitly
+	 * allow reclaim from kswapd as the stack usage there is relatively low.
+	 *
+	 * This should never happen except in the case of a VM regression so
+	 * warn about it.
+	 */
+	if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
+			PF_MEMALLOC))
+		goto redirty;
+
+	/*
+	 * Given that we do not allow direct reclaim to call us, we should
+	 * never be called while in a filesystem transaction.
+	 */
+	if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS))
+		goto redirty;
+
+	/*
+	 * Is this page beyond the end of the file?
+	 *
+	 * The page index is less than the end_index, adjust the end_offset
+	 * to the highest offset that this page should represent.
+	 * -----------------------------------------------------
+	 * |			file mapping	       | <EOF> |
+	 * -----------------------------------------------------
+	 * | Page ... | Page N-2 | Page N-1 |  Page N  |       |
+	 * ^--------------------------------^----------|--------
+	 * |     desired writeback range    |      see else    |
+	 * ---------------------------------^------------------|
+	 */
+	offset = i_size_read(inode);
+	end_index = offset >> PAGE_SHIFT;
+	if (page->index < end_index)
+		end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT;
+	else {
+		/*
+		 * Check whether the page to write out is beyond or straddles
+		 * i_size or not.
+		 * -------------------------------------------------------
+		 * |		file mapping		        | <EOF>  |
+		 * -------------------------------------------------------
+		 * | Page ... | Page N-2 | Page N-1 |  Page N   | Beyond |
+		 * ^--------------------------------^-----------|---------
+		 * |				    |      Straddles     |
+		 * ---------------------------------^-----------|--------|
+		 */
+		unsigned offset_into_page = offset & (PAGE_SIZE - 1);
+
+		/*
+		 * Skip the page if it is fully outside i_size, e.g. due to a
+		 * truncate operation that is in progress. We must redirty the
+		 * page so that reclaim stops reclaiming it. Otherwise
+		 * iomap_vm_releasepage() is called on it and gets confused.
+		 *
+		 * Note that the end_index is unsigned long, it would overflow
+		 * if the given offset is greater than 16TB on 32-bit system
+		 * and if we do check the page is fully outside i_size or not
+		 * via "if (page->index >= end_index + 1)" as "end_index + 1"
+		 * will be evaluated to 0.  Hence this page will be redirtied
+		 * and be written out repeatedly which would result in an
+		 * infinite loop, the user program that performs this operation
+		 * will hang.  Instead, we can verify this situation by checking
+		 * if the page to write is totally beyond the i_size or if its
+		 * offset is just equal to the EOF.
+		 */
+		if (page->index > end_index ||
+		    (page->index == end_index && offset_into_page == 0))
+			goto redirty;
+
+		/*
+		 * The page straddles i_size.  It must be zeroed out on each
+		 * and every writepage invocation because it may be mmapped.
+		 * "A file is mapped in multiples of the page size.  For a file
+		 * that is not a multiple of the page size, the remaining
+		 * memory is zeroed when mapped, and writes to that region are
+		 * not written out to the file."
+		 */
+		zero_user_segment(page, offset_into_page, PAGE_SIZE);
+
+		/* Adjust the end_offset to the end of file */
+		end_offset = offset;
+	}
+
+	return iomap_writepage_map(wpc, wbc, inode, page, end_offset);
+
+redirty:
+	redirty_page_for_writepage(wbc, page);
+	unlock_page(page);
+	return 0;
+}
+
+int
+iomap_writepage(struct page *page, struct writeback_control *wbc,
+		struct iomap_writepage_ctx *wpc,
+		const struct iomap_writeback_ops *ops)
+{
+	int ret;
+
+	wpc->ops = ops;
+	ret = iomap_do_writepage(page, wbc, wpc);
+	if (!wpc->ioend)
+		return ret;
+	return iomap_submit_ioend(wpc, wpc->ioend, ret);
+}
+EXPORT_SYMBOL_GPL(iomap_writepage);
+
+int
+iomap_writepages(struct address_space *mapping, struct writeback_control *wbc,
+		struct iomap_writepage_ctx *wpc,
+		const struct iomap_writeback_ops *ops)
+{
+	int			ret;
+
+	wpc->ops = ops;
+	ret = write_cache_pages(mapping, wbc, iomap_do_writepage, wpc);
+	if (!wpc->ioend)
+		return ret;
+	return iomap_submit_ioend(wpc, wpc->ioend, ret);
+}
+EXPORT_SYMBOL_GPL(iomap_writepages);
+
+static int __init iomap_init(void)
+{
+	return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
+			   offsetof(struct iomap_ioend, io_inline_bio),
+			   BIOSET_NEED_BVECS);
+}
+fs_initcall(iomap_init);
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index bc499ceae392..834d3923e2f2 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -4,6 +4,7 @@
 
 #include <linux/atomic.h>
 #include <linux/bitmap.h>
+#include <linux/blk_types.h>
 #include <linux/mm.h>
 #include <linux/types.h>
 #include <linux/mm_types.h>
@@ -12,6 +13,7 @@
 struct address_space;
 struct fiemap_extent_info;
 struct inode;
+struct iomap_writepage_ctx;
 struct iov_iter;
 struct kiocb;
 struct page;
@@ -183,6 +185,47 @@ loff_t iomap_seek_data(struct inode *inode, loff_t offset,
 sector_t iomap_bmap(struct address_space *mapping, sector_t bno,
 		const struct iomap_ops *ops);
 
+/*
+ * Structure for writeback I/O completions.
+ */
+struct iomap_ioend {
+	struct list_head	io_list;	/* next ioend in chain */
+	u16			io_type;
+	u16			io_flags;	/* IOMAP_F_* */
+	struct inode		*io_inode;	/* file being written to */
+	size_t			io_size;	/* size of the extent */
+	loff_t			io_offset;	/* offset in the file */
+	void			*io_private;	/* file system private data */
+	struct bio		*io_bio;	/* bio being built */
+	struct bio		io_inline_bio;	/* MUST BE LAST! */
+};
+
+struct iomap_writeback_ops {
+	int (*map_blocks)(struct iomap_writepage_ctx *wpc, struct inode *inode,
+				loff_t offset);
+	int (*submit_ioend)(struct iomap_ioend *ioend, int status);
+	void (*discard_page)(struct page *page);
+};
+
+struct iomap_writepage_ctx {
+	struct iomap		iomap;
+	struct iomap_ioend	*ioend;
+	const struct iomap_writeback_ops *ops;
+};
+
+void iomap_finish_ioends(struct iomap_ioend *ioend, int error);
+void iomap_ioend_try_merge(struct iomap_ioend *ioend,
+		struct list_head *more_ioends,
+		void (*merge_private)(struct iomap_ioend *ioend,
+				struct iomap_ioend *next));
+void iomap_sort_ioends(struct list_head *ioend_list);
+int iomap_writepage(struct page *page, struct writeback_control *wbc,
+		struct iomap_writepage_ctx *wpc,
+		const struct iomap_writeback_ops *ops);
+int iomap_writepages(struct address_space *mapping,
+		struct writeback_control *wbc, struct iomap_writepage_ctx *wpc,
+		const struct iomap_writeback_ops *ops);
+
 /*
  * Flags for direct I/O ->end_io:
  */


* [PATCH 3/6] iomap: add tracing for the address space operations
  2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
  2019-07-30  1:17 ` [PATCH 1/6] list.h: add list_pop and list_pop_entry helpers Darrick J. Wong
  2019-07-30  1:17 ` [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c Darrick J. Wong
@ 2019-07-30  1:17 ` Darrick J. Wong
  2019-07-30  1:18 ` [PATCH 4/6] iomap: warn on inline maps in iomap_writepage_map Darrick J. Wong
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2019-07-30  1:17 UTC (permalink / raw)
  To: hch, darrick.wong
  Cc: linux-xfs, linux-fsdevel, Damien.LeMoal, Christoph Hellwig, agruenba

From: Christoph Hellwig <hch@lst.de>

Lift the xfs code for tracing address space operations to the iomap
layer.

Signed-off-by: Christoph Hellwig <hch@lst.de>
[darrick: remove deletion of xfs writeback tracepoints]
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/iomap/buffered-io.c       |   13 ++++++
 include/trace/events/iomap.h |   85 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 98 insertions(+)
 create mode 100644 include/trace/events/iomap.h


diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ff1f7d2b4d7a..21b50e8cb9f2 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -20,6 +20,9 @@
 
 #include "../internal.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/iomap.h>
+
 static struct bio_set iomap_ioend_bioset;
 
 static struct iomap_page *
@@ -296,6 +299,8 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
 	unsigned poff;
 	loff_t ret;
 
+	trace_iomap_readpage(page->mapping->host, 1);
+
 	for (poff = 0; poff < PAGE_SIZE; poff += ret) {
 		ret = iomap_apply(inode, page_offset(page) + poff,
 				PAGE_SIZE - poff, 0, ops, &ctx,
@@ -392,6 +397,8 @@ iomap_readpages(struct address_space *mapping, struct list_head *pages,
 	loff_t last = page_offset(list_entry(pages->next, struct page, lru));
 	loff_t length = last - pos + PAGE_SIZE, ret = 0;
 
+	trace_iomap_readpages(mapping->host, nr_pages);
+
 	while (length > 0) {
 		ret = iomap_apply(mapping->host, pos, length, 0, ops,
 				&ctx, iomap_readpages_actor);
@@ -458,6 +465,8 @@ EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate);
 int
 iomap_releasepage(struct page *page, gfp_t gfp_mask)
 {
+	trace_iomap_releasepage(page->mapping->host, page, 0, 0);
+
 	/*
 	 * mm accommodates an old ext3 case where clean pages might not have had
 	 * the dirty bit cleared. Thus, it can send actual dirty pages to
@@ -473,6 +482,8 @@ EXPORT_SYMBOL_GPL(iomap_releasepage);
 void
 iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 {
+	trace_iomap_invalidatepage(page->mapping->host, page, offset, len);
+
 	/*
 	 * If we are invalidating the entire page, clear the dirty state from it
 	 * and release it to avoid unnecessary buildup of the LRU.
@@ -1485,6 +1496,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 	u64 end_offset;
 	loff_t offset;
 
+	trace_iomap_writepage(inode, page, 0, 0);
+
 	/*
 	 * Refuse to write the page out if we are called from reclaim context.
 	 *
diff --git a/include/trace/events/iomap.h b/include/trace/events/iomap.h
new file mode 100644
index 000000000000..1da90e743e6e
--- /dev/null
+++ b/include/trace/events/iomap.h
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2009-2019, Christoph Hellwig
+ * All Rights Reserved.
+ *
+ * NOTE: none of these tracepoints shall be considered a stable kernel ABI
+ * as they can change at any time.
+ */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM iomap
+
+#if !defined(_TRACE_IOMAP_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_IOMAP_H
+
+#include <linux/tracepoint.h>
+
+DECLARE_EVENT_CLASS(iomap_page_class,
+	TP_PROTO(struct inode *inode, struct page *page, unsigned long off,
+		 unsigned int len),
+	TP_ARGS(inode, page, off, len),
+	TP_STRUCT__entry(
+		__field(dev_t, dev)
+		__field(u64, ino)
+		__field(pgoff_t, pgoff)
+		__field(loff_t, size)
+		__field(unsigned long, offset)
+		__field(unsigned int, length)
+	),
+	TP_fast_assign(
+		__entry->dev = inode->i_sb->s_dev;
+		__entry->ino = inode->i_ino;
+		__entry->pgoff = page_offset(page);
+		__entry->size = i_size_read(inode);
+		__entry->offset = off;
+		__entry->length = len;
+	),
+	TP_printk("dev %d:%d ino 0x%llx pgoff 0x%lx size 0x%llx offset %lx "
+		  "length %x",
+		  MAJOR(__entry->dev), MINOR(__entry->dev),
+		  __entry->ino,
+		  __entry->pgoff,
+		  __entry->size,
+		  __entry->offset,
+		  __entry->length)
+)
+
+#define DEFINE_PAGE_EVENT(name)		\
+DEFINE_EVENT(iomap_page_class, name,	\
+	TP_PROTO(struct inode *inode, struct page *page, unsigned long off, \
+		 unsigned int len),	\
+	TP_ARGS(inode, page, off, len))
+DEFINE_PAGE_EVENT(iomap_writepage);
+DEFINE_PAGE_EVENT(iomap_releasepage);
+DEFINE_PAGE_EVENT(iomap_invalidatepage);
+
+DECLARE_EVENT_CLASS(iomap_readpage_class,
+	TP_PROTO(struct inode *inode, int nr_pages),
+	TP_ARGS(inode, nr_pages),
+	TP_STRUCT__entry(
+		__field(dev_t, dev)
+		__field(u64, ino)
+		__field(int, nr_pages)
+	),
+	TP_fast_assign(
+		__entry->dev = inode->i_sb->s_dev;
+		__entry->ino = inode->i_ino;
+		__entry->nr_pages = nr_pages;
+	),
+	TP_printk("dev %d:%d ino 0x%llx nr_pages %d",
+		  MAJOR(__entry->dev), MINOR(__entry->dev),
+		  __entry->ino,
+		  __entry->nr_pages)
+)
+
+#define DEFINE_READPAGE_EVENT(name)		\
+DEFINE_EVENT(iomap_readpage_class, name,	\
+	TP_PROTO(struct inode *inode, int nr_pages), \
+	TP_ARGS(inode, nr_pages))
+DEFINE_READPAGE_EVENT(iomap_readpage);
+DEFINE_READPAGE_EVENT(iomap_readpages);
+
+#endif /* _TRACE_IOMAP_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>


* [PATCH 4/6] iomap: warn on inline maps in iomap_writepage_map
  2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
                   ` (2 preceding siblings ...)
  2019-07-30  1:17 ` [PATCH 3/6] iomap: add tracing for the address space operations Darrick J. Wong
@ 2019-07-30  1:18 ` Darrick J. Wong
  2019-07-30  1:18 ` [PATCH 5/6] xfs: set IOMAP_F_NEW more carefully Darrick J. Wong
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2019-07-30  1:18 UTC (permalink / raw)
  To: hch, darrick.wong
  Cc: linux-xfs, linux-fsdevel, Damien.LeMoal, Christoph Hellwig, agruenba

From: Christoph Hellwig <hch@lst.de>

An inline mapping should never mark the page dirty and thus never end up
in writepages.  Add a check for that condition and warn if it happens.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/iomap/buffered-io.c |    2 ++
 1 file changed, 2 insertions(+)


diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 21b50e8cb9f2..ed694a59c527 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1405,6 +1405,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		error = wpc->ops->map_blocks(wpc, inode, file_offset);
 		if (error)
 			break;
+		if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
+			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
 		iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,


* [PATCH 5/6] xfs: set IOMAP_F_NEW more carefully
  2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
                   ` (3 preceding siblings ...)
  2019-07-30  1:18 ` [PATCH 4/6] iomap: warn on inline maps in iomap_writepage_map Darrick J. Wong
@ 2019-07-30  1:18 ` Darrick J. Wong
  2019-07-30  1:18 ` [PATCH 6/6] iomap: zero newly allocated mapped blocks Darrick J. Wong
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2019-07-30  1:18 UTC (permalink / raw)
  To: hch, darrick.wong
  Cc: linux-xfs, linux-fsdevel, Damien.LeMoal, Christoph Hellwig, agruenba

From: Christoph Hellwig <hch@lst.de>

Don't set IOMAP_F_NEW if we COW over an existing allocated range, as
these aren't strictly new allocations.  This is required to be able to
use IOMAP_F_NEW to zero newly allocated blocks, which is required for
the iomap code to fully support file systems that don't do delayed
allocations or use unwritten extents.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/xfs/xfs_iomap.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)


diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 3a4310d7cb59..434ff589f0fc 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -707,9 +707,12 @@ xfs_file_iomap_begin_delay(
 	 * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch
 	 * them out if the write happens to fail.
 	 */
-	iomap->flags |= IOMAP_F_NEW;
-	trace_xfs_iomap_alloc(ip, offset, count, whichfork,
-			whichfork == XFS_DATA_FORK ? &imap : &cmap);
+	if (whichfork == XFS_DATA_FORK) {
+		iomap->flags |= IOMAP_F_NEW;
+		trace_xfs_iomap_alloc(ip, offset, count, whichfork, &imap);
+	} else {
+		trace_xfs_iomap_alloc(ip, offset, count, whichfork, &cmap);
+	}
 done:
 	if (whichfork == XFS_COW_FORK) {
 		if (imap.br_startoff > offset_fsb) {


* [PATCH 6/6] iomap: zero newly allocated mapped blocks
  2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
                   ` (4 preceding siblings ...)
  2019-07-30  1:18 ` [PATCH 5/6] xfs: set IOMAP_F_NEW more carefully Darrick J. Wong
@ 2019-07-30  1:18 ` Darrick J. Wong
  2019-07-30 14:48 ` [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Christoph Hellwig
  2019-08-16  6:52 ` Christoph Hellwig
  7 siblings, 0 replies; 23+ messages in thread
From: Darrick J. Wong @ 2019-07-30  1:18 UTC (permalink / raw)
  To: hch, darrick.wong
  Cc: linux-xfs, linux-fsdevel, Damien.LeMoal, Christoph Hellwig, agruenba

From: Christoph Hellwig <hch@lst.de>

File systems like gfs2 don't support delayed allocations or unwritten
extents and thus allocate normal mapped blocks to fill holes.  To
cover the case of such file systems allocating new blocks to fill
holes, also zero out mapped blocks that have the IOMAP_F_NEW flag set.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 fs/iomap/buffered-io.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)


diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ed694a59c527..1a7570c441c8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -209,6 +209,14 @@ iomap_read_inline_data(struct inode *inode, struct page *page,
 	SetPageUptodate(page);
 }
 
+static inline bool iomap_block_needs_zeroing(struct inode *inode,
+		struct iomap *iomap, loff_t pos)
+{
+	return iomap->type != IOMAP_MAPPED ||
+		(iomap->flags & IOMAP_F_NEW) ||
+		pos >= i_size_read(inode);
+}
+
 static loff_t
 iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		struct iomap *iomap)
@@ -232,7 +240,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	if (plen == 0)
 		goto done;
 
-	if (iomap->type != IOMAP_MAPPED || pos >= i_size_read(inode)) {
+	if (iomap_block_needs_zeroing(inode, iomap, pos)) {
 		zero_user(page, poff, plen);
 		iomap_set_range_uptodate(page, poff, plen);
 		goto done;
@@ -546,7 +554,7 @@ iomap_read_page_sync(struct inode *inode, loff_t block_start, struct page *page,
 	struct bio_vec bvec;
 	struct bio bio;
 
-	if (iomap->type != IOMAP_MAPPED || block_start >= i_size_read(inode)) {
+	if (iomap_block_needs_zeroing(inode, iomap, block_start)) {
 		zero_user_segments(page, poff, from, to, poff + plen);
 		iomap_set_range_uptodate(page, poff, plen);
 		return 0;


* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
                   ` (5 preceding siblings ...)
  2019-07-30  1:18 ` [PATCH 6/6] iomap: zero newly allocated mapped blocks Darrick J. Wong
@ 2019-07-30 14:48 ` Christoph Hellwig
  2019-08-05 12:34   ` Andreas Gruenbacher
  2019-08-16  6:52 ` Christoph Hellwig
  7 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-07-30 14:48 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: hch, linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

I don't really see the point of the split, but the result looks fine
to me.


* Re: [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c
  2019-07-30  1:17 ` [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c Darrick J. Wong
@ 2019-08-04 14:59   ` Andreas Gruenbacher
  2019-08-06  5:33     ` Christoph Hellwig
  2019-08-05 12:31   ` Andreas Gruenbacher
  1 sibling, 1 reply; 23+ messages in thread
From: Andreas Gruenbacher @ 2019-08-04 14:59 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, linux-xfs, linux-fsdevel, Damien Le Moal,
	Christoph Hellwig

On Tue, 30 Jul 2019 at 03:18, Darrick J. Wong <darrick.wong@oracle.com> wrote:
> From: Christoph Hellwig <hch@lst.de>
>
> Takes the xfs writeback code and copies it to iomap.c.  A new structure
> with three methods is added as the abstraction from the generic
> writeback code to the file system.  These methods are used to map
> blocks, submit an ioend, and cancel a page that encountered an error
> before it was added to an ioend.
>
> Note that we temporarily lose the writepage tracing, but that will
> be added back soon.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> [darrick: create the new iomap code, we'll delete the xfs code separately]
> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/iomap/buffered-io.c |  548 ++++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/iomap.h  |   43 ++++
>  2 files changed, 590 insertions(+), 1 deletion(-)
>
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index e25901ae3ff4..ff1f7d2b4d7a 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1,7 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /*
>   * Copyright (C) 2010 Red Hat, Inc.
> - * Copyright (c) 2016-2018 Christoph Hellwig.
> + * Copyright (c) 2016-2019 Christoph Hellwig.
>   */
>  #include <linux/module.h>
>  #include <linux/compiler.h>
> @@ -12,6 +12,7 @@
>  #include <linux/buffer_head.h>
>  #include <linux/dax.h>
>  #include <linux/writeback.h>
> +#include <linux/list_sort.h>
>  #include <linux/swap.h>
>  #include <linux/bio.h>
>  #include <linux/sched/signal.h>
> @@ -19,6 +20,8 @@
>
>  #include "../internal.h"
>
> +static struct bio_set iomap_ioend_bioset;
> +
>  static struct iomap_page *
>  iomap_page_create(struct inode *inode, struct page *page)
>  {
> @@ -1071,3 +1074,546 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
>         return block_page_mkwrite_return(ret);
>  }
>  EXPORT_SYMBOL_GPL(iomap_page_mkwrite);
> +
> +static void
> +iomap_finish_page_writeback(struct inode *inode, struct bio_vec *bvec,
> +               int error)
> +{
> +       struct iomap_page *iop = to_iomap_page(bvec->bv_page);
> +
> +       if (error) {
> +               SetPageError(bvec->bv_page);
> +               mapping_set_error(inode->i_mapping, -EIO);
> +       }
> +
> +       WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
> +       WARN_ON_ONCE(iop && atomic_read(&iop->write_count) <= 0);
> +
> +       if (!iop || atomic_dec_and_test(&iop->write_count))
> +               end_page_writeback(bvec->bv_page);
> +}
> +
> +/*
> + * We're now finished for good with this ioend structure.  Update the page
> + * state, release holds on bios, and finally free up memory.  Do not use the
> + * ioend after this.
> + */
> +static void
> +iomap_finish_ioend(struct iomap_ioend *ioend, int error)
> +{
> +       struct inode *inode = ioend->io_inode;
> +       struct bio *bio = &ioend->io_inline_bio;
> +       struct bio *last = ioend->io_bio, *next;
> +       u64 start = bio->bi_iter.bi_sector;
> +       bool quiet = bio_flagged(bio, BIO_QUIET);
> +
> +       for (bio = &ioend->io_inline_bio; bio; bio = next) {
> +               struct bio_vec  *bvec;
> +               struct bvec_iter_all iter_all;
> +
> +               /*
> +                * For the last bio, bi_private points to the ioend, so we
> +                * need to explicitly end the iteration here.
> +                */
> +               if (bio == last)
> +                       next = NULL;
> +               else
> +                       next = bio->bi_private;
> +
> +               /* walk each page on bio, ending page IO on them */
> +               bio_for_each_segment_all(bvec, bio, iter_all)
> +                       iomap_finish_page_writeback(inode, bvec, error);
> +               bio_put(bio);
> +       }
> +
> +       if (unlikely(error && !quiet)) {
> +               printk_ratelimited(KERN_ERR
> +                       "%s: writeback error on sector %llu",
> +                       inode->i_sb->s_id, start);
> +       }
> +}
> +
> +void
> +iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> +{
> +       struct list_head tmp;
> +
> +       list_replace_init(&ioend->io_list, &tmp);
> +       iomap_finish_ioend(ioend, error);
> +       while ((ioend = list_pop_entry(&tmp, struct iomap_ioend, io_list)))
> +               iomap_finish_ioend(ioend, error);
> +}
> +EXPORT_SYMBOL_GPL(iomap_finish_ioends);
> +
> +/*
> + * We can merge two adjacent ioends if they have the same set of work to do.
> + */
> +static bool
> +iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next)
> +{
> +       if (ioend->io_bio->bi_status != next->io_bio->bi_status)
> +               return false;
> +       if ((ioend->io_flags & IOMAP_F_SHARED) ^
> +           (next->io_flags & IOMAP_F_SHARED))
> +               return false;
> +       if ((ioend->io_type == IOMAP_UNWRITTEN) ^
> +           (next->io_type == IOMAP_UNWRITTEN))
> +               return false;
> +       if (ioend->io_offset + ioend->io_size != next->io_offset)
> +               return false;
> +       return true;
> +}
> +
> +void
> +iomap_ioend_try_merge(struct iomap_ioend *ioend, struct list_head *more_ioends,
> +               void (*merge_private)(struct iomap_ioend *ioend,
> +                               struct iomap_ioend *next))
> +{
> +       struct iomap_ioend *next;
> +
> +       INIT_LIST_HEAD(&ioend->io_list);
> +
> +       while ((next = list_first_entry_or_null(more_ioends, struct iomap_ioend,
> +                       io_list))) {
> +               if (!iomap_ioend_can_merge(ioend, next))
> +                       break;
> +               list_move_tail(&next->io_list, &ioend->io_list);
> +               ioend->io_size += next->io_size;
> +               if (next->io_private && merge_private)
> +                       merge_private(ioend, next);
> +       }
> +}
> +EXPORT_SYMBOL_GPL(iomap_ioend_try_merge);
> +
> +static int
> +iomap_ioend_compare(void *priv, struct list_head *a, struct list_head *b)
> +{
> +       struct iomap_ioend *ia, *ib;
> +
> +       ia = container_of(a, struct iomap_ioend, io_list);
> +       ib = container_of(b, struct iomap_ioend, io_list);
> +       if (ia->io_offset < ib->io_offset)
> +               return -1;
> +       else if (ia->io_offset > ib->io_offset)
> +               return 1;
> +       return 0;
> +}
> +
> +void
> +iomap_sort_ioends(struct list_head *ioend_list)
> +{
> +       list_sort(NULL, ioend_list, iomap_ioend_compare);
> +}
> +EXPORT_SYMBOL_GPL(iomap_sort_ioends);
> +
> +static void iomap_writepage_end_bio(struct bio *bio)
> +{
> +       struct iomap_ioend *ioend = bio->bi_private;
> +
> +       iomap_finish_ioend(ioend, blk_status_to_errno(bio->bi_status));
> +}
> +
> +/*
> + * Submit the bio for an ioend. We are passed an ioend with a bio attached to
> + * it, and we submit that bio. The ioend may be used for multiple bio
> + * submissions, so we only want to allocate an append transaction for the ioend
> + * once. In the case of multiple bio submission, each bio will take an IO
> + * reference to the ioend to ensure that the ioend completion is only done once
> + * all bios have been submitted and the ioend is really done.
> + *
> + * If @error is non-zero, it means that we have a situation where some part of
> + * the submission process has failed after we have marked pages for writeback
> + * and unlocked them. In this situation, we need to fail the bio and ioend
> + * rather than submit it to IO. This typically only happens on a filesystem
> + * shutdown.
> + */
> +static int
> +iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
> +               int error)
> +{
> +       ioend->io_bio->bi_private = ioend;
> +       ioend->io_bio->bi_end_io = iomap_writepage_end_bio;
> +
> +       /*
> +        * File systems can perform actions at submit time and/or override
> +        * the end_io handler here for complex operations like copy on write
> +        * extent manipulation or unwritten extent conversions.
> +        */
> +       if (wpc->ops->submit_ioend)
> +               error = wpc->ops->submit_ioend(ioend, error);

I think we only want to call submit_ioend here if error isn't set already.
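
Something like this, perhaps (untested, just to illustrate the idea):

	if (wpc->ops->submit_ioend && !error)
		error = wpc->ops->submit_ioend(ioend, error);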

> +       if (error) {
> +               /*
> +                * If we are failing the IO now, just mark the ioend with an
> +                * error and finish it.  This will run IO completion immediately
> +                * as there is only one reference to the ioend at this point in
> +                * time.
> +                */
> +               ioend->io_bio->bi_status = errno_to_blk_status(error);
> +               bio_endio(ioend->io_bio);
> +               return error;
> +       }
> +
> +       submit_bio(ioend->io_bio);
> +       return 0;
> +}
> +
> +static struct iomap_ioend *
> +iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
> +               loff_t offset, sector_t sector, struct writeback_control *wbc)
> +{
> +       struct iomap_ioend *ioend;
> +       struct bio *bio;
> +
> +       bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, &iomap_ioend_bioset);
> +       bio_set_dev(bio, wpc->iomap.bdev);
> +       bio->bi_iter.bi_sector = sector;
> +       bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
> +       bio->bi_write_hint = inode->i_write_hint;
> +       wbc_init_bio(wbc, bio);
> +
> +       ioend = container_of(bio, struct iomap_ioend, io_inline_bio);
> +       INIT_LIST_HEAD(&ioend->io_list);
> +       ioend->io_type = wpc->iomap.type;
> +       ioend->io_flags = wpc->iomap.flags;
> +       ioend->io_inode = inode;
> +       ioend->io_size = 0;
> +       ioend->io_offset = offset;
> +       ioend->io_private = NULL;
> +       ioend->io_bio = bio;
> +       return ioend;
> +}
> +
> +/*
> + * Allocate a new bio, and chain the old bio to the new one.
> + *
> + * Note that we have to perform the chaining in this unintuitive order
> + * so that the bi_private linkage is set up in the right direction for the
> + * traversal in iomap_finish_ioend().
> + */
> +static struct bio *
> +iomap_chain_bio(struct bio *prev)
> +{
> +       struct bio *new;
> +
> +       new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
> +       bio_copy_dev(new, prev);/* also copies over blkcg information */
> +       new->bi_iter.bi_sector = bio_end_sector(prev);
> +       new->bi_opf = prev->bi_opf;
> +       new->bi_write_hint = prev->bi_write_hint;
> +
> +       bio_chain(prev, new);
> +       bio_get(prev);          /* for iomap_finish_ioend */
> +       submit_bio(prev);
> +       return new;
> +}
> +
> +/*
> + * Test to see if we have an existing ioend structure that we could append to
> + * first, otherwise finish off the current ioend and start another.
> + */
> +static void
> +iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
> +               struct iomap_page *iop, struct iomap_writepage_ctx *wpc,
> +               struct writeback_control *wbc, struct list_head *iolist)
> +{
> +       sector_t sector = iomap_sector(&wpc->iomap, offset);
> +       unsigned len = i_blocksize(inode);
> +       unsigned poff = offset & (PAGE_SIZE - 1);
> +       bool merged, same_page = false;
> +
> +       if (!wpc->ioend ||
> +           (wpc->iomap.flags & IOMAP_F_SHARED) !=
> +           (wpc->ioend->io_flags & IOMAP_F_SHARED) ||
> +           wpc->iomap.type != wpc->ioend->io_type ||
> +           sector != bio_end_sector(wpc->ioend->io_bio) ||
> +           offset != wpc->ioend->io_offset + wpc->ioend->io_size) {
> +               if (wpc->ioend)
> +                       list_add(&wpc->ioend->io_list, iolist);
> +               wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
> +       }
> +
> +       merged = __bio_try_merge_page(wpc->ioend->io_bio, page, len, poff,
> +                       &same_page);
> +
> +       if (iop && !same_page)
> +               atomic_inc(&iop->write_count);
> +       if (!merged) {
> +               if (bio_full(wpc->ioend->io_bio, len)) {
> +                       wpc->ioend->io_bio =
> +                               iomap_chain_bio(wpc->ioend->io_bio);
> +               }
> +               bio_add_page(wpc->ioend->io_bio, page, len, poff);
> +       }
> +
> +       wpc->ioend->io_size += len;
> +       wbc_account_cgroup_owner(wbc, page, len);
> +}
> +
> +/*
> + * We implement an immediate ioend submission policy here to avoid needing to
> + * chain multiple ioends and hence nest mempool allocations which can violate
> + * forward progress guarantees we need to provide. The current ioend we are
> + * adding blocks to is cached on the writepage context, and if the new block
> + * does not append to the cached ioend it will create a new ioend and cache that
> + * instead.
> + *
> + * If a new ioend is created and cached, the old ioend is returned and queued
> + * locally for submission once the entire page is processed or an error has been
> + * detected.  While ioends are submitted immediately after they are completed,
> + * batching optimisations are provided by higher level block plugging.
> + *
> + * At the end of a writeback pass, there will be a cached ioend remaining on the
> + * writepage context that the caller will need to submit.
> + */
> +static int
> +iomap_writepage_map(struct iomap_writepage_ctx *wpc,
> +               struct writeback_control *wbc, struct inode *inode,
> +               struct page *page, u64 end_offset)
> +{
> +       struct iomap_page *iop = to_iomap_page(page);
> +       struct iomap_ioend *ioend, *next;
> +       unsigned len = i_blocksize(inode);
> +       u64 file_offset; /* file offset of page */
> +       int error = 0, count = 0, i;
> +       LIST_HEAD(submit_list);
> +
> +       WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
> +       WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0);
> +
> +       /*
> +        * Walk through the page to find areas to write back. If we run off the
> +        * end of the current map or find the current map invalid, grab a new
> +        * one.
> +        */
> +       for (i = 0, file_offset = page_offset(page);
> +            i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset;
> +            i++, file_offset += len) {
> +               if (iop && !test_bit(i, iop->uptodate))
> +                       continue;
> +
> +               error = wpc->ops->map_blocks(wpc, inode, file_offset);
> +               if (error)
> +                       break;
> +               if (wpc->iomap.type == IOMAP_HOLE)
> +                       continue;
> +               iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
> +                                &submit_list);
> +               count++;
> +       }
> +
> +       WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
> +       WARN_ON_ONCE(!PageLocked(page));
> +       WARN_ON_ONCE(PageWriteback(page));
> +
> +       /*
> +        * On error, we have to fail the ioend here because we may have set
> +        * pages under writeback, we have to make sure we run IO completion to
> +        * mark the error state of the IO appropriately, so we can't cancel the
> +        * ioend directly here.  That means we have to mark this page as under
> +        * writeback if we included any blocks from it in the ioend chain so
> +        * that completion treats it correctly.
> +        *
> +        * If we didn't include the page in the ioend, then on error we can
> +        * simply discard and unlock it as there are no other users of the page
> +        * now.  The caller will still need to trigger submission of outstanding
> +        * ioends on the writepage context so they are treated correctly on
> +        * error.
> +        */
> +       if (unlikely(error)) {
> +               if (!count) {
> +                       if (wpc->ops->discard_page)
> +                               wpc->ops->discard_page(page);
> +                       ClearPageUptodate(page);
> +                       unlock_page(page);
> +                       goto done;
> +               }
> +
> +               /*
> +                * If the page was not fully cleaned, we need to ensure that the
> +                * higher layers come back to it correctly.  That means we need
> +                * to keep the page dirty, and for WB_SYNC_ALL writeback we need
> +                * to ensure the PAGECACHE_TAG_TOWRITE index mark is not removed
> +                * so another attempt to write this page in this writeback sweep
> +                * will be made.
> +                */
> +               set_page_writeback_keepwrite(page);
> +       } else {
> +               clear_page_dirty_for_io(page);
> +               set_page_writeback(page);
> +       }
> +
> +       unlock_page(page);
> +
> +       /*
> +        * Preserve the original error if there was one, otherwise catch
> +        * submission errors here and propagate into subsequent ioend
> +        * submissions.
> +        */
> +       list_for_each_entry_safe(ioend, next, &submit_list, io_list) {
> +               int error2;
> +
> +               list_del_init(&ioend->io_list);
> +               error2 = iomap_submit_ioend(wpc, ioend, error);
> +               if (error2 && !error)
> +                       error = error2;
> +       }
> +
> +       /*
> +        * We can end up here with no error and nothing to write only if we race
> +        * with a partial page truncate on a sub-page block sized filesystem.
> +        */
> +       if (!count)
> +               end_page_writeback(page);
> +done:
> +       mapping_set_error(page->mapping, error);
> +       return error;
> +}
> +
> +/*
> + * Write out a dirty page.
> + *
> + * For delalloc space on the page we need to allocate space and flush it.
> + * For unwritten space on the page we need to start the conversion to
> + * regular allocated space.
> + */
> +static int
> +iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
> +{
> +       struct iomap_writepage_ctx *wpc = data;
> +       struct inode *inode = page->mapping->host;
> +       pgoff_t end_index;
> +       u64 end_offset;
> +       loff_t offset;
> +
> +       /*
> +        * Refuse to write the page out if we are called from reclaim context.
> +        *
> +        * This avoids stack overflows when called from deeply used stacks in
> +        * random callers for direct reclaim or memcg reclaim.  We explicitly
> +        * allow reclaim from kswapd as the stack usage there is relatively low.
> +        *
> +        * This should never happen except in the case of a VM regression so
> +        * warn about it.
> +        */
> +       if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
> +                       PF_MEMALLOC))
> +               goto redirty;
> +
> +       /*
> +        * Given that we do not allow direct reclaim to call us, we should
> +        * never be called while in a filesystem transaction.
> +        */
> +       if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS))
> +               goto redirty;
> +
> +       /*
> +        * Is this page beyond the end of the file?
> +        *
> +        * The page index is less than the end_index, adjust the end_offset
> +        * to the highest offset that this page should represent.
> +        * -----------------------------------------------------
> +        * |                    file mapping           | <EOF> |
> +        * -----------------------------------------------------
> +        * | Page ... | Page N-2 | Page N-1 |  Page N  |       |
> +        * ^--------------------------------^----------|--------
> +        * |     desired writeback range    |      see else    |
> +        * ---------------------------------^------------------|
> +        */
> +       offset = i_size_read(inode);
> +       end_index = offset >> PAGE_SHIFT;
> +       if (page->index < end_index)
> +               end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT;
> +       else {
> +               /*
> +                * Check whether the page to write out is beyond or straddles
> +                * i_size or not.
> +                * -------------------------------------------------------
> +                * |            file mapping                    | <EOF>  |
> +                * -------------------------------------------------------
> +                * | Page ... | Page N-2 | Page N-1 |  Page N   | Beyond |
> +                * ^--------------------------------^-----------|---------
> +                * |                                |      Straddles     |
> +                * ---------------------------------^-----------|--------|
> +                */
> +               unsigned offset_into_page = offset & (PAGE_SIZE - 1);
> +
> +               /*
> +                * Skip the page if it is fully outside i_size, e.g. due to a
> +                * truncate operation that is in progress. We must redirty the
> +                * page so that reclaim stops reclaiming it. Otherwise
> +                * iomap_vm_releasepage() is called on it and gets confused.
> +                *
> +                * Note that the end_index is unsigned long, it would overflow
> +                * if the given offset is greater than 16TB on 32-bit system
> +                * and if we do check the page is fully outside i_size or not
> +                * via "if (page->index >= end_index + 1)" as "end_index + 1"
> +                * will be evaluated to 0.  Hence this page will be redirtied
> +                * and be written out repeatedly which would result in an
> +                * infinite loop, the user program that performs this operation
> +                * will hang.  Instead, we can verify this situation by checking
> +                * if the page to write is totally beyond the i_size or if its
> +                * offset is just equal to the EOF.
> +                */
> +               if (page->index > end_index ||
> +                   (page->index == end_index && offset_into_page == 0))
> +                       goto redirty;
> +
> +               /*
> +                * The page straddles i_size.  It must be zeroed out on each
> +                * and every writepage invocation because it may be mmapped.
> +                * "A file is mapped in multiples of the page size.  For a file
> +                * that is not a multiple of the page size, the remaining
> +                * memory is zeroed when mapped, and writes to that region are
> +                * not written out to the file."
> +                */
> +               zero_user_segment(page, offset_into_page, PAGE_SIZE);
> +
> +               /* Adjust the end_offset to the end of file */
> +               end_offset = offset;
> +       }
> +
> +       return iomap_writepage_map(wpc, wbc, inode, page, end_offset);
> +
> +redirty:
> +       redirty_page_for_writepage(wbc, page);
> +       unlock_page(page);
> +       return 0;
> +}
> +
> +int
> +iomap_writepage(struct page *page, struct writeback_control *wbc,
> +               struct iomap_writepage_ctx *wpc,
> +               const struct iomap_writeback_ops *ops)
> +{
> +       int ret;
> +
> +       wpc->ops = ops;
> +       ret = iomap_do_writepage(page, wbc, wpc);
> +       if (!wpc->ioend)
> +               return ret;
> +       return iomap_submit_ioend(wpc, wpc->ioend, ret);
> +}
> +EXPORT_SYMBOL_GPL(iomap_writepage);
> +
> +int
> +iomap_writepages(struct address_space *mapping, struct writeback_control *wbc,
> +               struct iomap_writepage_ctx *wpc,
> +               const struct iomap_writeback_ops *ops)
> +{
> +       int                     ret;
> +
> +       wpc->ops = ops;
> +       ret = write_cache_pages(mapping, wbc, iomap_do_writepage, wpc);
> +       if (!wpc->ioend)
> +               return ret;
> +       return iomap_submit_ioend(wpc, wpc->ioend, ret);
> +}
> +EXPORT_SYMBOL_GPL(iomap_writepages);
> +
> +static int __init iomap_init(void)
> +{
> +       return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
> +                          offsetof(struct iomap_ioend, io_inline_bio),
> +                          BIOSET_NEED_BVECS);
> +}
> +fs_initcall(iomap_init);
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index bc499ceae392..834d3923e2f2 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -4,6 +4,7 @@
>
>  #include <linux/atomic.h>
>  #include <linux/bitmap.h>
> +#include <linux/blk_types.h>
>  #include <linux/mm.h>
>  #include <linux/types.h>
>  #include <linux/mm_types.h>
> @@ -12,6 +13,7 @@
>  struct address_space;
>  struct fiemap_extent_info;
>  struct inode;
> +struct iomap_writepage_ctx;
>  struct iov_iter;
>  struct kiocb;
>  struct page;
> @@ -183,6 +185,47 @@ loff_t iomap_seek_data(struct inode *inode, loff_t offset,
>  sector_t iomap_bmap(struct address_space *mapping, sector_t bno,
>                 const struct iomap_ops *ops);
>
> +/*
> + * Structure for writeback I/O completions.
> + */
> +struct iomap_ioend {
> +       struct list_head        io_list;        /* next ioend in chain */
> +       u16                     io_type;
> +       u16                     io_flags;       /* IOMAP_F_* */
> +       struct inode            *io_inode;      /* file being written to */
> +       size_t                  io_size;        /* size of the extent */
> +       loff_t                  io_offset;      /* offset in the file */
> +       void                    *io_private;    /* file system private data */
> +       struct bio              *io_bio;        /* bio being built */
> +       struct bio              io_inline_bio;  /* MUST BE LAST! */
> +};
> +
> +struct iomap_writeback_ops {
> +       int (*map_blocks)(struct iomap_writepage_ctx *wpc, struct inode *inode,
> +                               loff_t offset);
> +       int (*submit_ioend)(struct iomap_ioend *ioend, int status);
> +       void (*discard_page)(struct page *page);
> +};
> +
> +struct iomap_writepage_ctx {
> +       struct iomap            iomap;
> +       struct iomap_ioend      *ioend;
> +       const struct iomap_writeback_ops *ops;
> +};
> +
> +void iomap_finish_ioends(struct iomap_ioend *ioend, int error);
> +void iomap_ioend_try_merge(struct iomap_ioend *ioend,
> +               struct list_head *more_ioends,
> +               void (*merge_private)(struct iomap_ioend *ioend,
> +                               struct iomap_ioend *next));
> +void iomap_sort_ioends(struct list_head *ioend_list);
> +int iomap_writepage(struct page *page, struct writeback_control *wbc,
> +               struct iomap_writepage_ctx *wpc,
> +               const struct iomap_writeback_ops *ops);
> +int iomap_writepages(struct address_space *mapping,
> +               struct writeback_control *wbc, struct iomap_writepage_ctx *wpc,
> +               const struct iomap_writeback_ops *ops);
> +
>  /*
>   * Flags for direct I/O ->end_io:
>   */
>

Andreas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c
  2019-07-30  1:17 ` [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c Darrick J. Wong
  2019-08-04 14:59   ` Andreas Gruenbacher
@ 2019-08-05 12:31   ` Andreas Gruenbacher
  2019-08-06  5:32     ` Christoph Hellwig
  1 sibling, 1 reply; 23+ messages in thread
From: Andreas Gruenbacher @ 2019-08-05 12:31 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, linux-xfs, linux-fsdevel, Damien Le Moal,
	Christoph Hellwig

On Tue, 30 Jul 2019 at 03:18, Darrick J. Wong <darrick.wong@oracle.com> wrote:
> From: Christoph Hellwig <hch@lst.de>
>
> Takes the xfs writeback code and copies it to iomap.c.  A new structure
> with three methods is added as the abstraction from the generic
> writeback code to the file system.  These methods are used to map
> blocks, submit an ioend, and cancel a page that encountered an error
> before it was added to an ioend.
>
> Note that we temporarily lose the writepage tracing, but that will
> be added back soon.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> [darrick: create the new iomap code, we'll delete the xfs code separately]
> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
>  fs/iomap/buffered-io.c |  548 ++++++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/iomap.h  |   43 ++++
>  2 files changed, 590 insertions(+), 1 deletion(-)
>
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index e25901ae3ff4..ff1f7d2b4d7a 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1,7 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /*
>   * Copyright (C) 2010 Red Hat, Inc.
> - * Copyright (c) 2016-2018 Christoph Hellwig.
> + * Copyright (c) 2016-2019 Christoph Hellwig.
>   */
>  #include <linux/module.h>
>  #include <linux/compiler.h>
> @@ -12,6 +12,7 @@
>  #include <linux/buffer_head.h>
>  #include <linux/dax.h>
>  #include <linux/writeback.h>
> +#include <linux/list_sort.h>
>  #include <linux/swap.h>
>  #include <linux/bio.h>
>  #include <linux/sched/signal.h>
> @@ -19,6 +20,8 @@
>
>  #include "../internal.h"
>
> +static struct bio_set iomap_ioend_bioset;
> +
>  static struct iomap_page *
>  iomap_page_create(struct inode *inode, struct page *page)
>  {
> @@ -1071,3 +1074,546 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
>         return block_page_mkwrite_return(ret);
>  }
>  EXPORT_SYMBOL_GPL(iomap_page_mkwrite);
> +
> +static void
> +iomap_finish_page_writeback(struct inode *inode, struct bio_vec *bvec,
> +               int error)
> +{
> +       struct iomap_page *iop = to_iomap_page(bvec->bv_page);
> +
> +       if (error) {
> +               SetPageError(bvec->bv_page);
> +               mapping_set_error(inode->i_mapping, -EIO);
> +       }
> +
> +       WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
> +       WARN_ON_ONCE(iop && atomic_read(&iop->write_count) <= 0);
> +
> +       if (!iop || atomic_dec_and_test(&iop->write_count))
> +               end_page_writeback(bvec->bv_page);
> +}
> +
> +/*
> + * We're now finished for good with this ioend structure.  Update the page
> + * state, release holds on bios, and finally free up memory.  Do not use the
> + * ioend after this.
> + */
> +static void
> +iomap_finish_ioend(struct iomap_ioend *ioend, int error)
> +{
> +       struct inode *inode = ioend->io_inode;
> +       struct bio *bio = &ioend->io_inline_bio;
> +       struct bio *last = ioend->io_bio, *next;
> +       u64 start = bio->bi_iter.bi_sector;
> +       bool quiet = bio_flagged(bio, BIO_QUIET);
> +
> +       for (bio = &ioend->io_inline_bio; bio; bio = next) {
> +               struct bio_vec  *bvec;
> +               struct bvec_iter_all iter_all;
> +
> +               /*
> +                * For the last bio, bi_private points to the ioend, so we
> +                * need to explicitly end the iteration here.
> +                */
> +               if (bio == last)
> +                       next = NULL;
> +               else
> +                       next = bio->bi_private;
> +
> +               /* walk each page on bio, ending page IO on them */
> +               bio_for_each_segment_all(bvec, bio, iter_all)
> +                       iomap_finish_page_writeback(inode, bvec, error);
> +               bio_put(bio);
> +       }
> +
> +       if (unlikely(error && !quiet)) {
> +               printk_ratelimited(KERN_ERR
> +                       "%s: writeback error on sector %llu",
> +                       inode->i_sb->s_id, start);
> +       }
> +}
> +
> +void
> +iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> +{
> +       struct list_head tmp;
> +
> +       list_replace_init(&ioend->io_list, &tmp);
> +       iomap_finish_ioend(ioend, error);
> +       while ((ioend = list_pop_entry(&tmp, struct iomap_ioend, io_list)))
> +               iomap_finish_ioend(ioend, error);
> +}
> +EXPORT_SYMBOL_GPL(iomap_finish_ioends);
> +
> +/*
> + * We can merge two adjacent ioends if they have the same set of work to do.
> + */
> +static bool
> +iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next)
> +{
> +       if (ioend->io_bio->bi_status != next->io_bio->bi_status)
> +               return false;
> +       if ((ioend->io_flags & IOMAP_F_SHARED) ^
> +           (next->io_flags & IOMAP_F_SHARED))
> +               return false;
> +       if ((ioend->io_type == IOMAP_UNWRITTEN) ^
> +           (next->io_type == IOMAP_UNWRITTEN))
> +               return false;
> +       if (ioend->io_offset + ioend->io_size != next->io_offset)
> +               return false;
> +       return true;
> +}
> +
> +void
> +iomap_ioend_try_merge(struct iomap_ioend *ioend, struct list_head *more_ioends,
> +               void (*merge_private)(struct iomap_ioend *ioend,
> +                               struct iomap_ioend *next))
> +{
> +       struct iomap_ioend *next;
> +
> +       INIT_LIST_HEAD(&ioend->io_list);
> +
> +       while ((next = list_first_entry_or_null(more_ioends, struct iomap_ioend,
> +                       io_list))) {
> +               if (!iomap_ioend_can_merge(ioend, next))
> +                       break;
> +               list_move_tail(&next->io_list, &ioend->io_list);
> +               ioend->io_size += next->io_size;
> +               if (next->io_private && merge_private)
> +                       merge_private(ioend, next);
> +       }
> +}
> +EXPORT_SYMBOL_GPL(iomap_ioend_try_merge);
> +
> +static int
> +iomap_ioend_compare(void *priv, struct list_head *a, struct list_head *b)
> +{
> +       struct iomap_ioend *ia, *ib;
> +
> +       ia = container_of(a, struct iomap_ioend, io_list);
> +       ib = container_of(b, struct iomap_ioend, io_list);
> +       if (ia->io_offset < ib->io_offset)
> +               return -1;
> +       else if (ia->io_offset > ib->io_offset)
> +               return 1;
> +       return 0;
> +}
> +
> +void
> +iomap_sort_ioends(struct list_head *ioend_list)
> +{
> +       list_sort(NULL, ioend_list, iomap_ioend_compare);
> +}
> +EXPORT_SYMBOL_GPL(iomap_sort_ioends);
> +
> +static void iomap_writepage_end_bio(struct bio *bio)
> +{
> +       struct iomap_ioend *ioend = bio->bi_private;
> +
> +       iomap_finish_ioend(ioend, blk_status_to_errno(bio->bi_status));
> +}
> +
> +/*
> + * Submit the bio for an ioend. We are passed an ioend with a bio attached to
> + * it, and we submit that bio. The ioend may be used for multiple bio
> + * submissions, so we only want to allocate an append transaction for the ioend
> + * once. In the case of multiple bio submission, each bio will take an IO
> + * reference to the ioend to ensure that the ioend completion is only done once
> + * all bios have been submitted and the ioend is really done.
> + *
> + * If @error is non-zero, it means that we have a situation where some part of
> + * the submission process has failed after we have marked pages for writeback
> + * and unlocked them. In this situation, we need to fail the bio and ioend
> + * rather than submit it to IO. This typically only happens on a filesystem
> + * shutdown.
> + */
> +static int
> +iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
> +               int error)
> +{
> +       ioend->io_bio->bi_private = ioend;
> +       ioend->io_bio->bi_end_io = iomap_writepage_end_bio;
> +
> +       /*
> +        * File systems can perform actions at submit time and/or override
> +        * the end_io handler here for complex operations like copy on write
> +        * extent manipulation or unwritten extent conversions.
> +        */
> +       if (wpc->ops->submit_ioend)
> +               error = wpc->ops->submit_ioend(ioend, error);
> +       if (error) {
> +               /*
> +                * If we are failing the IO now, just mark the ioend with an
> +                * error and finish it.  This will run IO completion immediately
> +                * as there is only one reference to the ioend at this point in
> +                * time.
> +                */
> +               ioend->io_bio->bi_status = errno_to_blk_status(error);
> +               bio_endio(ioend->io_bio);
> +               return error;
> +       }
> +
> +       submit_bio(ioend->io_bio);
> +       return 0;
> +}
> +
> +static struct iomap_ioend *
> +iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
> +               loff_t offset, sector_t sector, struct writeback_control *wbc)
> +{
> +       struct iomap_ioend *ioend;
> +       struct bio *bio;
> +
> +       bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, &iomap_ioend_bioset);
> +       bio_set_dev(bio, wpc->iomap.bdev);
> +       bio->bi_iter.bi_sector = sector;
> +       bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
> +       bio->bi_write_hint = inode->i_write_hint;
> +       wbc_init_bio(wbc, bio);
> +
> +       ioend = container_of(bio, struct iomap_ioend, io_inline_bio);
> +       INIT_LIST_HEAD(&ioend->io_list);
> +       ioend->io_type = wpc->iomap.type;
> +       ioend->io_flags = wpc->iomap.flags;
> +       ioend->io_inode = inode;
> +       ioend->io_size = 0;
> +       ioend->io_offset = offset;
> +       ioend->io_private = NULL;
> +       ioend->io_bio = bio;
> +       return ioend;
> +}
> +
> +/*
> + * Allocate a new bio, and chain the old bio to the new one.
> + *
> + * Note that we have to perform the chaining in this unintuitive order
> + * so that the bi_private linkage is set up in the right direction for the
> + * traversal in iomap_finish_ioend().
> + */
> +static struct bio *
> +iomap_chain_bio(struct bio *prev)
> +{
> +       struct bio *new;
> +
> +       new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
> +       bio_copy_dev(new, prev);/* also copies over blkcg information */
> +       new->bi_iter.bi_sector = bio_end_sector(prev);
> +       new->bi_opf = prev->bi_opf;
> +       new->bi_write_hint = prev->bi_write_hint;
> +
> +       bio_chain(prev, new);
> +       bio_get(prev);          /* for iomap_finish_ioend */
> +       submit_bio(prev);
> +       return new;
> +}
> +
> +/*
> + * Test to see if we have an existing ioend structure that we could append to
> + * first, otherwise finish off the current ioend and start another.
> + */
> +static void
> +iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
> +               struct iomap_page *iop, struct iomap_writepage_ctx *wpc,
> +               struct writeback_control *wbc, struct list_head *iolist)
> +{
> +       sector_t sector = iomap_sector(&wpc->iomap, offset);
> +       unsigned len = i_blocksize(inode);
> +       unsigned poff = offset & (PAGE_SIZE - 1);
> +       bool merged, same_page = false;
> +
> +       if (!wpc->ioend ||
> +           (wpc->iomap.flags & IOMAP_F_SHARED) !=
> +           (wpc->ioend->io_flags & IOMAP_F_SHARED) ||
> +           wpc->iomap.type != wpc->ioend->io_type ||
> +           sector != bio_end_sector(wpc->ioend->io_bio) ||
> +           offset != wpc->ioend->io_offset + wpc->ioend->io_size) {
> +               if (wpc->ioend)
> +                       list_add(&wpc->ioend->io_list, iolist);
> +               wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc);
> +       }
> +
> +       merged = __bio_try_merge_page(wpc->ioend->io_bio, page, len, poff,
> +                       &same_page);
> +
> +       if (iop && !same_page)
> +               atomic_inc(&iop->write_count);
> +       if (!merged) {
> +               if (bio_full(wpc->ioend->io_bio, len)) {
> +                       wpc->ioend->io_bio =
> +                               iomap_chain_bio(wpc->ioend->io_bio);
> +               }
> +               bio_add_page(wpc->ioend->io_bio, page, len, poff);
> +       }
> +
> +       wpc->ioend->io_size += len;
> +       wbc_account_cgroup_owner(wbc, page, len);
> +}
> +
> +/*
> + * We implement an immediate ioend submission policy here to avoid needing to
> + * chain multiple ioends and hence nest mempool allocations which can violate
> + * forward progress guarantees we need to provide. The current ioend we are
> + * adding blocks to is cached on the writepage context, and if the new block
> + * does not append to the cached ioend it will create a new ioend and cache that
> + * instead.
> + *
> + * If a new ioend is created and cached, the old ioend is returned and queued
> + * locally for submission once the entire page is processed or an error has been
> + * detected.  While ioends are submitted immediately after they are completed,
> + * batching optimisations are provided by higher level block plugging.
> + *
> + * At the end of a writeback pass, there will be a cached ioend remaining on the
> + * writepage context that the caller will need to submit.
> + */
> +static int
> +iomap_writepage_map(struct iomap_writepage_ctx *wpc,
> +               struct writeback_control *wbc, struct inode *inode,
> +               struct page *page, u64 end_offset)
> +{
> +       struct iomap_page *iop = to_iomap_page(page);
> +       struct iomap_ioend *ioend, *next;
> +       unsigned len = i_blocksize(inode);
> +       u64 file_offset; /* file offset of page */
> +       int error = 0, count = 0, i;
> +       LIST_HEAD(submit_list);
> +
> +       WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
> +       WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0);

How about this instead?

       if (iop)
               WARN_ON_ONCE(atomic_read(&iop->write_count) != 0);
       else
               WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE);

> [...]

Andreas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-07-30 14:48 ` [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Christoph Hellwig
@ 2019-08-05 12:34   ` Andreas Gruenbacher
  0 siblings, 0 replies; 23+ messages in thread
From: Andreas Gruenbacher @ 2019-08-05 12:34 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Darrick J. Wong, linux-xfs, linux-fsdevel, Damien Le Moal

On Tue, 30 Jul 2019 at 16:48, Christoph Hellwig <hch@infradead.org> wrote:
> I don't really see the point of the split, but the result looks fine
> to me.

The split made it easier for me to debug the gfs2 side by backing out
the xfs changes.

Thanks,
Andreas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c
  2019-08-05 12:31   ` Andreas Gruenbacher
@ 2019-08-06  5:32     ` Christoph Hellwig
  0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2019-08-06  5:32 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Darrick J. Wong, Christoph Hellwig, linux-xfs, linux-fsdevel,
	Damien Le Moal, Christoph Hellwig

On Mon, Aug 05, 2019 at 02:31:24PM +0200, Andreas Gruenbacher wrote:

> > +       WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
> > +       WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0);
> 
> How about this instead?
> 
>        if (iop)
>                WARN_ON_ONCE(atomic_read(&iop->write_count) != 0);
>        else
>                WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE);

For one, I don't really want to change the code in a patch that just
moves it; second, I also wrote it like that because having conditionals
just around asserts seems a little odd.

This also helps with the next step of the evolution, where we'll allocate
the iops on demand and skip them for small block size file systems as
long as a page is only covered by a single extent; we can then remove
the first assert.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c
  2019-08-04 14:59   ` Andreas Gruenbacher
@ 2019-08-06  5:33     ` Christoph Hellwig
  0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2019-08-06  5:33 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Darrick J. Wong, Christoph Hellwig, linux-xfs, linux-fsdevel,
	Damien Le Moal, Christoph Hellwig

Hi Andreas,

any chance you could trim your quote to the relevant parts?

On Sun, Aug 04, 2019 at 04:59:25PM +0200, Andreas Gruenbacher wrote:
> > +       /*
> > +        * File systems can perform actions at submit time and/or override
> > +        * the end_io handler here for complex operations like copy on write
> > +        * extent manipulation or unwritten extent conversions.
> > +        */
> > +       if (wpc->ops->submit_ioend)
> > +               error = wpc->ops->submit_ioend(ioend, error);
> 
> I think we only want to call submit_ioend here if error isn't set already.

No, we need to call it even with an error so that the file system can
override the bi_end_io handler, which bio_endio ends up calling even
for the error case.

Maybe the comment above could use an update to make that more clear.
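
For example (just a sketch, with made-up myfs_* helper names rather than
the actual XFS code), a ->submit_ioend that relies on this behaviour
would look roughly like:

static void myfs_writepage_end_bio(struct bio *bio)
{
	struct iomap_ioend *ioend = bio->bi_private;

	/*
	 * Defer to process context; the worker does the file system
	 * specific work and then calls iomap_finish_ioends(), so page
	 * writeback still gets ended in the error case as well.
	 */
	myfs_queue_ioend_work(ioend);
}

static int myfs_submit_ioend(struct iomap_ioend *ioend, int status)
{
	/*
	 * Install the private completion handler unconditionally: even
	 * when status is already set, iomap_submit_ioend() ends up in
	 * bio_endio(), which still invokes ->bi_end_io.
	 */
	ioend->io_bio->bi_end_io = myfs_writepage_end_bio;
	return status;
}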

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
                   ` (6 preceding siblings ...)
  2019-07-30 14:48 ` [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Christoph Hellwig
@ 2019-08-16  6:52 ` Christoph Hellwig
  2019-08-17  1:46   ` Darrick J. Wong
  7 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-08-16  6:52 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: hch, linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

Darrick,

are you going to queue this up?

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-08-16  6:52 ` Christoph Hellwig
@ 2019-08-17  1:46   ` Darrick J. Wong
  2019-08-17  8:25     ` Andreas Gruenbacher
                       ` (3 more replies)
  0 siblings, 4 replies; 23+ messages in thread
From: Darrick J. Wong @ 2019-08-17  1:46 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

On Thu, Aug 15, 2019 at 11:52:29PM -0700, Christoph Hellwig wrote:
> Darrick,
> 
> are you going to queue this up?

Yes, I'll go promote the iomap writeback branch to iomap-for-next.  I
haven't 100% convinced myself that it's a good idea to hook up xfs to it
yet, if nothing else because of all the other problems I've had getting
5.3 testing to run to completion reliably...

...I assume gfs2 and zonedfs are getting ready to send stuff based on
that branch?

--D

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-08-17  1:46   ` Darrick J. Wong
@ 2019-08-17  8:25     ` Andreas Gruenbacher
  2019-08-17 13:15     ` Damien Le Moal
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 23+ messages in thread
From: Andreas Gruenbacher @ 2019-08-17  8:25 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, linux-xfs, linux-fsdevel, Damien Le Moal

On Sat, Aug 17, 2019 at 3:46 AM Darrick J. Wong <darrick.wong@oracle.com> wrote:
>
> On Thu, Aug 15, 2019 at 11:52:29PM -0700, Christoph Hellwig wrote:
> > Darrick,
> >
> > are you going to queue this up?
>
> Yes, I'll go promote the iomap writeback branch to iomap-for-next.  I
> haven't 100% convinced myself that it's a good idea to hook up xfs to it
> yet, if nothing else because of all the other problems I've had getting
> 5.3 testing to run to completion reliably...
>
> ...I assume gfs2 and zonedfs are getting ready to send stuff based on
> that branch?

It doesn't look like the gfs2 bits are going to be ready for 5.4.

Andreas

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-08-17  1:46   ` Darrick J. Wong
  2019-08-17  8:25     ` Andreas Gruenbacher
@ 2019-08-17 13:15     ` Damien Le Moal
  2019-08-20  7:30     ` Christoph Hellwig
  2019-09-01  7:34     ` Christoph Hellwig
  3 siblings, 0 replies; 23+ messages in thread
From: Damien Le Moal @ 2019-08-17 13:15 UTC (permalink / raw)
  To: Darrick J. Wong, Christoph Hellwig; +Cc: linux-xfs, linux-fsdevel, agruenba

On 2019/08/16 18:47, Darrick J. Wong wrote:
> On Thu, Aug 15, 2019 at 11:52:29PM -0700, Christoph Hellwig wrote:
>> Darrick,
>>
>> are you going to queue this up?
> 
> Yes, I'll go promote the iomap writeback branch to iomap-for-next.  I
> haven't 100% convinced myself that it's a good idea to hook up xfs to it
> yet, if nothing else because of all the other problems I've had getting
> 5.3 testing to run to completion reliably...
> 
> ...I assume gfs2 and zonedfs are getting ready to send stuff based on
> that branch?
> 
> --D
> 

I have zonefs rebased on your branch and good to go. But before posting, I am
finishing a test suite for it as xfstests does not work, for obvious reasons :)
I will post a V2 next week.

As zonefs is not an update but a new fs, it needs to be accepted first. If it
is, it will need your branch.

Best regards.

-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-08-17  1:46   ` Darrick J. Wong
  2019-08-17  8:25     ` Andreas Gruenbacher
  2019-08-17 13:15     ` Damien Le Moal
@ 2019-08-20  7:30     ` Christoph Hellwig
  2019-09-01  7:34     ` Christoph Hellwig
  3 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2019-08-20  7:30 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

On Fri, Aug 16, 2019 at 06:46:33PM -0700, Darrick J. Wong wrote:
> On Thu, Aug 15, 2019 at 11:52:29PM -0700, Christoph Hellwig wrote:
> > Darrick,
> > 
> > are you going to queue this up?
> 
> Yes, I'll go promote the iomap writeback branch to iomap-for-next.  I
> haven't 100% convinced myself that it's a good idea to hook up xfs to it
> yet, if nothing else because of all the other problems I've had getting
> 5.3 testing to run to completion reliably...

Oh well.  I had some XFS/iomap patches I want to dust off that depend
on it, but I guess they'll have to wait then.

We just need to be very careful the versions don't get out of sync,
which would be a major pain in the butt.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-08-17  1:46   ` Darrick J. Wong
                       ` (2 preceding siblings ...)
  2019-08-20  7:30     ` Christoph Hellwig
@ 2019-09-01  7:34     ` Christoph Hellwig
  2019-09-01 20:44       ` Darrick J. Wong
  3 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-09-01  7:34 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

On Fri, Aug 16, 2019 at 06:46:33PM -0700, Darrick J. Wong wrote:
> On Thu, Aug 15, 2019 at 11:52:29PM -0700, Christoph Hellwig wrote:
> > Darrick,
> > 
> > are you going to queue this up?
> 
> Yes, I'll go promote the iomap writeback branch to iomap-for-next.  I
> haven't 100% convinced myself that it's a good idea to hook up xfs to it
> yet, if nothing else because of all the other problems I've had getting
> 5.3 testing to run to completion reliably...

So what is the current status?  We are going to get an -rc8 to give
you some more time, and I'd really hate to miss the second merge window
for the change, especially as things tend to get out of sync, and I
have various bits touching the code planned for the 5.5 merge window.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-09-01  7:34     ` Christoph Hellwig
@ 2019-09-01 20:44       ` Darrick J. Wong
  2019-09-02 17:16         ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Darrick J. Wong @ 2019-09-01 20:44 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

On Sun, Sep 01, 2019 at 12:34:40AM -0700, Christoph Hellwig wrote:
> On Fri, Aug 16, 2019 at 06:46:33PM -0700, Darrick J. Wong wrote:
> > On Thu, Aug 15, 2019 at 11:52:29PM -0700, Christoph Hellwig wrote:
> > > Darrick,
> > > 
> > > are you going to queue this up?
> > 
> > Yes, I'll go promote the iomap writeback branch to iomap-for-next.  I
> > haven't 100% convinced myself that it's a good idea to hook up xfs to it
> > yet, if nothing else because of all the other problems I've had getting
> > 5.3 testing to run to completion reliably...
> 
> So what is the current status?  We are going to get an -rc8 to give
> you some more time, and I'd really hate to miss the second merge window
> for the change, espececially as things tend to get out of sync, and I
> have various bits touching the code planned for the 5.5 merge window.

Heh, I had assumed today was going to be 5.3 final and that would be
that for 5.4.  However, if the 5.4 merge window isn't going to close
until Sept. 29 then there's still time for more soaking.

Would you mind rebasing the remaining patches against iomap-for-next and
sending that out?  I'll try to get to it before I go on vacation 6 - 15
Sept.

Admittedly I think the controversial questions are still "How much
writeback code are we outsourcing to iomap anyway?" and "Do we want to
do the added stress of keeping that going without breaking everyone
else"?  IOWs, more philosophical than just the mechanics of porting code
around.

I still want a CONFIG_IOMAP_DEBUG which turns on stricter checking of
the iomap(s) that ->begin_iomap pass to the actor, if you have the time;
I for one am starting to forget exactly what are the valid combinations
of iomap flag inputs that ->begin_iomap has to handle for a given actor
and what are the valid imaps for each actor that it can pass back.  :)
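
Something along these lines is the level of checking I have in mind
(strawman only, untested, and the hook placement is hypothetical):

#ifdef CONFIG_IOMAP_DEBUG
static void
iomap_check_mapping(loff_t pos, loff_t length, unsigned flags,
		const struct iomap *iomap)
{
	/* the returned mapping must cover the start of the request */
	WARN_ON_ONCE(iomap->offset > pos);
	WARN_ON_ONCE(iomap->length == 0);

	/*
	 * The per-actor rules -- which flag combinations are allowed in
	 * and which iomap types may come back -- would be checked here,
	 * probably table driven per operation.
	 */
}
#else
#define iomap_check_mapping(pos, length, flags, iomap)	do { } while (0)
#endif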

--D

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-09-01 20:44       ` Darrick J. Wong
@ 2019-09-02 17:16         ` Christoph Hellwig
  2019-09-10  7:01           ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-09-02 17:16 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

On Sun, Sep 01, 2019 at 01:44:00PM -0700, Darrick J. Wong wrote:
> Would you mind rebasing the remaining patches against iomap-for-next and
> sending that out?  I'll try to get to it before I go on vacation 6 - 15
> Sept.

Ok.  Testing right now, but the rebase was trivial.

> Admittedly I think the controversial questions are still "How much
> writeback code are we outsourcing to iomap anyway?" and "Do we want to
> do the added stress of keeping that going without breaking everyone
> else"?  IOWs, more philosophical than just the mechanics of porting code
> around.

At least as far as I'm concerned the more code that is common the
better so that I don't have to fix up 4 badly maintained half-assed
forks of the same code (hello mpage, ext4 and f2fs..).

> I still want a CONFIG_IOMAP_DEBUG which turns on stricter checking of
> the iomap(s) that ->begin_iomap pass to the actor, if you have the time;
> I for one am starting to forget exactly what are the valid combinations
> of iomap flag inputs that ->begin_iomap has to handle for a given actor
> and what are the valid imaps for each actor that it can pass back.  :)

I was actually thinking of killing the input IOMAP_ types entirely,
as that "flags" model just doesn't scale, and instead having more
iomap_ops instances in the callers.  But that is just a vague idea
at the moment.  I'll need some more time to think about it.
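
Very roughly (hand-waving, with invented function names), the idea is
that the callers would end up with per-operation ops instances instead
of one iomap_begin switching on the input flags:

const struct iomap_ops xfs_read_iomap_ops = {
	.iomap_begin	= xfs_read_iomap_begin,
};

const struct iomap_ops xfs_buffered_write_iomap_ops = {
	.iomap_begin	= xfs_buffered_write_iomap_begin,
	.iomap_end	= xfs_buffered_write_iomap_end,
};

so e.g. the read path would pass the read ops and never have to look at
IOMAP_WRITE/IOMAP_ZERO style input flags at all.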

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-09-02 17:16         ` Christoph Hellwig
@ 2019-09-10  7:01           ` Christoph Hellwig
  2019-09-10 21:30             ` Dave Chinner
  0 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2019-09-10  7:01 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Christoph Hellwig, linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

On Mon, Sep 02, 2019 at 10:16:37AM -0700, Christoph Hellwig wrote:
> On Sun, Sep 01, 2019 at 01:44:00PM -0700, Darrick J. Wong wrote:
> > Would you mind rebasing the remaining patches against iomap-for-next and
> > sending that out?  I'll try to get to it before I go on vacation 6 - 15
> > Sept.
> 
> Ok.  Testing right now, but the rebase was trivial.
> 
> > Admittedly I think the controversial questions are still "How much
> > writeback code are we outsourcing to iomap anyway?" and "Do we want to
> > do the added stress of keeping that going without breaking everyone
> > else"?  IOWs, more philosophical than just the mechanics of porting code
> > around.
> 
> At least as far as I'm concerned the more code that is common the
> better so that I don't have to fix up 4 badly maintained half-assed
> forks of the same code (hello mpage, ext4 and f2fs..).

Any news?

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap
  2019-09-10  7:01           ` Christoph Hellwig
@ 2019-09-10 21:30             ` Dave Chinner
  0 siblings, 0 replies; 23+ messages in thread
From: Dave Chinner @ 2019-09-10 21:30 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Darrick J. Wong, linux-xfs, linux-fsdevel, Damien.LeMoal, agruenba

On Tue, Sep 10, 2019 at 12:01:24AM -0700, Christoph Hellwig wrote:
> On Mon, Sep 02, 2019 at 10:16:37AM -0700, Christoph Hellwig wrote:
> > On Sun, Sep 01, 2019 at 01:44:00PM -0700, Darrick J. Wong wrote:
> > > Would you mind rebasing the remaining patches against iomap-for-next and
> > > sending that out?  I'll try to get to it before I go on vacation 6 - 15
> > > Sept.
> > 
> > Ok.  Testing right now, but the rebase was trivial.
> > 
> > > Admittedly I think the controversial questions are still "How much
> > > writeback code are we outsourcing to iomap anyway?" and "Do we want to
> > > do the added stress of keeping that going without breaking everyone
> > > else"?  IOWs, more philosophical than just the mechanics of porting code
> > > around.
> > 
> > At least as far as I'm concerned the more code that is common the
> > better so that I don't have to fix up 4 badly maintained half-assed
> > forks of the same code (hello mpage, ext4 and f2fs..).
> 
> Any news?

Darrick is still on holidays. The iomap-for-next branch has this
series in it already(*), but I suspect at this point the XFS
conversion is going to have to wait until the next cycle.

Cheers,

Dave.

(*) https://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git/log/?h=iomap-for-next

-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2019-09-10 21:30 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-30  1:17 [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Darrick J. Wong
2019-07-30  1:17 ` [PATCH 1/6] list.h: add list_pop and list_pop_entry helpers Darrick J. Wong
2019-07-30  1:17 ` [PATCH 2/6] iomap: copy the xfs writeback code to iomap.c Darrick J. Wong
2019-08-04 14:59   ` Andreas Gruenbacher
2019-08-06  5:33     ` Christoph Hellwig
2019-08-05 12:31   ` Andreas Gruenbacher
2019-08-06  5:32     ` Christoph Hellwig
2019-07-30  1:17 ` [PATCH 3/6] iomap: add tracing for the address space operations Darrick J. Wong
2019-07-30  1:18 ` [PATCH 4/6] iomap: warn on inline maps in iomap_writepage_map Darrick J. Wong
2019-07-30  1:18 ` [PATCH 5/6] xfs: set IOMAP_F_NEW more carefully Darrick J. Wong
2019-07-30  1:18 ` [PATCH 6/6] iomap: zero newly allocated mapped blocks Darrick J. Wong
2019-07-30 14:48 ` [PATCH v4 0/6] iomap: lift the xfs writepage code into iomap Christoph Hellwig
2019-08-05 12:34   ` Andreas Gruenbacher
2019-08-16  6:52 ` Christoph Hellwig
2019-08-17  1:46   ` Darrick J. Wong
2019-08-17  8:25     ` Andreas Gruenbacher
2019-08-17 13:15     ` Damien Le Moal
2019-08-20  7:30     ` Christoph Hellwig
2019-09-01  7:34     ` Christoph Hellwig
2019-09-01 20:44       ` Darrick J. Wong
2019-09-02 17:16         ` Christoph Hellwig
2019-09-10  7:01           ` Christoph Hellwig
2019-09-10 21:30             ` Dave Chinner

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).