* [RFC PATCH 0/8] btrfs iomap support
@ 2017-11-17 17:44 Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write Goldwyn Rodrigues
                   ` (10 more replies)
  0 siblings, 11 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs

This patch series attempts to use the kernel's iomap infrastructure
for btrfs. Currently it covers buffered writes only, but I intend to
add other iomap uses once this gets through. I am sending this as an
RFC because I would like to find ways to improve the solution: some
of the changes add new functions to the iomap infrastructure, which I
would like to avoid. I still have to iron out some kinks as well,
such as -o compress. I have posted some questions in the individual
patches and would appreciate input on those.

Some of the problems I faced are:

1. Extent locking: while we perform extent locking for writes, any
reads needed for non-page-aligned writes have to happen before the
extents are locked. This requires reading the page, taking an extra
reference on it and then "letting it go" (unlocking it). The iomap
infrastructure uses buffer_heads whereas btrfs uses bios, so btrfs has
to call readpage itself. The "letting it go" part makes me somewhat
nervous about conflicting reads/writes, even though we are protected
by i_rwsem. Would a readpage_nolock() be a good idea? The extent
locking sequence is also a bit awkward, with locking and unlocking
happening in different functions. (A sketch of the read-and-release
pattern is included after this list.)

2. btrfs data pages use PagePrivate to store EXTENT_PAGE_PRIVATE,
which is only set and never read. However, the PagePrivate flag itself
is needed for try_to_release_buffers(). Can we do away with
PagePrivate for data pages? The same question applies to PageChecked:
how and why is it used (I guess for -o compress)?

3. I had to stash the information that needs to be carried from
iomap_begin() to iomap_end() in a btrfs_iomap structure, reached via a
pointer in btrfs_inode. Is there any other place/way we can pass this
information along? XFS only performs allocations and deallocations
there, so it simply relies on its bmap code for it.
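
For (1), here is a rough sketch of the read-and-release pattern,
modelled on the prepare_uptodate_page() helper added in patch 6. The
name read_boundary_page() and the error handling are simplified for
illustration; this is not the exact code:

/*
 * Read a non-page-aligned boundary page before the extent range is
 * locked, then unlock the page but keep a reference so it stays in
 * the pagecache; the reference is dropped again in iomap_end().
 */
static int read_boundary_page(struct inode *inode, u64 pos,
			      struct page **pagep)
{
	struct page *page;
	int ret = 0;

	page = grab_cache_page_write_begin(inode->i_mapping,
					   pos >> PAGE_SHIFT, AOP_FLAG_NOFS);
	if (!page)
		return -ENOMEM;

	if (!PageUptodate(page)) {
		/* ->readpage drops the page lock when the read finishes */
		ret = btrfs_readpage(NULL, page);
		if (ret)
			goto out;
		lock_page(page);
		if (!PageUptodate(page) ||
		    page->mapping != inode->i_mapping)
			ret = -EIO;
	}
	unlock_page(page);	/* "let it go", but keep the reference */
out:
	*pagep = page;
	return ret;
}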

Suggestions/Criticism welcome.

-- 
Goldwyn




* [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2018-04-10 16:19   ` David Sterba
  2018-05-22  6:40   ` Misono Tomohiro
  2017-11-17 17:44 ` [RFC PATCH 2/8] fs: Add inode_extend_page() Goldwyn Rodrigues
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

Preparatory patch. It reduces the arguments to __btrfs_buffered_write()
so that it follows the buffered_write() style of taking a kiocb and an
iov_iter.

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>

---
 fs/btrfs/file.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index aafcc785f840..9bceb0e61361 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1572,10 +1572,11 @@ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 	return ret;
 }
 
-static noinline ssize_t __btrfs_buffered_write(struct file *file,
-					       struct iov_iter *i,
-					       loff_t pos)
+static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
+					       struct iov_iter *i)
 {
+	struct file *file = iocb->ki_filp;
+	loff_t pos = iocb->ki_pos;
 	struct inode *inode = file_inode(file);
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 	struct btrfs_root *root = BTRFS_I(inode)->root;
@@ -1815,7 +1816,6 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct file *file = iocb->ki_filp;
 	struct inode *inode = file_inode(file);
-	loff_t pos = iocb->ki_pos;
 	ssize_t written;
 	ssize_t written_buffered;
 	loff_t endbyte;
@@ -1826,8 +1826,8 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
 	if (written < 0 || !iov_iter_count(from))
 		return written;
 
-	pos += written;
-	written_buffered = __btrfs_buffered_write(file, from, pos);
+	iocb->ki_pos += written;
+	written_buffered = __btrfs_buffered_write(iocb, from);
 	if (written_buffered < 0) {
 		err = written_buffered;
 		goto out;
@@ -1836,16 +1836,16 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
 	 * Ensure all data is persisted. We want the next direct IO read to be
 	 * able to read what was just written.
 	 */
-	endbyte = pos + written_buffered - 1;
-	err = btrfs_fdatawrite_range(inode, pos, endbyte);
+	endbyte = iocb->ki_pos + written_buffered - 1;
+	err = btrfs_fdatawrite_range(inode, iocb->ki_pos, endbyte);
 	if (err)
 		goto out;
-	err = filemap_fdatawait_range(inode->i_mapping, pos, endbyte);
+	err = filemap_fdatawait_range(inode->i_mapping, iocb->ki_pos, endbyte);
 	if (err)
 		goto out;
+	iocb->ki_pos += written_buffered;
 	written += written_buffered;
-	iocb->ki_pos = pos + written_buffered;
-	invalidate_mapping_pages(file->f_mapping, pos >> PAGE_SHIFT,
+	invalidate_mapping_pages(file->f_mapping, iocb->ki_pos >> PAGE_SHIFT,
 				 endbyte >> PAGE_SHIFT);
 out:
 	return written ? written : err;
@@ -1964,7 +1964,7 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
 	if (iocb->ki_flags & IOCB_DIRECT) {
 		num_written = __btrfs_direct_write(iocb, from);
 	} else {
-		num_written = __btrfs_buffered_write(file, from, pos);
+		num_written = __btrfs_buffered_write(iocb, from);
 		if (num_written > 0)
 			iocb->ki_pos = pos + num_written;
 		if (clean_page)
-- 
2.14.2



* [RFC PATCH 2/8] fs: Add inode_extend_page()
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 3/8] fs: Introduce IOMAP_F_NOBH Goldwyn Rodrigues
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

This splits generic_write_end() so that everything after
block_write_end() moves into a new helper, inode_extend_page().

inode_extend_page() takes care of increasing i_size (if required) and
extending the pagecache.

The split is done so that callers which do not use buffer_heads can
finish file I/O without going through block_write_end().
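
As an illustration, a write_end path that still uses buffer_heads
keeps its current behaviour, while one that does not can call the new
helper directly. example_write_end() below is a hypothetical caller,
not part of this patch:

/*
 * buffer_head based path: behaviour is unchanged, generic_write_end()
 * is now just block_write_end() + inode_extend_page().
 */
static int example_write_end(struct file *file, struct address_space *mapping,
			     loff_t pos, unsigned len, unsigned copied,
			     struct page *page, void *fsdata)
{
	copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
	/* grow i_size if needed, extend the pagecache, release the page */
	return inode_extend_page(mapping->host, pos, copied, page);
}

A path that tracks dirty state without buffer_heads (as the later
iomap changes for btrfs do) skips block_write_end() and calls
inode_extend_page() on its own once the data has been copied into the
page.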

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/buffer.c                 | 20 +++++++++++++-------
 include/linux/buffer_head.h |  1 +
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 170df856bdb9..266daa85b80e 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2180,16 +2180,11 @@ int block_write_end(struct file *file, struct address_space *mapping,
 }
 EXPORT_SYMBOL(block_write_end);
 
-int generic_write_end(struct file *file, struct address_space *mapping,
-			loff_t pos, unsigned len, unsigned copied,
-			struct page *page, void *fsdata)
+int inode_extend_page(struct inode *inode, loff_t pos,
+		unsigned copied, struct page *page)
 {
-	struct inode *inode = mapping->host;
 	loff_t old_size = inode->i_size;
 	int i_size_changed = 0;
-
-	copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
-
 	/*
 	 * No need to use i_size_read() here, the i_size
 	 * cannot change under us because we hold i_mutex.
@@ -2218,6 +2213,17 @@ int generic_write_end(struct file *file, struct address_space *mapping,
 
 	return copied;
 }
+EXPORT_SYMBOL(inode_extend_page);
+
+int generic_write_end(struct file *file, struct address_space *mapping,
+			loff_t pos, unsigned len, unsigned copied,
+			struct page *page, void *fsdata)
+{
+	struct inode *inode = mapping->host;
+	copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
+	return inode_extend_page(inode, pos, copied, page);
+
+}
 EXPORT_SYMBOL(generic_write_end);
 
 /*
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index afa37f807f12..16cf994be178 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -229,6 +229,7 @@ int __block_write_begin(struct page *page, loff_t pos, unsigned len,
 int block_write_end(struct file *, struct address_space *,
 				loff_t, unsigned, unsigned,
 				struct page *, void *);
+int inode_extend_page(struct inode *, loff_t, unsigned, struct page*);
 int generic_write_end(struct file *, struct address_space *,
 				loff_t, unsigned, unsigned,
 				struct page *, void *);
-- 
2.14.2



* [RFC PATCH 3/8] fs: Introduce IOMAP_F_NOBH
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 2/8] fs: Add inode_extend_page() Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 4/8] btrfs: Introduce btrfs_iomap Goldwyn Rodrigues
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

IOMAP_F_NOBH tells the iomap functions not to use or attach buffer
heads to the page. Page flushing and writeback then become the
responsibility of the filesystem (such as btrfs), which uses bios to
perform them.
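
For example, a filesystem that tracks dirty state itself and writes
back with bios could opt out of buffer heads from its ->iomap_begin().
This is a sketch under those assumptions (example_iomap_begin() is a
made-up name; btrfs does essentially this later in the series):

static int example_iomap_begin(struct inode *inode, loff_t pos,
			       loff_t length, unsigned flags,
			       struct iomap *iomap)
{
	/* ... reserve space, lock extents, etc. ... */

	iomap->type = IOMAP_DELALLOC;
	/*
	 * With IOMAP_F_NOBH, iomap_write_begin()/iomap_write_end() skip
	 * __block_write_begin_int() and generic_write_end() and leave
	 * dirty tracking and writeback entirely to the filesystem.
	 */
	iomap->flags = IOMAP_F_NEW | IOMAP_F_NOBH;
	iomap->offset = pos;
	iomap->length = length;
	iomap->blkno = IOMAP_NULL_BLOCK;
	iomap->bdev = inode->i_sb->s_bdev;
	return 0;
}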

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/iomap.c            | 20 ++++++++++++--------
 include/linux/iomap.h |  1 +
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/fs/iomap.c b/fs/iomap.c
index d4801f8dd4fd..9ec9cc3077b3 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -123,7 +123,8 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 	if (!page)
 		return -ENOMEM;
 
-	status = __block_write_begin_int(page, pos, len, NULL, iomap);
+	if (!(iomap->flags & IOMAP_F_NOBH))
+		status = __block_write_begin_int(page, pos, len, NULL, iomap);
 	if (unlikely(status)) {
 		unlock_page(page);
 		put_page(page);
@@ -138,12 +139,15 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 
 static int
 iomap_write_end(struct inode *inode, loff_t pos, unsigned len,
-		unsigned copied, struct page *page)
+		unsigned copied, struct page *page, struct iomap *iomap)
 {
-	int ret;
+	int ret = len;
 
-	ret = generic_write_end(NULL, inode->i_mapping, pos, len,
-			copied, page, NULL);
+	if (iomap->flags & IOMAP_F_NOBH)
+		ret = inode_extend_page(inode, pos, copied, page);
+	else
+		ret = generic_write_end(NULL, inode->i_mapping, pos, len,
+					copied, page, NULL);
 	if (ret < len)
 		iomap_write_failed(inode, pos, len);
 	return ret;
@@ -198,7 +202,7 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 
 		flush_dcache_page(page);
 
-		status = iomap_write_end(inode, pos, bytes, copied, page);
+		status = iomap_write_end(inode, pos, bytes, copied, page, iomap);
 		if (unlikely(status < 0))
 			break;
 		copied = status;
@@ -292,7 +296,7 @@ iomap_dirty_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 
 		WARN_ON_ONCE(!PageUptodate(page));
 
-		status = iomap_write_end(inode, pos, bytes, bytes, page);
+		status = iomap_write_end(inode, pos, bytes, bytes, page, iomap);
 		if (unlikely(status <= 0)) {
 			if (WARN_ON_ONCE(status == 0))
 				return -EIO;
@@ -344,7 +348,7 @@ static int iomap_zero(struct inode *inode, loff_t pos, unsigned offset,
 	zero_user(page, offset, bytes);
 	mark_page_accessed(page);
 
-	return iomap_write_end(inode, pos, bytes, bytes, page);
+	return iomap_write_end(inode, pos, bytes, bytes, page, iomap);
 }
 
 static int iomap_dax_zero(loff_t pos, unsigned offset, unsigned bytes,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 8a7c6d26b147..61af7b1bd0fc 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -29,6 +29,7 @@ struct vm_fault;
  */
 #define IOMAP_F_MERGED	0x10	/* contains multiple blocks/extents */
 #define IOMAP_F_SHARED	0x20	/* block shared with another file */
+#define IOMAP_F_NOBH	0x40	/* Do not assign buffer heads */
 
 /*
  * Magic value for blkno:
-- 
2.14.2



* [RFC PATCH 4/8] btrfs: Introduce btrfs_iomap
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
                   ` (2 preceding siblings ...)
  2017-11-17 17:44 ` [RFC PATCH 3/8] fs: Introduce IOMAP_F_NOBH Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 5/8] btrfs: use iomap to perform buffered writes Goldwyn Rodrigues
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

Preparatory patch. btrfs_iomap structure carries extent/page
state from iomap_begin() to iomap_end().

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/btrfs/file.c  | 68 ++++++++++++++++++++++++++------------------------------
 fs/btrfs/iomap.h | 21 +++++++++++++++++
 2 files changed, 53 insertions(+), 36 deletions(-)
 create mode 100644 fs/btrfs/iomap.h

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 9bceb0e61361..876c2acc2a71 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -41,6 +41,7 @@
 #include "volumes.h"
 #include "qgroup.h"
 #include "compression.h"
+#include "iomap.h"
 
 static struct kmem_cache *btrfs_inode_defrag_cachep;
 /*
@@ -1580,18 +1581,14 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 	struct inode *inode = file_inode(file);
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 	struct btrfs_root *root = BTRFS_I(inode)->root;
+	struct btrfs_iomap btrfs_iomap = {0};
+	struct btrfs_iomap *bim = &btrfs_iomap;
 	struct page **pages = NULL;
-	struct extent_state *cached_state = NULL;
-	struct extent_changeset *data_reserved = NULL;
 	u64 release_bytes = 0;
-	u64 lockstart;
-	u64 lockend;
 	size_t num_written = 0;
 	int nrptrs;
 	int ret = 0;
-	bool only_release_metadata = false;
 	bool force_page_uptodate = false;
-	bool need_unlock;
 
 	nrptrs = min(DIV_ROUND_UP(iov_iter_count(i), PAGE_SIZE),
 			PAGE_SIZE / (sizeof(struct page *)));
@@ -1609,7 +1606,6 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 					 offset);
 		size_t num_pages = DIV_ROUND_UP(write_bytes + offset,
 						PAGE_SIZE);
-		size_t reserve_bytes;
 		size_t dirty_pages;
 		size_t copied;
 		size_t dirty_sectors;
@@ -1627,11 +1623,11 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 		}
 
 		sector_offset = pos & (fs_info->sectorsize - 1);
-		reserve_bytes = round_up(write_bytes + sector_offset,
+		bim->reserve_bytes = round_up(write_bytes + sector_offset,
 				fs_info->sectorsize);
 
-		extent_changeset_release(data_reserved);
-		ret = btrfs_check_data_free_space(inode, &data_reserved, pos,
+		extent_changeset_release(bim->data_reserved);
+		ret = btrfs_check_data_free_space(inode, &bim->data_reserved, pos,
 						  write_bytes);
 		if (ret < 0) {
 			if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
@@ -1642,14 +1638,14 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 				 * For nodata cow case, no need to reserve
 				 * data space.
 				 */
-				only_release_metadata = true;
+				bim->only_release_metadata = true;
 				/*
 				 * our prealloc extent may be smaller than
 				 * write_bytes, so scale down.
 				 */
 				num_pages = DIV_ROUND_UP(write_bytes + offset,
 							 PAGE_SIZE);
-				reserve_bytes = round_up(write_bytes +
+				bim->reserve_bytes = round_up(write_bytes +
 							 sector_offset,
 							 fs_info->sectorsize);
 			} else {
@@ -1658,19 +1654,19 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 		}
 
 		ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode),
-				reserve_bytes);
+				bim->reserve_bytes);
 		if (ret) {
-			if (!only_release_metadata)
+			if (!bim->only_release_metadata)
 				btrfs_free_reserved_data_space(inode,
-						data_reserved, pos,
+						bim->data_reserved, pos,
 						write_bytes);
 			else
 				btrfs_end_write_no_snapshotting(root);
 			break;
 		}
 
-		release_bytes = reserve_bytes;
-		need_unlock = false;
+		release_bytes = bim->reserve_bytes;
+		bim->extent_locked = 0;
 again:
 		/*
 		 * This is going to setup the pages array with the number of
@@ -1684,20 +1680,20 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 			break;
 
 		ret = lock_and_cleanup_extent_if_need(BTRFS_I(inode), pages,
-				num_pages, pos, write_bytes, &lockstart,
-				&lockend, &cached_state);
+				num_pages, pos, write_bytes, &bim->lockstart,
+				&bim->lockend, &bim->cached_state);
 		if (ret < 0) {
 			if (ret == -EAGAIN)
 				goto again;
 			break;
 		} else if (ret > 0) {
-			need_unlock = true;
+			bim->extent_locked = 1;
 			ret = 0;
 		}
 
 		copied = btrfs_copy_from_user(pos, write_bytes, pages, i);
 
-		num_sectors = BTRFS_BYTES_TO_BLKS(fs_info, reserve_bytes);
+		num_sectors = BTRFS_BYTES_TO_BLKS(fs_info, bim->reserve_bytes);
 		dirty_sectors = round_up(copied + sector_offset,
 					fs_info->sectorsize);
 		dirty_sectors = BTRFS_BYTES_TO_BLKS(fs_info, dirty_sectors);
@@ -1736,7 +1732,7 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 				BTRFS_I(inode)->outstanding_extents++;
 				spin_unlock(&BTRFS_I(inode)->lock);
 			}
-			if (only_release_metadata) {
+			if (bim->only_release_metadata) {
 				btrfs_delalloc_release_metadata(BTRFS_I(inode),
 								release_bytes);
 			} else {
@@ -1746,7 +1742,7 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 						   fs_info->sectorsize) +
 					(dirty_pages << PAGE_SHIFT);
 				btrfs_delalloc_release_space(inode,
-						data_reserved, __pos,
+						bim->data_reserved, __pos,
 						release_bytes);
 			}
 		}
@@ -1757,29 +1753,29 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 		if (copied > 0)
 			ret = btrfs_dirty_pages(inode, pages, dirty_pages,
 						pos, copied, NULL);
-		if (need_unlock)
+		if (bim->extent_locked)
 			unlock_extent_cached(&BTRFS_I(inode)->io_tree,
-					     lockstart, lockend, &cached_state,
-					     GFP_NOFS);
+					     bim->lockstart, bim->lockend,
+					     &bim->cached_state, GFP_NOFS);
 		if (ret) {
 			btrfs_drop_pages(pages, num_pages);
 			break;
 		}
 
 		release_bytes = 0;
-		if (only_release_metadata)
+		if (bim->only_release_metadata)
 			btrfs_end_write_no_snapshotting(root);
 
-		if (only_release_metadata && copied > 0) {
-			lockstart = round_down(pos,
+		if (bim->only_release_metadata && copied > 0) {
+			bim->lockstart = round_down(pos,
 					       fs_info->sectorsize);
-			lockend = round_up(pos + copied,
+			bim->lockend = round_up(pos + copied,
 					   fs_info->sectorsize) - 1;
 
-			set_extent_bit(&BTRFS_I(inode)->io_tree, lockstart,
-				       lockend, EXTENT_NORESERVE, NULL,
+			set_extent_bit(&BTRFS_I(inode)->io_tree, bim->lockstart,
+				       bim->lockend, EXTENT_NORESERVE, NULL,
 				       NULL, GFP_NOFS);
-			only_release_metadata = false;
+			bim->only_release_metadata = false;
 		}
 
 		btrfs_drop_pages(pages, num_pages);
@@ -1797,18 +1793,18 @@ static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
 	kfree(pages);
 
 	if (release_bytes) {
-		if (only_release_metadata) {
+		if (bim->only_release_metadata) {
 			btrfs_end_write_no_snapshotting(root);
 			btrfs_delalloc_release_metadata(BTRFS_I(inode),
 					release_bytes);
 		} else {
-			btrfs_delalloc_release_space(inode, data_reserved,
+			btrfs_delalloc_release_space(inode, bim->data_reserved,
 					round_down(pos, fs_info->sectorsize),
 					release_bytes);
 		}
 	}
 
-	extent_changeset_free(data_reserved);
+	extent_changeset_free(bim->data_reserved);
 	return num_written ? num_written : ret;
 }
 
diff --git a/fs/btrfs/iomap.h b/fs/btrfs/iomap.h
new file mode 100644
index 000000000000..ac34b3412f64
--- /dev/null
+++ b/fs/btrfs/iomap.h
@@ -0,0 +1,21 @@
+
+
+#ifndef __BTRFS_IOMAP_H__
+#define __BTRFS_IOMAP_H__
+
+#include <linux/iomap.h>
+#include "extent_io.h"
+
+struct btrfs_iomap {
+	u64 lockstart;
+	u64 lockend;
+	u64 reserve_bytes;
+	bool only_release_metadata;
+	int extent_locked;
+	struct extent_state *cached_state;
+	struct extent_changeset *data_reserved;
+};
+
+#endif
+
+
-- 
2.14.2



* [RFC PATCH 5/8] btrfs: use iomap to perform buffered writes
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
                   ` (3 preceding siblings ...)
  2017-11-17 17:44 ` [RFC PATCH 4/8] btrfs: Introduce btrfs_iomap Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 6/8] btrfs: read the first/last page of the write Goldwyn Rodrigues
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

This eliminates all of the page handling code from the btrfs buffered
write path: the per-page work is now driven by
iomap_file_buffered_write(), with btrfs supplying iomap_begin() and
iomap_end() hooks for space reservation, extent locking and delalloc
accounting.
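
In a nutshell, the buffered write path after this patch is (a summary
of the diff below):

  btrfs_file_write_iter()
    -> btrfs_buffered_write()
         BTRFS_I(inode)->b_iomap = &bi        (stack-allocated state)
         iomap_file_buffered_write(iocb, from, &btrfs_iomap_ops)
           -> btrfs_file_iomap_begin()   reserve space, lock extents
           -> iomap_write_actor()        copy user data into the pagecache
           -> btrfs_file_iomap_end()     mark delalloc, unlock extents,
                                         release unused reservations
         BTRFS_I(inode)->b_iomap = NULL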

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/btrfs/btrfs_inode.h |   4 +-
 fs/btrfs/file.c        | 488 ++++++++++++++++++-------------------------------
 2 files changed, 185 insertions(+), 307 deletions(-)

diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index eccadb5f62a5..2c2bc5fd5cc9 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -21,7 +21,7 @@
 
 #include <linux/hash.h>
 #include "extent_map.h"
-#include "extent_io.h"
+#include "iomap.h"
 #include "ordered-data.h"
 #include "delayed-inode.h"
 
@@ -207,6 +207,8 @@ struct btrfs_inode {
 	 */
 	struct rw_semaphore dio_sem;
 
+	struct btrfs_iomap *b_iomap;
+
 	struct inode vfs_inode;
 };
 
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 876c2acc2a71..b7390214ef3a 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -405,79 +405,6 @@ int btrfs_run_defrag_inodes(struct btrfs_fs_info *fs_info)
 	return 0;
 }
 
-/* simple helper to fault in pages and copy.  This should go away
- * and be replaced with calls into generic code.
- */
-static noinline int btrfs_copy_from_user(loff_t pos, size_t write_bytes,
-					 struct page **prepared_pages,
-					 struct iov_iter *i)
-{
-	size_t copied = 0;
-	size_t total_copied = 0;
-	int pg = 0;
-	int offset = pos & (PAGE_SIZE - 1);
-
-	while (write_bytes > 0) {
-		size_t count = min_t(size_t,
-				     PAGE_SIZE - offset, write_bytes);
-		struct page *page = prepared_pages[pg];
-		/*
-		 * Copy data from userspace to the current page
-		 */
-		copied = iov_iter_copy_from_user_atomic(page, i, offset, count);
-
-		/* Flush processor's dcache for this page */
-		flush_dcache_page(page);
-
-		/*
-		 * if we get a partial write, we can end up with
-		 * partially up to date pages.  These add
-		 * a lot of complexity, so make sure they don't
-		 * happen by forcing this copy to be retried.
-		 *
-		 * The rest of the btrfs_file_write code will fall
-		 * back to page at a time copies after we return 0.
-		 */
-		if (!PageUptodate(page) && copied < count)
-			copied = 0;
-
-		iov_iter_advance(i, copied);
-		write_bytes -= copied;
-		total_copied += copied;
-
-		/* Return to btrfs_file_write_iter to fault page */
-		if (unlikely(copied == 0))
-			break;
-
-		if (copied < PAGE_SIZE - offset) {
-			offset += copied;
-		} else {
-			pg++;
-			offset = 0;
-		}
-	}
-	return total_copied;
-}
-
-/*
- * unlocks pages after btrfs_file_write is done with them
- */
-static void btrfs_drop_pages(struct page **pages, size_t num_pages)
-{
-	size_t i;
-	for (i = 0; i < num_pages; i++) {
-		/* page checked is some magic around finding pages that
-		 * have been modified without going through btrfs_set_page_dirty
-		 * clear it here. There should be no need to mark the pages
-		 * accessed as prepare_pages should have marked them accessed
-		 * in prepare_pages via find_or_create_page()
-		 */
-		ClearPageChecked(pages[i]);
-		unlock_page(pages[i]);
-		put_page(pages[i]);
-	}
-}
-
 /*
  * after copy_from_user, pages need to be dirtied and we need to make
  * sure holes are created between the current EOF and the start of
@@ -1457,8 +1384,7 @@ static int btrfs_find_new_delalloc_bytes(struct btrfs_inode *inode,
  * the other < 0 number - Something wrong happens
  */
 static noinline int
-lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
-				size_t num_pages, loff_t pos,
+lock_and_cleanup_extent(struct btrfs_inode *inode, loff_t pos,
 				size_t write_bytes,
 				u64 *lockstart, u64 *lockend,
 				struct extent_state **cached_state)
@@ -1466,7 +1392,6 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->vfs_inode.i_sb);
 	u64 start_pos;
 	u64 last_pos;
-	int i;
 	int ret = 0;
 
 	start_pos = round_down(pos, fs_info->sectorsize);
@@ -1488,10 +1413,6 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
 		    ordered->file_offset <= last_pos) {
 			unlock_extent_cached(&inode->io_tree, start_pos,
 					last_pos, cached_state, GFP_NOFS);
-			for (i = 0; i < num_pages; i++) {
-				unlock_page(pages[i]);
-				put_page(pages[i]);
-			}
 			btrfs_start_ordered_extent(&inode->vfs_inode,
 					ordered, 1);
 			btrfs_put_ordered_extent(ordered);
@@ -1517,13 +1438,6 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
 		ret = 1;
 	}
 
-	for (i = 0; i < num_pages; i++) {
-		if (clear_page_dirty_for_io(pages[i]))
-			account_page_redirty(pages[i]);
-		set_page_extent_mapped(pages[i]);
-		WARN_ON(!PageLocked(pages[i]));
-	}
-
 	return ret;
 }
 
@@ -1573,239 +1487,201 @@ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 	return ret;
 }
 
-static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
-					       struct iov_iter *i)
-{
-	struct file *file = iocb->ki_filp;
-	loff_t pos = iocb->ki_pos;
-	struct inode *inode = file_inode(file);
-	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
-	struct btrfs_root *root = BTRFS_I(inode)->root;
-	struct btrfs_iomap btrfs_iomap = {0};
-	struct btrfs_iomap *bim = &btrfs_iomap;
-	struct page **pages = NULL;
-	u64 release_bytes = 0;
-	size_t num_written = 0;
-	int nrptrs;
-	int ret = 0;
-	bool force_page_uptodate = false;
-
-	nrptrs = min(DIV_ROUND_UP(iov_iter_count(i), PAGE_SIZE),
-			PAGE_SIZE / (sizeof(struct page *)));
-	nrptrs = min(nrptrs, current->nr_dirtied_pause - current->nr_dirtied);
-	nrptrs = max(nrptrs, 8);
-	pages = kmalloc_array(nrptrs, sizeof(struct page *), GFP_KERNEL);
-	if (!pages)
-		return -ENOMEM;
-
-	while (iov_iter_count(i) > 0) {
-		size_t offset = pos & (PAGE_SIZE - 1);
-		size_t sector_offset;
-		size_t write_bytes = min(iov_iter_count(i),
-					 nrptrs * (size_t)PAGE_SIZE -
-					 offset);
-		size_t num_pages = DIV_ROUND_UP(write_bytes + offset,
-						PAGE_SIZE);
-		size_t dirty_pages;
-		size_t copied;
-		size_t dirty_sectors;
-		size_t num_sectors;
-
-		WARN_ON(num_pages > nrptrs);
-
-		/*
-		 * Fault pages before locking them in prepare_pages
-		 * to avoid recursive lock
-		 */
-		if (unlikely(iov_iter_fault_in_readable(i, write_bytes))) {
-			ret = -EFAULT;
-			break;
-		}
-
-		sector_offset = pos & (fs_info->sectorsize - 1);
-		bim->reserve_bytes = round_up(write_bytes + sector_offset,
-				fs_info->sectorsize);
-
-		extent_changeset_release(bim->data_reserved);
-		ret = btrfs_check_data_free_space(inode, &bim->data_reserved, pos,
-						  write_bytes);
-		if (ret < 0) {
-			if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
-						      BTRFS_INODE_PREALLOC)) &&
-			    check_can_nocow(BTRFS_I(inode), pos,
-					&write_bytes) > 0) {
-				/*
-				 * For nodata cow case, no need to reserve
-				 * data space.
-				 */
-				bim->only_release_metadata = true;
-				/*
-				 * our prealloc extent may be smaller than
-				 * write_bytes, so scale down.
-				 */
-				num_pages = DIV_ROUND_UP(write_bytes + offset,
-							 PAGE_SIZE);
-				bim->reserve_bytes = round_up(write_bytes +
-							 sector_offset,
-							 fs_info->sectorsize);
-			} else {
-				break;
-			}
-		}
-
-		ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode),
-				bim->reserve_bytes);
-		if (ret) {
-			if (!bim->only_release_metadata)
-				btrfs_free_reserved_data_space(inode,
-						bim->data_reserved, pos,
-						write_bytes);
-			else
-				btrfs_end_write_no_snapshotting(root);
-			break;
-		}
 
-		release_bytes = bim->reserve_bytes;
-		bim->extent_locked = 0;
+int btrfs_file_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
+                                        unsigned flags, struct iomap *iomap)
+{
+        struct btrfs_iomap *bim = BTRFS_I(inode)->b_iomap;
+        struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+        struct btrfs_root *root = BTRFS_I(inode)->root;
+        size_t write_bytes = length;
+        size_t sector_offset = pos & (fs_info->sectorsize - 1);
+        int ret;
+
+        bim->reserve_bytes = round_up(write_bytes + sector_offset,
+                        fs_info->sectorsize);
+        bim->extent_locked = false;
+        iomap->type = IOMAP_DELALLOC;
+        iomap->flags = IOMAP_F_NEW;
+
+	extent_changeset_release(bim->data_reserved);
+        /* Reserve data/quota space */
+        ret = btrfs_check_data_free_space(inode, &bim->data_reserved, pos,
+                        write_bytes);
+        if (ret < 0) {
+                if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+                                                BTRFS_INODE_PREALLOC)) &&
+                                check_can_nocow(BTRFS_I(inode), pos,
+                                        &write_bytes) > 0) {
+                        /*
+                         * For nodata cow case, no need to reserve
+                         * data space.
+                         */
+                        bim->only_release_metadata = true;
+                        /*
+                         * our prealloc extent may be smaller than
+                         * write_bytes, so scale down.
+                         */
+                        bim->reserve_bytes = round_up(write_bytes +
+                                        sector_offset,
+                                        fs_info->sectorsize);
+                        iomap->type = IOMAP_UNWRITTEN;
+                        iomap->flags = 0;
+                } else {
+                        return ret;
+                }
+        }
+        ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), bim->reserve_bytes);
+        if (ret) {
+                if (!bim->only_release_metadata)
+                        btrfs_free_reserved_data_space(inode,
+                                        bim->data_reserved, pos, write_bytes);
+                else
+                        btrfs_end_write_no_snapshotting(root);
+                extent_changeset_free(bim->data_reserved);
+                return ret;
+        }
+
+	bim->extent_locked = 0;
 again:
-		/*
-		 * This is going to setup the pages array with the number of
-		 * pages we want, so we don't really need to worry about the
-		 * contents of pages from loop to loop
-		 */
-		ret = prepare_pages(inode, pages, num_pages,
-				    pos, write_bytes,
-				    force_page_uptodate);
-		if (ret)
-			break;
-
-		ret = lock_and_cleanup_extent_if_need(BTRFS_I(inode), pages,
-				num_pages, pos, write_bytes, &bim->lockstart,
-				&bim->lockend, &bim->cached_state);
-		if (ret < 0) {
-			if (ret == -EAGAIN)
-				goto again;
-			break;
-		} else if (ret > 0) {
-			bim->extent_locked = 1;
-			ret = 0;
-		}
-
-		copied = btrfs_copy_from_user(pos, write_bytes, pages, i);
-
-		num_sectors = BTRFS_BYTES_TO_BLKS(fs_info, bim->reserve_bytes);
-		dirty_sectors = round_up(copied + sector_offset,
-					fs_info->sectorsize);
-		dirty_sectors = BTRFS_BYTES_TO_BLKS(fs_info, dirty_sectors);
+        bim->extent_locked = lock_and_cleanup_extent(BTRFS_I(inode),
+                        pos, write_bytes, &bim->lockstart,
+                        &bim->lockend, &bim->cached_state);
+
+        if (bim->extent_locked < 0) {
+                if (bim->extent_locked == -EAGAIN)
+                        goto again;
+                ret = bim->extent_locked;
+		goto release;
+        }
+
+
+        iomap->length = write_bytes;
+        iomap->offset = pos;
+        iomap->blkno = IOMAP_NULL_BLOCK;
+        iomap->bdev = fs_info->fs_devices->latest_bdev;
+        return 0;
+
+release:
+	if (bim->only_release_metadata) {
+		btrfs_end_write_no_snapshotting(root);
+		btrfs_delalloc_release_metadata(BTRFS_I(inode),
+				bim->reserve_bytes);
+	} else {
+		btrfs_delalloc_release_space(inode, bim->data_reserved,
+				round_down(pos, fs_info->sectorsize),
+				bim->reserve_bytes);
+	}
+	extent_changeset_free(bim->data_reserved);
+	return ret;
+}
 
-		/*
-		 * if we have trouble faulting in the pages, fall
-		 * back to one page at a time
-		 */
-		if (copied < write_bytes)
-			nrptrs = 1;
+int btrfs_file_iomap_end(struct inode *inode, loff_t pos, loff_t length,
+			 ssize_t copied, unsigned flags, struct iomap *iomap)
+{
 
-		if (copied == 0) {
-			force_page_uptodate = true;
-			dirty_sectors = 0;
-			dirty_pages = 0;
-		} else {
-			force_page_uptodate = false;
-			dirty_pages = DIV_ROUND_UP(copied + offset,
-						   PAGE_SIZE);
-		}
+        struct btrfs_iomap *bim = BTRFS_I(inode)->b_iomap;
+        struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+	u64 release_bytes = bim->reserve_bytes;
+	size_t num_sectors = BTRFS_BYTES_TO_BLKS(fs_info, bim->reserve_bytes);
+	size_t sector_offset = pos & (fs_info->sectorsize - 1);
+	size_t offset = pos & (PAGE_SIZE - 1);
+	size_t dirty_sectors = round_up(copied + sector_offset,
+			fs_info->sectorsize);
+	size_t dirty_pages = 0;
+        u64 start_pos = round_down(pos, fs_info->sectorsize);
+        u64 num_bytes = round_up(copied + pos - start_pos,
+                             fs_info->sectorsize);
+        u64 end_of_last_block = start_pos + num_bytes - 1;
+        int ret = 0;
+
+	dirty_sectors = BTRFS_BYTES_TO_BLKS(fs_info, dirty_sectors);
+
+	if (unlikely(copied == 0))
+		dirty_sectors = 0;
+	else
+		dirty_pages = DIV_ROUND_UP(copied + offset,
+				PAGE_SIZE);
 
-		/*
-		 * If we had a short copy we need to release the excess delaloc
-		 * bytes we reserved.  We need to increment outstanding_extents
-		 * because btrfs_delalloc_release_space and
-		 * btrfs_delalloc_release_metadata will decrement it, but
-		 * we still have an outstanding extent for the chunk we actually
-		 * managed to copy.
-		 */
-		if (num_sectors > dirty_sectors) {
-			/* release everything except the sectors we dirtied */
-			release_bytes -= dirty_sectors <<
-						fs_info->sb->s_blocksize_bits;
-			if (copied > 0) {
-				spin_lock(&BTRFS_I(inode)->lock);
-				BTRFS_I(inode)->outstanding_extents++;
-				spin_unlock(&BTRFS_I(inode)->lock);
-			}
-			if (bim->only_release_metadata) {
-				btrfs_delalloc_release_metadata(BTRFS_I(inode),
-								release_bytes);
-			} else {
-				u64 __pos;
-
-				__pos = round_down(pos,
-						   fs_info->sectorsize) +
-					(dirty_pages << PAGE_SHIFT);
-				btrfs_delalloc_release_space(inode,
-						bim->data_reserved, __pos,
-						release_bytes);
-			}
+	/*
+	 * If we had a short copy we need to release the excess delaloc
+	 * bytes we reserved.  We need to increment outstanding_extents
+	 * because btrfs_delalloc_release_space and
+	 * btrfs_delalloc_release_metadata will decrement it, but
+	 * we still have an outstanding extent for the chunk we actually
+	 * managed to copy.
+	 */
+	if (num_sectors > dirty_sectors) {
+		/* release everything except the sectors we dirtied */
+		release_bytes -= dirty_sectors <<
+			fs_info->sb->s_blocksize_bits;
+		if (copied > 0) {
+			spin_lock(&BTRFS_I(inode)->lock);
+			BTRFS_I(inode)->outstanding_extents++;
+			spin_unlock(&BTRFS_I(inode)->lock);
 		}
-
-		release_bytes = round_up(copied + sector_offset,
-					fs_info->sectorsize);
-
-		if (copied > 0)
-			ret = btrfs_dirty_pages(inode, pages, dirty_pages,
-						pos, copied, NULL);
-		if (bim->extent_locked)
-			unlock_extent_cached(&BTRFS_I(inode)->io_tree,
-					     bim->lockstart, bim->lockend,
-					     &bim->cached_state, GFP_NOFS);
-		if (ret) {
-			btrfs_drop_pages(pages, num_pages);
-			break;
+		if (bim->only_release_metadata) {
+			btrfs_delalloc_release_metadata(BTRFS_I(inode),
+					release_bytes);
+		} else {
+			u64 __pos;
+			__pos = round_down(pos,
+					fs_info->sectorsize) +
+				(dirty_pages << PAGE_SHIFT);
+			btrfs_delalloc_release_space(inode,
+					bim->data_reserved, __pos,
+					release_bytes);
 		}
+	}
 
-		release_bytes = 0;
-		if (bim->only_release_metadata)
-			btrfs_end_write_no_snapshotting(root);
+	release_bytes = round_up(copied + sector_offset,
+			fs_info->sectorsize);
 
-		if (bim->only_release_metadata && copied > 0) {
-			bim->lockstart = round_down(pos,
-					       fs_info->sectorsize);
-			bim->lockend = round_up(pos + copied,
-					   fs_info->sectorsize) - 1;
+	if (copied > 0)
+		ret = btrfs_set_extent_delalloc(inode, start_pos,
+					        end_of_last_block,
+						&bim->cached_state, 0);
 
-			set_extent_bit(&BTRFS_I(inode)->io_tree, bim->lockstart,
-				       bim->lockend, EXTENT_NORESERVE, NULL,
-				       NULL, GFP_NOFS);
-			bim->only_release_metadata = false;
-		}
+	if (bim->extent_locked)
+		unlock_extent_cached(&BTRFS_I(inode)->io_tree,
+				bim->lockstart, bim->lockend,
+				&bim->cached_state, GFP_NOFS);
 
-		btrfs_drop_pages(pages, num_pages);
+	if (bim->only_release_metadata)
+		btrfs_end_write_no_snapshotting(BTRFS_I(inode)->root);
 
-		cond_resched();
-
-		balance_dirty_pages_ratelimited(inode->i_mapping);
-		if (dirty_pages < (fs_info->nodesize >> PAGE_SHIFT) + 1)
-			btrfs_btree_balance_dirty(fs_info);
+	if (bim->only_release_metadata && copied > 0) {
+		bim->lockstart = round_down(pos,
+				fs_info->sectorsize);
+		bim->lockend = round_up(pos + copied,
+				fs_info->sectorsize) - 1;
 
-		pos += copied;
-		num_written += copied;
+		set_extent_bit(&BTRFS_I(inode)->io_tree, bim->lockstart,
+				bim->lockend, EXTENT_NORESERVE, NULL,
+				NULL, GFP_NOFS);
+		bim->only_release_metadata = false;
 	}
+        extent_changeset_free(bim->data_reserved);
+	return ret;
+}
 
-	kfree(pages);
-
-	if (release_bytes) {
-		if (bim->only_release_metadata) {
-			btrfs_end_write_no_snapshotting(root);
-			btrfs_delalloc_release_metadata(BTRFS_I(inode),
-					release_bytes);
-		} else {
-			btrfs_delalloc_release_space(inode, bim->data_reserved,
-					round_down(pos, fs_info->sectorsize),
-					release_bytes);
-		}
-	}
+const struct iomap_ops btrfs_iomap_ops = {
+        .iomap_begin            = btrfs_file_iomap_begin,
+        .iomap_end              = btrfs_file_iomap_end,
+};
 
-	extent_changeset_free(bim->data_reserved);
-	return num_written ? num_written : ret;
+static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
+                                               struct iov_iter *from)
+{
+        struct btrfs_iomap bi = {0};
+        struct inode *inode = file_inode(iocb->ki_filp);
+        ssize_t written;
+        BTRFS_I(inode)->b_iomap = &bi;
+        written = iomap_file_buffered_write(iocb, from, &btrfs_iomap_ops);
+        if (written > 0)
+                iocb->ki_pos += written;
+        BTRFS_I(inode)->b_iomap = NULL;
+        return written;
 }
 
 static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
@@ -1823,7 +1699,7 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
 		return written;
 
 	iocb->ki_pos += written;
-	written_buffered = __btrfs_buffered_write(iocb, from);
+	written_buffered = btrfs_buffered_write(iocb, from);
 	if (written_buffered < 0) {
 		err = written_buffered;
 		goto out;
@@ -1960,7 +1836,7 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
 	if (iocb->ki_flags & IOCB_DIRECT) {
 		num_written = __btrfs_direct_write(iocb, from);
 	} else {
-		num_written = __btrfs_buffered_write(iocb, from);
+		num_written = btrfs_buffered_write(iocb, from);
 		if (num_written > 0)
 			iocb->ki_pos = pos + num_written;
 		if (clean_page)
-- 
2.14.2



* [RFC PATCH 6/8] btrfs: read the first/last page of the write
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
                   ` (4 preceding siblings ...)
  2017-11-17 17:44 ` [RFC PATCH 5/8] btrfs: use iomap to perform buffered writes Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 7/8] fs: iomap->prepare_pages() to set directives specific for the page Goldwyn Rodrigues
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

We cannot perform a readpage from iomap_apply() after iomap_begin()
because we already have our extents locked. So we perform the readpage
up front, unlock the page, but keep an extra reference on it, which is
dropped again in iomap_end().
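
As a concrete example of when the boundary reads trigger (illustrative
numbers, assuming 4K pages and sectors): with i_size = 1 MiB, a write
of 10000 bytes at pos = 4100 reads the first page (index 1), because
pos is not page aligned and pos < i_size, and also reads the last page
(index 3), because length > PAGE_SIZE and round_down(pos + length,
PAGE_SIZE) = 12288 is still below i_size. Both pages are unlocked
again right away, but the extra reference is only dropped in
iomap_end().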

Question: how do we deal with an -EAGAIN return from
prepare_uptodate_page()? Under what scenarios would this occur?

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/btrfs/file.c  | 116 ++++++++++++++++++++++---------------------------------
 fs/btrfs/iomap.h |   1 +
 2 files changed, 47 insertions(+), 70 deletions(-)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index b7390214ef3a..b34ec493fe4b 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1252,84 +1252,36 @@ int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
 	return 0;
 }
 
-/*
- * on error we return an unlocked page and the error value
- * on success we return a locked page and 0
- */
-static int prepare_uptodate_page(struct inode *inode,
-				 struct page *page, u64 pos,
-				 bool force_uptodate)
+static int prepare_uptodate_page(struct inode *inode, u64 pos, struct page **pagep)
 {
+	struct page *page = NULL;
 	int ret = 0;
+	int index = pos >> PAGE_SHIFT;
+
+	if (!(pos & (PAGE_SIZE - 1)))
+		goto out;
+
+	page = grab_cache_page_write_begin(inode->i_mapping, index,
+			AOP_FLAG_NOFS);
 
-	if (((pos & (PAGE_SIZE - 1)) || force_uptodate) &&
-	    !PageUptodate(page)) {
+	if (!PageUptodate(page)) {
 		ret = btrfs_readpage(NULL, page);
 		if (ret)
-			return ret;
-		lock_page(page);
+			goto out;
 		if (!PageUptodate(page)) {
-			unlock_page(page);
-			return -EIO;
+			ret = -EIO;
+			goto out;
 		}
 		if (page->mapping != inode->i_mapping) {
-			unlock_page(page);
-			return -EAGAIN;
-		}
-	}
-	return 0;
-}
-
-/*
- * this just gets pages into the page cache and locks them down.
- */
-static noinline int prepare_pages(struct inode *inode, struct page **pages,
-				  size_t num_pages, loff_t pos,
-				  size_t write_bytes, bool force_uptodate)
-{
-	int i;
-	unsigned long index = pos >> PAGE_SHIFT;
-	gfp_t mask = btrfs_alloc_write_mask(inode->i_mapping);
-	int err = 0;
-	int faili;
-
-	for (i = 0; i < num_pages; i++) {
-again:
-		pages[i] = find_or_create_page(inode->i_mapping, index + i,
-					       mask | __GFP_WRITE);
-		if (!pages[i]) {
-			faili = i - 1;
-			err = -ENOMEM;
-			goto fail;
-		}
-
-		if (i == 0)
-			err = prepare_uptodate_page(inode, pages[i], pos,
-						    force_uptodate);
-		if (!err && i == num_pages - 1)
-			err = prepare_uptodate_page(inode, pages[i],
-						    pos + write_bytes, false);
-		if (err) {
-			put_page(pages[i]);
-			if (err == -EAGAIN) {
-				err = 0;
-				goto again;
-			}
-			faili = i - 1;
-			goto fail;
+			ret = -EAGAIN;
+			goto out;
 		}
-		wait_on_page_writeback(pages[i]);
 	}
-
-	return 0;
-fail:
-	while (faili >= 0) {
-		unlock_page(pages[faili]);
-		put_page(pages[faili]);
-		faili--;
-	}
-	return err;
-
+out:
+	if (page)
+		unlock_page(page);
+	*pagep = page;
+	return ret;
 }
 
 static int btrfs_find_new_delalloc_bytes(struct btrfs_inode *inode,
@@ -1502,7 +1454,7 @@ int btrfs_file_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
                         fs_info->sectorsize);
         bim->extent_locked = false;
         iomap->type = IOMAP_DELALLOC;
-        iomap->flags = IOMAP_F_NEW;
+        iomap->flags = IOMAP_F_NEW | IOMAP_F_NOBH;
 
 	extent_changeset_release(bim->data_reserved);
         /* Reserve data/quota space */
@@ -1526,7 +1478,7 @@ int btrfs_file_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
                                         sector_offset,
                                         fs_info->sectorsize);
                         iomap->type = IOMAP_UNWRITTEN;
-                        iomap->flags = 0;
+                        iomap->flags &= ~IOMAP_F_NEW;
                 } else {
                         return ret;
                 }
@@ -1543,6 +1495,20 @@ int btrfs_file_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
         }
 
 	bim->extent_locked = 0;
+
+	if (pos < inode->i_size) {
+		ret = prepare_uptodate_page(inode, pos, &bim->first_page);
+		if (ret)
+			goto release;
+	}
+
+	if ((length > PAGE_SIZE) &&
+			(round_down(length + pos, PAGE_SIZE) < inode->i_size)) {
+		ret = prepare_uptodate_page(inode, pos + length, &bim->last_page);
+		if (ret)
+			goto release;
+	}
+
 again:
         bim->extent_locked = lock_and_cleanup_extent(BTRFS_I(inode),
                         pos, write_bytes, &bim->lockstart,
@@ -1597,6 +1563,16 @@ int btrfs_file_iomap_end(struct inode *inode, loff_t pos, loff_t length,
 
 	dirty_sectors = BTRFS_BYTES_TO_BLKS(fs_info, dirty_sectors);
 
+	if (bim->first_page) {
+		put_page(bim->first_page);
+		bim->first_page = NULL;
+	}
+
+	if (bim->last_page) {
+		put_page(bim->last_page);
+		bim->last_page = NULL;
+	}
+
 	if (unlikely(copied == 0))
 		dirty_sectors = 0;
 	else
diff --git a/fs/btrfs/iomap.h b/fs/btrfs/iomap.h
index ac34b3412f64..f62e3ee6d4de 100644
--- a/fs/btrfs/iomap.h
+++ b/fs/btrfs/iomap.h
@@ -14,6 +14,7 @@ struct btrfs_iomap {
 	int extent_locked;
 	struct extent_state *cached_state;
 	struct extent_changeset *data_reserved;
+	struct page *first_page, *last_page;
 };
 
 #endif
-- 
2.14.2



* [RFC PATCH 7/8] fs: iomap->prepare_pages() to set directives specific for the page
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
                   ` (5 preceding siblings ...)
  2017-11-17 17:44 ` [RFC PATCH 6/8] btrfs: read the first/last page of the write Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 17:44 ` Goldwyn Rodrigues
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

This adds a per-page hook, ->iomap_process_page(), to struct iomap_ops
so that filesystems such as btrfs can set the page state they need for
post-write operations once the write completes.

Can we do away with this? EXTENT_PAGE_PRIVATE is only set and never
read. However, we want PG_Private set on the page via SetPagePrivate()
for try_to_release_buffers(). Can we work around it?

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/btrfs/file.c       |  8 ++++++++
 fs/dax.c              |  2 +-
 fs/internal.h         |  2 +-
 fs/iomap.c            | 23 ++++++++++++++---------
 include/linux/iomap.h |  3 +++
 5 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index b34ec493fe4b..b5cc5c0a0cf5 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1641,9 +1641,17 @@ int btrfs_file_iomap_end(struct inode *inode, loff_t pos, loff_t length,
 	return ret;
 }
 
+static void btrfs_file_process_page(struct inode *inode, struct page *page)
+{
+	SetPagePrivate(page);
+	set_page_private(page, EXTENT_PAGE_PRIVATE);
+	get_page(page);
+}
+
 const struct iomap_ops btrfs_iomap_ops = {
         .iomap_begin            = btrfs_file_iomap_begin,
         .iomap_end              = btrfs_file_iomap_end,
+	.iomap_process_page	= btrfs_file_process_page,
 };
 
 static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
diff --git a/fs/dax.c b/fs/dax.c
index f001d8c72a06..51d07b24b3a1 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -943,7 +943,7 @@ static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos)
 
 static loff_t
 dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
-		struct iomap *iomap)
+		const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct block_device *bdev = iomap->bdev;
 	struct dax_device *dax_dev = iomap->dax_dev;
diff --git a/fs/internal.h b/fs/internal.h
index 48cee21b4f14..bd9d5a37bd23 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -176,7 +176,7 @@ extern long vfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
  * iomap support:
  */
 typedef loff_t (*iomap_actor_t)(struct inode *inode, loff_t pos, loff_t len,
-		void *data, struct iomap *iomap);
+		void *data, const struct iomap_ops *ops, struct iomap *iomap);
 
 loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
 		unsigned flags, const struct iomap_ops *ops, void *data,
diff --git a/fs/iomap.c b/fs/iomap.c
index 9ec9cc3077b3..a32660b1b6c5 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -78,7 +78,7 @@ iomap_apply(struct inode *inode, loff_t pos, loff_t length, unsigned flags,
 	 * we can do the copy-in page by page without having to worry about
 	 * failures exposing transient data.
 	 */
-	written = actor(inode, pos, length, data, &iomap);
+	written = actor(inode, pos, length, data, ops, &iomap);
 
 	/*
 	 * Now the data has been copied, commit the range we've copied.  This
@@ -155,7 +155,7 @@ iomap_write_end(struct inode *inode, loff_t pos, unsigned len,
 
 static loff_t
 iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
-		struct iomap *iomap)
+		const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct iov_iter *i = data;
 	long status = 0;
@@ -195,6 +195,9 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		if (unlikely(status))
 			break;
 
+		if (ops->iomap_process_page)
+			ops->iomap_process_page(inode, page);
+
 		if (mapping_writably_mapped(inode->i_mapping))
 			flush_dcache_page(page);
 
@@ -271,7 +274,7 @@ __iomap_read_page(struct inode *inode, loff_t offset)
 
 static loff_t
 iomap_dirty_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
-		struct iomap *iomap)
+		const struct iomap_ops *ops, struct iomap *iomap)
 {
 	long status = 0;
 	ssize_t written = 0;
@@ -363,7 +366,7 @@ static int iomap_dax_zero(loff_t pos, unsigned offset, unsigned bytes,
 
 static loff_t
 iomap_zero_range_actor(struct inode *inode, loff_t pos, loff_t count,
-		void *data, struct iomap *iomap)
+		void *data, const struct iomap_ops *ops, struct iomap *iomap)
 {
 	bool *did_zero = data;
 	loff_t written = 0;
@@ -432,7 +435,7 @@ EXPORT_SYMBOL_GPL(iomap_truncate_page);
 
 static loff_t
 iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
-		void *data, struct iomap *iomap)
+		void *data, const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct page *page = data;
 	int ret;
@@ -523,7 +526,7 @@ static int iomap_to_fiemap(struct fiemap_extent_info *fi,
 
 static loff_t
 iomap_fiemap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
-		struct iomap *iomap)
+		const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct fiemap_ctx *ctx = data;
 	loff_t ret = length;
@@ -590,7 +593,8 @@ EXPORT_SYMBOL_GPL(iomap_fiemap);
 
 static loff_t
 iomap_seek_hole_actor(struct inode *inode, loff_t offset, loff_t length,
-		      void *data, struct iomap *iomap)
+		      void *data, const struct iomap_ops *ops,
+		      struct iomap *iomap)
 {
 	switch (iomap->type) {
 	case IOMAP_UNWRITTEN:
@@ -636,7 +640,8 @@ EXPORT_SYMBOL_GPL(iomap_seek_hole);
 
 static loff_t
 iomap_seek_data_actor(struct inode *inode, loff_t offset, loff_t length,
-		      void *data, struct iomap *iomap)
+		      void *data, const struct iomap_ops *ops,
+		      struct iomap *iomap)
 {
 	switch (iomap->type) {
 	case IOMAP_HOLE:
@@ -849,7 +854,7 @@ iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
 
 static loff_t
 iomap_dio_actor(struct inode *inode, loff_t pos, loff_t length,
-		void *data, struct iomap *iomap)
+		void *data, const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct iomap_dio *dio = data;
 	unsigned int blkbits = blksize_bits(bdev_logical_block_size(iomap->bdev));
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 61af7b1bd0fc..fbb0194d56d6 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -6,6 +6,7 @@
 
 struct fiemap_extent_info;
 struct inode;
+struct page;
 struct iov_iter;
 struct kiocb;
 struct vm_area_struct;
@@ -73,6 +74,8 @@ struct iomap_ops {
 	 */
 	int (*iomap_end)(struct inode *inode, loff_t pos, loff_t length,
 			ssize_t written, unsigned flags, struct iomap *iomap);
+
+	void (*iomap_process_page)(struct inode *inode, struct page *page);
 };
 
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
-- 
2.14.2



* [RFC PATCH 7/8] fs: iomap->prepare_pages() to set directives specific for the page
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
                   ` (6 preceding siblings ...)
  2017-11-17 17:44 ` [RFC PATCH 7/8] fs: iomap->prepare_pages() to set directives specific for the page Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 8/8] fs: Introduce iomap->dirty_page() Goldwyn Rodrigues
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

This adds a per-page hook, ->iomap_process_page(), to struct iomap_ops
so that filesystems such as btrfs can set the page state they need for
post-write operations once the write completes.

Can we do away with this? EXTENT_PAGE_PRIVATE is only set and never
read. However, we want PG_Private set on the page via SetPagePrivate()
for try_to_release_buffers(). Can we work around it?

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/btrfs/file.c       | 12 ++++++++++--
 fs/dax.c              |  2 +-
 fs/internal.h         |  2 +-
 fs/iomap.c            | 23 ++++++++++++++---------
 include/linux/iomap.h |  3 +++
 5 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index f5f34e199709..1c459c9001b2 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1261,8 +1261,8 @@ static int prepare_uptodate_page(struct inode *inode, u64 pos, struct page **pag
 	if (!(pos & (PAGE_SIZE - 1)))
 		goto out;
 
-	page = find_or_create_page(inode->i_mapping, index,
-			btrfs_alloc_write_mask(inode->i_mapping) | __GFP_WRITE);
+	page = grab_cache_page_write_begin(inode->i_mapping, index,
+			AOP_FLAG_NOFS);
 
 	if (!PageUptodate(page)) {
 		int ret = btrfs_readpage(NULL, page);
@@ -1641,9 +1641,17 @@ int btrfs_file_iomap_end(struct inode *inode, loff_t pos, loff_t length,
 	return ret;
 }
 
+static void btrfs_file_process_page(struct inode *inode, struct page *page)
+{
+	SetPagePrivate(page);
+	set_page_private(page, EXTENT_PAGE_PRIVATE);
+	get_page(page);
+}
+
 const struct iomap_ops btrfs_iomap_ops = {
         .iomap_begin            = btrfs_file_iomap_begin,
         .iomap_end              = btrfs_file_iomap_end,
+	.iomap_process_page	= btrfs_file_process_page,
 };
 
 static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
diff --git a/fs/dax.c b/fs/dax.c
index f001d8c72a06..51d07b24b3a1 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -943,7 +943,7 @@ static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos)
 
 static loff_t
 dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
-		struct iomap *iomap)
+		const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct block_device *bdev = iomap->bdev;
 	struct dax_device *dax_dev = iomap->dax_dev;
diff --git a/fs/internal.h b/fs/internal.h
index 48cee21b4f14..bd9d5a37bd23 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -176,7 +176,7 @@ extern long vfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
  * iomap support:
  */
 typedef loff_t (*iomap_actor_t)(struct inode *inode, loff_t pos, loff_t len,
-		void *data, struct iomap *iomap);
+		void *data, const struct iomap_ops *ops, struct iomap *iomap);
 
 loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
 		unsigned flags, const struct iomap_ops *ops, void *data,
diff --git a/fs/iomap.c b/fs/iomap.c
index 9ec9cc3077b3..a32660b1b6c5 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -78,7 +78,7 @@ iomap_apply(struct inode *inode, loff_t pos, loff_t length, unsigned flags,
 	 * we can do the copy-in page by page without having to worry about
 	 * failures exposing transient data.
 	 */
-	written = actor(inode, pos, length, data, &iomap);
+	written = actor(inode, pos, length, data, ops, &iomap);
 
 	/*
 	 * Now the data has been copied, commit the range we've copied.  This
@@ -155,7 +155,7 @@ iomap_write_end(struct inode *inode, loff_t pos, unsigned len,
 
 static loff_t
 iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
-		struct iomap *iomap)
+		const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct iov_iter *i = data;
 	long status = 0;
@@ -195,6 +195,9 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		if (unlikely(status))
 			break;
 
+		if (ops->iomap_process_page)
+			ops->iomap_process_page(inode, page);
+
 		if (mapping_writably_mapped(inode->i_mapping))
 			flush_dcache_page(page);
 
@@ -271,7 +274,7 @@ __iomap_read_page(struct inode *inode, loff_t offset)
 
 static loff_t
 iomap_dirty_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
-		struct iomap *iomap)
+		const struct iomap_ops *ops, struct iomap *iomap)
 {
 	long status = 0;
 	ssize_t written = 0;
@@ -363,7 +366,7 @@ static int iomap_dax_zero(loff_t pos, unsigned offset, unsigned bytes,
 
 static loff_t
 iomap_zero_range_actor(struct inode *inode, loff_t pos, loff_t count,
-		void *data, struct iomap *iomap)
+		void *data, const struct iomap_ops *ops, struct iomap *iomap)
 {
 	bool *did_zero = data;
 	loff_t written = 0;
@@ -432,7 +435,7 @@ EXPORT_SYMBOL_GPL(iomap_truncate_page);
 
 static loff_t
 iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
-		void *data, struct iomap *iomap)
+		void *data, const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct page *page = data;
 	int ret;
@@ -523,7 +526,7 @@ static int iomap_to_fiemap(struct fiemap_extent_info *fi,
 
 static loff_t
 iomap_fiemap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
-		struct iomap *iomap)
+		const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct fiemap_ctx *ctx = data;
 	loff_t ret = length;
@@ -590,7 +593,8 @@ EXPORT_SYMBOL_GPL(iomap_fiemap);
 
 static loff_t
 iomap_seek_hole_actor(struct inode *inode, loff_t offset, loff_t length,
-		      void *data, struct iomap *iomap)
+		      void *data, const struct iomap_ops *ops,
+		      struct iomap *iomap)
 {
 	switch (iomap->type) {
 	case IOMAP_UNWRITTEN:
@@ -636,7 +640,8 @@ EXPORT_SYMBOL_GPL(iomap_seek_hole);
 
 static loff_t
 iomap_seek_data_actor(struct inode *inode, loff_t offset, loff_t length,
-		      void *data, struct iomap *iomap)
+		      void *data, const struct iomap_ops *ops,
+		      struct iomap *iomap)
 {
 	switch (iomap->type) {
 	case IOMAP_HOLE:
@@ -849,7 +854,7 @@ iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
 
 static loff_t
 iomap_dio_actor(struct inode *inode, loff_t pos, loff_t length,
-		void *data, struct iomap *iomap)
+		void *data, const struct iomap_ops *ops, struct iomap *iomap)
 {
 	struct iomap_dio *dio = data;
 	unsigned int blkbits = blksize_bits(bdev_logical_block_size(iomap->bdev));
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 61af7b1bd0fc..fbb0194d56d6 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -6,6 +6,7 @@
 
 struct fiemap_extent_info;
 struct inode;
+struct page;
 struct iov_iter;
 struct kiocb;
 struct vm_area_struct;
@@ -73,6 +74,8 @@ struct iomap_ops {
 	 */
 	int (*iomap_end)(struct inode *inode, loff_t pos, loff_t length,
 			ssize_t written, unsigned flags, struct iomap *iomap);
+
+	void (*iomap_process_page)(struct inode *inode, struct page *page);
 };
 
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
-- 
2.14.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH 8/8] fs: Introduce iomap->dirty_page()
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
                   ` (7 preceding siblings ...)
  2017-11-17 17:44 ` Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 17:44 ` [RFC PATCH 8/8] iomap: " Goldwyn Rodrigues
  2017-11-17 18:45 ` [RFC PATCH 0/8] btrfs iomap support Nikolay Borisov
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

In dirty_page() we clear PageChecked, though I don't see where it is
ever set. Is it used only for compression?
Can we call __set_page_dirty_nobuffers() instead?
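
If PageChecked really is only needed for -o compress, and btrfs does not
need its own ->set_page_dirty() handling here, the hook could shrink to
a minimal sketch along these lines:

/* minimal variant of the btrfs_file_dirty_page() added below */
static void btrfs_file_dirty_page(struct page *page)
{
	SetPageUptodate(page);
	__set_page_dirty_nobuffers(page);
}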

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/btrfs/file.c       | 8 ++++++++
 fs/iomap.c            | 2 ++
 include/linux/iomap.h | 1 +
 3 files changed, 11 insertions(+)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index b5cc5c0a0cf5..049ed1d8ce1f 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1648,10 +1648,18 @@ static void btrfs_file_process_page(struct inode *inode, struct page *page)
 	get_page(page);
 }
 
+static void btrfs_file_dirty_page(struct page *page)
+{
+	SetPageUptodate(page);
+	ClearPageChecked(page);
+	set_page_dirty(page);
+}
+
 const struct iomap_ops btrfs_iomap_ops = {
         .iomap_begin            = btrfs_file_iomap_begin,
         .iomap_end              = btrfs_file_iomap_end,
 	.iomap_process_page	= btrfs_file_process_page,
+	.iomap_dirty_page	= btrfs_file_dirty_page,
 };
 
 static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
diff --git a/fs/iomap.c b/fs/iomap.c
index a32660b1b6c5..0907790c76c0 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -208,6 +208,8 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		status = iomap_write_end(inode, pos, bytes, copied, page, iomap);
 		if (unlikely(status < 0))
 			break;
+		if (ops->iomap_dirty_page)
+			ops->iomap_dirty_page(page);
 		copied = status;
 
 		cond_resched();
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index fbb0194d56d6..7fbf6889dc54 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -76,6 +76,7 @@ struct iomap_ops {
 			ssize_t written, unsigned flags, struct iomap *iomap);
 
 	void (*iomap_process_page)(struct inode *inode, struct page *page);
+	void (*iomap_dirty_page)(struct page *page);
 };
 
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
-- 
2.14.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC PATCH 8/8] iomap: Introduce iomap->dirty_page()
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
                   ` (8 preceding siblings ...)
  2017-11-17 17:44 ` [RFC PATCH 8/8] fs: Introduce iomap->dirty_page() Goldwyn Rodrigues
@ 2017-11-17 17:44 ` Goldwyn Rodrigues
  2017-11-17 18:45 ` [RFC PATCH 0/8] btrfs iomap support Nikolay Borisov
  10 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 17:44 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Goldwyn Rodrigues

From: Goldwyn Rodrigues <rgoldwyn@suse.com>

In dirty_page() we clear PageChecked, though I don't see where it is
ever set. Is it used only for compression?
Can we call __set_page_dirty_nobuffers() instead?

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/btrfs/file.c       | 8 ++++++++
 fs/iomap.c            | 2 ++
 include/linux/iomap.h | 1 +
 3 files changed, 11 insertions(+)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 1c459c9001b2..ba304e782098 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1648,10 +1648,18 @@ static void btrfs_file_process_page(struct inode *inode, struct page *page)
 	get_page(page);
 }
 
+static void btrfs_file_dirty_page(struct page *page)
+{
+	SetPageUptodate(page);
+	ClearPageChecked(page);
+	set_page_dirty(page);
+}
+
 const struct iomap_ops btrfs_iomap_ops = {
         .iomap_begin            = btrfs_file_iomap_begin,
         .iomap_end              = btrfs_file_iomap_end,
 	.iomap_process_page	= btrfs_file_process_page,
+	.iomap_dirty_page	= btrfs_file_dirty_page,
 };
 
 static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
diff --git a/fs/iomap.c b/fs/iomap.c
index a32660b1b6c5..0907790c76c0 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -208,6 +208,8 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		status = iomap_write_end(inode, pos, bytes, copied, page, iomap);
 		if (unlikely(status < 0))
 			break;
+		if (ops->iomap_dirty_page)
+			ops->iomap_dirty_page(page);
 		copied = status;
 
 		cond_resched();
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index fbb0194d56d6..7fbf6889dc54 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -76,6 +76,7 @@ struct iomap_ops {
 			ssize_t written, unsigned flags, struct iomap *iomap);
 
 	void (*iomap_process_page)(struct inode *inode, struct page *page);
+	void (*iomap_dirty_page)(struct page *page);
 };
 
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
-- 
2.14.2


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 0/8] btrfs iomap support
  2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
                   ` (9 preceding siblings ...)
  2017-11-17 17:44 ` [RFC PATCH 8/8] iomap: " Goldwyn Rodrigues
@ 2017-11-17 18:45 ` Nikolay Borisov
  2017-11-17 23:07   ` Goldwyn Rodrigues
  10 siblings, 1 reply; 16+ messages in thread
From: Nikolay Borisov @ 2017-11-17 18:45 UTC (permalink / raw)
  To: Goldwyn Rodrigues, linux-btrfs



On 17.11.2017 19:44, Goldwyn Rodrigues wrote:
> This patch series attempts to use kernels iomap for btrfs. Currently,
> it covers buffered writes only, but I intend to add some other iomap
> uses once this gets through. I am sending this as an RFC because I
> would like to find ways to improve the solution since some changes
> require adding more functions to the iomap infrastructure which I
> would try to avoid. I still have to remove some kinks as well such
> as -o compress. I have posted some questions in the individual
> patches and would appreciate some input to those.
> 
> Some of the problems I faced is:
> 
> 1. extent locking: While we perform the extent locking for writes,
> we need to perform any reads because of non-page-aligned calls before
> locking can be done. This requires reading the page, increasing their
> pagecount and "letting it go". The iomap infrastructure uses
> buffer_heads wheras btrfs uses bio and hence needs to call readpage
> exclusively. The "letting it go" part makes me somewhat nervous of
> conflicting reads/writes, even though we are protected under i_rwsem.
> Is readpage_nolock() a good idea? The extent locking sequence is a
> bit weird, with locks and unlock happening in different functions.

Is there some inherent requirement in iomap's design that necessitates
the usage of buffer heads? I thought the trend is for buffer_head to
eventually die out. Given that iomap is fairly recent (2-3 years?) I
find it odd it's relying on buffer heads.

> 
> 2. btrfs pages use PagePrivate to store EXTENT_PAGE_PRIVATE which is not used anywhere.
> However, a PagePrivate flag is used for try_to_release_buffers(). Can
> we do away with PagePrivate for data pages? The same with PageChecked.
> How and why is it used (I guess -o compress)
> 
> 3. I had to stick information which will be required from iomap_begin()
> to iomap_end() in btrfs_iomap which is a pointer in btrfs_inode. Is
> there any other place/way we can transmit this information. XFS only
> performs allocations and deallocations so it just relies of bmap code
> for it.
> 
> Suggestions/Criticism welcome.
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 0/8] btrfs iomap support
  2017-11-17 18:45 ` [RFC PATCH 0/8] btrfs iomap support Nikolay Borisov
@ 2017-11-17 23:07   ` Goldwyn Rodrigues
  0 siblings, 0 replies; 16+ messages in thread
From: Goldwyn Rodrigues @ 2017-11-17 23:07 UTC (permalink / raw)
  To: Nikolay Borisov, linux-btrfs



On 11/17/2017 12:45 PM, Nikolay Borisov wrote:
> 
> 
> On 17.11.2017 19:44, Goldwyn Rodrigues wrote:
>> This patch series attempts to use kernels iomap for btrfs. Currently,
>> it covers buffered writes only, but I intend to add some other iomap
>> uses once this gets through. I am sending this as an RFC because I
>> would like to find ways to improve the solution since some changes
>> require adding more functions to the iomap infrastructure which I
>> would try to avoid. I still have to remove some kinks as well such
>> as -o compress. I have posted some questions in the individual
>> patches and would appreciate some input to those.
>>
>> Some of the problems I faced is:
>>
>> 1. extent locking: While we perform the extent locking for writes,
>> we need to perform any reads because of non-page-aligned calls before
>> locking can be done. This requires reading the page, increasing their
>> pagecount and "letting it go". The iomap infrastructure uses
>> buffer_heads wheras btrfs uses bio and hence needs to call readpage
>> exclusively. The "letting it go" part makes me somewhat nervous of
>> conflicting reads/writes, even though we are protected under i_rwsem.
>> Is readpage_nolock() a good idea? The extent locking sequence is a
>> bit weird, with locks and unlock happening in different functions.
> 
> Is there some inherent requirement in iomap's design that necessitates
> the usage of buffer heads? I thought the trend is for buffer_head to
> eventually die out. Given that iomap is fairly recent (2-3 years?) I
> find it odd it's relying on buffer heads.
> 

No, there is no inherent reason that I can see, other than legacy. iomap
was carved out of existing filesystems such as xfs, which traditionally
use buffer_heads. In any case, buffer heads perform I/O on individual
pages independently, and iomap calls existing functions that use them.
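
To make that concrete, iomap_write_begin() in the current fs/iomap.c
looks roughly like this (paraphrased from memory, so details may
differ), and the __block_write_begin_int() call is where the buffer
heads come in:

static int
iomap_write_begin(struct inode *inode, loff_t pos, unsigned len,
		unsigned flags, struct page **pagep, struct iomap *iomap)
{
	struct page *page;
	int status;

	page = grab_cache_page_write_begin(inode->i_mapping,
					   pos >> PAGE_SHIFT, flags);
	if (!page)
		return -ENOMEM;

	/* attaches buffer heads and reads in any partial blocks */
	status = __block_write_begin_int(page, pos, len, NULL, iomap);
	if (unlikely(status)) {
		unlock_page(page);
		put_page(page);
		page = NULL;
	}

	*pagep = page;
	return status;
}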

>>
>> 2. btrfs pages use PagePrivate to store EXTENT_PAGE_PRIVATE which is not used anywhere.
>> However, a PagePrivate flag is used for try_to_release_buffers(). Can
>> we do away with PagePrivate for data pages? The same with PageChecked.
>> How and why is it used (I guess -o compress)
>>
>> 3. I had to stick information which will be required from iomap_begin()
>> to iomap_end() in btrfs_iomap which is a pointer in btrfs_inode. Is
>> there any other place/way we can transmit this information. XFS only
>> performs allocations and deallocations so it just relies of bmap code
>> for it.
>>
>> Suggestions/Criticism welcome.
>>

-- 
Goldwyn

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write
  2017-11-17 17:44 ` [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write Goldwyn Rodrigues
@ 2018-04-10 16:19   ` David Sterba
  2018-05-22  6:40   ` Misono Tomohiro
  1 sibling, 0 replies; 16+ messages in thread
From: David Sterba @ 2018-04-10 16:19 UTC (permalink / raw)
  To: Goldwyn Rodrigues; +Cc: linux-btrfs, Goldwyn Rodrigues

On Fri, Nov 17, 2017 at 11:44:47AM -0600, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues <rgoldwyn@suse.com>
> 
> Preparatory patch. It reduces the arguments to __btrfs_buffered_write
> to follow buffered_write() style.
> 
> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>

It was pointed out to me that this patch could be applied independently,
so I'm adding it to the btrfs queue.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write
  2017-11-17 17:44 ` [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write Goldwyn Rodrigues
  2018-04-10 16:19   ` David Sterba
@ 2018-05-22  6:40   ` Misono Tomohiro
  2018-05-22 10:03     ` David Sterba
  1 sibling, 1 reply; 16+ messages in thread
From: Misono Tomohiro @ 2018-05-22  6:40 UTC (permalink / raw)
  To: Goldwyn Rodrigues, linux-btrfs; +Cc: Goldwyn Rodrigues, David Sterba

On 2017/11/18 2:44, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues <rgoldwyn@suse.com>
> 
> Preparatory patch. It reduces the arguments to __btrfs_buffered_write
> to follow buffered_write() style.
> 
> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
> 
> ---
>  fs/btrfs/file.c | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
> index aafcc785f840..9bceb0e61361 100644
> --- a/fs/btrfs/file.c
> +++ b/fs/btrfs/file.c
> @@ -1572,10 +1572,11 @@ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
>  	return ret;
>  }
>  
> -static noinline ssize_t __btrfs_buffered_write(struct file *file,
> -					       struct iov_iter *i,
> -					       loff_t pos)
> +static noinline ssize_t __btrfs_buffered_write(struct kiocb *iocb,
> +					       struct iov_iter *i)
>  {
> +	struct file *file = iocb->ki_filp;
> +	loff_t pos = iocb->ki_pos;
>  	struct inode *inode = file_inode(file);
>  	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
>  	struct btrfs_root *root = BTRFS_I(inode)->root;
> @@ -1815,7 +1816,6 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
>  {
>  	struct file *file = iocb->ki_filp;
>  	struct inode *inode = file_inode(file);
> -	loff_t pos = iocb->ki_pos;
>  	ssize_t written;
>  	ssize_t written_buffered;
>  	loff_t endbyte;
> @@ -1826,8 +1826,8 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
>  	if (written < 0 || !iov_iter_count(from))
>  		return written;
>  
> -	pos += written;
> -	written_buffered = __btrfs_buffered_write(file, from, pos);

> +	iocb->ki_pos += written;

Hi,

I found that btrfs/026 fails on the current misc-next branch and
git bisect points to this commit.

I noticed that generic_file_direct_write() already updates iocb->ki_pos,
so the "iocb->ki_pos += written" above is not needed.

> +	written_buffered = __btrfs_buffered_write(iocb, from);
>  	if (written_buffered < 0) {
>  		err = written_buffered;
>  		goto out;
> @@ -1836,16 +1836,16 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
>  	 * Ensure all data is persisted. We want the next direct IO read to be
>  	 * able to read what was just written.
>  	 */
> -	endbyte = pos + written_buffered - 1;
> -	err = btrfs_fdatawrite_range(inode, pos, endbyte);
> +	endbyte = iocb->ki_pos + written_buffered - 1;
> +	err = btrfs_fdatawrite_range(inode, iocb->ki_pos, endbyte);
>  	if (err)
>  		goto out;
> -	err = filemap_fdatawait_range(inode->i_mapping, pos, endbyte);
> +	err = filemap_fdatawait_range(inode->i_mapping, iocb->ki_pos, endbyte);
>  	if (err)
>  		goto out;
> +	iocb->ki_pos += written_buffered;
>  	written += written_buffered;
> -	iocb->ki_pos = pos + written_buffered;
> -	invalidate_mapping_pages(file->f_mapping, pos >> PAGE_SHIFT,
> +	invalidate_mapping_pages(file->f_mapping, iocb->ki_pos >> PAGE_SHIFT,
>  				 endbyte >> PAGE_SHIFT);

Also, this invalidate_mapping_pages() call should be done before
updating iocb->ki_pos so that it invalidates the buffered-write range.
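
Putting the two points together, the tail of __btrfs_direct_write()
would look roughly like the sketch below (untested, and it assumes a
local "loff_t pos" is kept rather than removed):

	written = generic_file_direct_write(iocb, from);
	if (written < 0 || !iov_iter_count(from))
		return written;

	/* generic_file_direct_write() already advanced iocb->ki_pos */
	pos = iocb->ki_pos;
	written_buffered = __btrfs_buffered_write(iocb, from);
	if (written_buffered < 0) {
		err = written_buffered;
		goto out;
	}
	endbyte = pos + written_buffered - 1;
	err = btrfs_fdatawrite_range(inode, pos, endbyte);
	if (err)
		goto out;
	err = filemap_fdatawait_range(inode->i_mapping, pos, endbyte);
	if (err)
		goto out;
	/* invalidate the buffered range before advancing ki_pos */
	invalidate_mapping_pages(file->f_mapping, pos >> PAGE_SHIFT,
				 endbyte >> PAGE_SHIFT);
	iocb->ki_pos = pos + written_buffered;
	written += written_buffered;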

Thanks,
Tomohiro Misono

>  out:
>  	return written ? written : err;
> @@ -1964,7 +1964,7 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
>  	if (iocb->ki_flags & IOCB_DIRECT) {
>  		num_written = __btrfs_direct_write(iocb, from);
>  	} else {
> -		num_written = __btrfs_buffered_write(file, from, pos);
> +		num_written = __btrfs_buffered_write(iocb, from);
>  		if (num_written > 0)
>  			iocb->ki_pos = pos + num_written;
>  		if (clean_page)
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write
  2018-05-22  6:40   ` Misono Tomohiro
@ 2018-05-22 10:03     ` David Sterba
  0 siblings, 0 replies; 16+ messages in thread
From: David Sterba @ 2018-05-22 10:03 UTC (permalink / raw)
  To: Misono Tomohiro
  Cc: Goldwyn Rodrigues, linux-btrfs, Goldwyn Rodrigues, David Sterba

On Tue, May 22, 2018 at 03:40:18PM +0900, Misono Tomohiro wrote:
> > @@ -1815,7 +1816,6 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
> >  {
> >  	struct file *file = iocb->ki_filp;
> >  	struct inode *inode = file_inode(file);
> > -	loff_t pos = iocb->ki_pos;
> >  	ssize_t written;
> >  	ssize_t written_buffered;
> >  	loff_t endbyte;
> > @@ -1826,8 +1826,8 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
> >  	if (written < 0 || !iov_iter_count(from))
> >  		return written;
> >  
> > -	pos += written;
> > -	written_buffered = __btrfs_buffered_write(file, from, pos);
> 
> > +	iocb->ki_pos += written;
> 
> Hi,
> 
> I found that btrfs/026 fails on the current misc-next branch and
> git bisect points to this commit.

Thanks, that's appreciated.

> I noticed that generic_file_direct_write() already updates iocb->ki_pos,
> so the "iocb->ki_pos += written" above is not needed.
> 
> > +	written_buffered = __btrfs_buffered_write(iocb, from);
> >  	if (written_buffered < 0) {
> >  		err = written_buffered;
> >  		goto out;
> > @@ -1836,16 +1836,16 @@ static ssize_t __btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
> >  	 * Ensure all data is persisted. We want the next direct IO read to be
> >  	 * able to read what was just written.
> >  	 */
> > -	endbyte = pos + written_buffered - 1;
> > -	err = btrfs_fdatawrite_range(inode, pos, endbyte);
> > +	endbyte = iocb->ki_pos + written_buffered - 1;
> > +	err = btrfs_fdatawrite_range(inode, iocb->ki_pos, endbyte);
> >  	if (err)
> >  		goto out;
> > -	err = filemap_fdatawait_range(inode->i_mapping, pos, endbyte);
> > +	err = filemap_fdatawait_range(inode->i_mapping, iocb->ki_pos, endbyte);
> >  	if (err)
> >  		goto out;
> > +	iocb->ki_pos += written_buffered;
> >  	written += written_buffered;
> > -	iocb->ki_pos = pos + written_buffered;
> > -	invalidate_mapping_pages(file->f_mapping, pos >> PAGE_SHIFT,
> > +	invalidate_mapping_pages(file->f_mapping, iocb->ki_pos >> PAGE_SHIFT,
> >  				 endbyte >> PAGE_SHIFT);
> 
> Also, this invalidate_mapping_pages() call should be done before
> updating iocb->ki_pos so that it invalidates the buffered-write range.

It sounds like the patch needs more review. I treated it as more of a
cleanup, so I'll drop it from misc-next and revisit it once the whole
iomap patchset is sent again.

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread

Thread overview: 16+ messages
2017-11-17 17:44 [RFC PATCH 0/8] btrfs iomap support Goldwyn Rodrigues
2017-11-17 17:44 ` [RFC PATCH 1/8] btrfs: use iocb for __btrfs_buffered_write Goldwyn Rodrigues
2018-04-10 16:19   ` David Sterba
2018-05-22  6:40   ` Misono Tomohiro
2018-05-22 10:03     ` David Sterba
2017-11-17 17:44 ` [RFC PATCH 2/8] fs: Add inode_extend_page() Goldwyn Rodrigues
2017-11-17 17:44 ` [RFC PATCH 3/8] fs: Introduce IOMAP_F_NOBH Goldwyn Rodrigues
2017-11-17 17:44 ` [RFC PATCH 4/8] btrfs: Introduce btrfs_iomap Goldwyn Rodrigues
2017-11-17 17:44 ` [RFC PATCH 5/8] btrfs: use iomap to perform buffered writes Goldwyn Rodrigues
2017-11-17 17:44 ` [RFC PATCH 6/8] btrfs: read the first/last page of the write Goldwyn Rodrigues
2017-11-17 17:44 ` [RFC PATCH 7/8] fs: iomap->prepare_pages() to set directives specific for the page Goldwyn Rodrigues
2017-11-17 17:44 ` Goldwyn Rodrigues
2017-11-17 17:44 ` [RFC PATCH 8/8] fs: Introduce iomap->dirty_page() Goldwyn Rodrigues
2017-11-17 17:44 ` [RFC PATCH 8/8] iomap: " Goldwyn Rodrigues
2017-11-17 18:45 ` [RFC PATCH 0/8] btrfs iomap support Nikolay Borisov
2017-11-17 23:07   ` Goldwyn Rodrigues
