* pnfs block layout driver fixes V4
@ 2014-09-10 15:23 Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 1/9] pnfs: force a layout commit when encountering busy segments during recall Christoph Hellwig
                   ` (8 more replies)
  0 siblings, 9 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

This series fixes a wide range of issues with the pnfs block layout driver.

Before these fixes we were seeing frequent silent data corruption, soft lockups
and kernel crashes when running both user applications and test suites like xfstests.

After this rewrite of the I/O path we've sorted out all issues under normal
operation, although error handling in the block layout driver and its
interaction with the core NFS and pNFS code still needs further work.

This work was sponsored by NetApp, Inc.

Changes since V3:
 - fix debug compile in extent_tree.c
 - address a few review nitpicks

Changes since V2:
 - drop patches already merged by Trond
 - fix compile when NFSv4 isn't enabled
 - address a minor style issue pointed out by Anna
 - unconditionally try a layoutcommit first during layout recall processing

Changes since V1:
 - added two more layout stateid handling fixes
 - changed the layoutget path so that the layout driver is responsible for
   freeing the spliced-in payload.



^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/9] pnfs: force a layout commit when encountering busy segments during recall
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 2/9] pnfs: add flag to force read-modify-write in ->write_begin Christoph Hellwig
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

Expedite layout recall processing by forcing a layout commit when
we see busy segments.  Without it the layout recall might have to wait
until the VM decides to start writeback for the file, which can introduce
long delays.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/callback_proc.c | 8 +++++++-
 fs/nfs/pnfs.c          | 3 +++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index 41db525..0d26951 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -171,6 +171,13 @@ static u32 initiate_file_draining(struct nfs_client *clp,
 		goto out;
 
 	ino = lo->plh_inode;
+
+	spin_lock(&ino->i_lock);
+	pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
+	spin_unlock(&ino->i_lock);
+
+	pnfs_layoutcommit_inode(ino, false);
+
 	spin_lock(&ino->i_lock);
 	if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
 	    pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
@@ -178,7 +185,6 @@ static u32 initiate_file_draining(struct nfs_client *clp,
 		rv = NFS4ERR_DELAY;
 	else
 		rv = NFS4ERR_NOMATCHING_LAYOUT;
-	pnfs_set_layout_stateid(lo, &args->cbl_stateid, true);
 	spin_unlock(&ino->i_lock);
 	pnfs_free_lseg_list(&free_me_list);
 	pnfs_put_layout_hdr(lo);
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 8827ab1..ce5d1ea 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -594,6 +594,9 @@ pnfs_layout_free_bulk_destroy_list(struct list_head *layout_list,
 		dprintk("%s freeing layout for inode %lu\n", __func__,
 			lo->plh_inode->i_ino);
 		inode = lo->plh_inode;
+
+		pnfs_layoutcommit_inode(inode, false);
+
 		spin_lock(&inode->i_lock);
 		list_del_init(&lo->plh_bulk_destroy);
 		lo->plh_block_lgets++; /* permanently block new LAYOUTGETs */
-- 
1.9.1



* [PATCH 2/9] pnfs: add flag to force read-modify-write in ->write_begin
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 1/9] pnfs: force a layout commit when encountering busy segments during recall Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 3/9] pnfs: add return_range method Christoph Hellwig
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

Like all block-based filesystems, the pNFS block layout driver can't read
or write at byte granularity and thus has to perform read-modify-write
cycles on writes smaller than its block granularity.

Add a flag so that the core NFS code always reads a whole page when
starting a smaller write; that way we can do the read-modify-write in the
place where the VFS expects it instead of doing it in a very deadlock-prone
way in the writeback handler.

Note that in theory we could do smaller-than-page-size reads here for disks
that have a smaller sector size and are served by a server with a smaller
pNFS block size.  But so far that doesn't seem like a worthwhile
optimization.
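
For illustration, the sector rounding that a partial write would otherwise
force on the driver can be sketched in plain C.  This is a userspace sketch
with hypothetical helper names, not kernel code; the 512-byte sector and 4K
page values match the common case:

```c
#include <assert.h>

#define SECTOR_SIZE	512u
#define PAGE_CACHE_SIZE	4096u

/*
 * Hypothetical helpers: the sector-aligned span that a read-modify-write
 * of @len bytes at page offset @off has to cover when the device can only
 * transfer whole 512-byte sectors.
 */
static unsigned int rmw_start(unsigned int off)
{
	return off & ~(SECTOR_SIZE - 1);	/* round down to sector */
}

static unsigned int rmw_end(unsigned int off, unsigned int len)
{
	/* round up to the next sector boundary */
	return (off + len + SECTOR_SIZE - 1) & ~(SECTOR_SIZE - 1);
}
```

With PNFS_READ_WHOLE_PAGE the page is brought uptodate before the write
starts, so the span handed to the driver is always the full page and these
partial-sector cases never arise in writeback.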

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/file.c |  7 +++++++
 fs/nfs/pnfs.h | 15 +++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 524dd80..05ef7dc 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -36,6 +36,7 @@
 #include "internal.h"
 #include "iostat.h"
 #include "fscache.h"
+#include "pnfs.h"
 
 #include "nfstrace.h"
 
@@ -327,6 +328,12 @@ static int nfs_want_read_modify_write(struct file *file, struct page *page,
 	unsigned int offset = pos & (PAGE_CACHE_SIZE - 1);
 	unsigned int end = offset + len;
 
+	if (pnfs_ld_read_whole_page(file->f_mapping->host)) {
+		if (!PageUptodate(page))
+			return 1;
+		return 0;
+	}
+
 	if ((file->f_mode & FMODE_READ) &&	/* open for read? */
 	    !PageUptodate(page) &&		/* Uptodate? */
 	    !PagePrivate(page) &&		/* i/o request already? */
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index 8835b5a..a4c530e 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -72,6 +72,7 @@ enum layoutdriver_policy_flags {
 	/* Should the pNFS client commit and return the layout upon a setattr */
 	PNFS_LAYOUTRET_ON_SETATTR	= 1 << 0,
 	PNFS_LAYOUTRET_ON_ERROR		= 1 << 1,
+	PNFS_READ_WHOLE_PAGE		= 1 << 2,
 };
 
 struct nfs4_deviceid_node;
@@ -370,6 +371,14 @@ pnfs_ld_layoutret_on_setattr(struct inode *inode)
 }
 
 static inline bool
+pnfs_ld_read_whole_page(struct inode *inode)
+{
+	if (!pnfs_enabled_sb(NFS_SERVER(inode)))
+		return false;
+	return NFS_SERVER(inode)->pnfs_curr_ld->flags & PNFS_READ_WHOLE_PAGE;
+}
+
+static inline bool
 pnfs_layoutcommit_outstanding(struct inode *inode)
 {
 	struct nfs_inode *nfsi = NFS_I(inode);
@@ -445,6 +454,12 @@ pnfs_ld_layoutret_on_setattr(struct inode *inode)
 }
 
 static inline bool
+pnfs_ld_read_whole_page(struct inode *inode)
+{
+	return false;
+}
+
+static inline bool
 pnfs_roc(struct inode *ino)
 {
 	return false;
-- 
1.9.1



* [PATCH 3/9] pnfs: add return_range method
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 1/9] pnfs: force a layout commit when encountering busy segments during recall Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 2/9] pnfs: add flag to force read-modify-write in ->write_begin Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  2014-09-11 13:54   ` Peng Tao
  2014-09-10 15:23 ` [PATCH 4/9] pnfs/blocklayout: remove read-modify-write handling in bl_write_pagelist Christoph Hellwig
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

If a layout driver keeps per-inode state outside of the layout segments,
it needs to be notified of any layout returns or recalls on an inode, not
just of the freeing of layout segments.  Add a method to accomplish this,
which will allow the block layout driver to handle the case of truncated
and re-expanded files properly.
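
The shape of the new hook can be sketched as an optional function pointer in
the driver ops that the core only calls when the driver provides it.  The
structure and names below are illustrative stand-ins, not the actual kernel
types:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for struct pnfs_layout_range. */
struct range { unsigned long long offset, length; };

struct layoutdriver_ops {
	/* optional: NULL for drivers without extra per-inode state */
	void (*return_range)(struct range *range, int *ncalls);
};

/* Core-side caller: invoke the hook only if the driver provides one. */
static void notify_return_range(const struct layoutdriver_ops *ld,
				struct range *range, int *ncalls)
{
	if (ld->return_range)
		ld->return_range(range, ncalls);
}

/* A sample driver hook that just counts invocations. */
static void count_return_range(struct range *range, int *ncalls)
{
	(void)range;
	(*ncalls)++;
}
```

This mirrors how the patch guards both call sites with a check of
pnfs_curr_ld->return_range, so drivers that don't need the notification
simply leave the method unset.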

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/callback_proc.c | 12 +++++++++---
 fs/nfs/pnfs.c          | 10 ++++++++++
 fs/nfs/pnfs.h          |  3 +++
 3 files changed, 22 insertions(+), 3 deletions(-)

diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index 0d26951..56bd0ea 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -181,10 +181,16 @@ static u32 initiate_file_draining(struct nfs_client *clp,
 	spin_lock(&ino->i_lock);
 	if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
 	    pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
-					&args->cbl_range))
+					&args->cbl_range)) {
 		rv = NFS4ERR_DELAY;
-	else
-		rv = NFS4ERR_NOMATCHING_LAYOUT;
+		goto unlock;
+	}
+
+	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
+		NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
+			&args->cbl_range);
+	}
+unlock:
 	spin_unlock(&ino->i_lock);
 	pnfs_free_lseg_list(&free_me_list);
 	pnfs_put_layout_hdr(lo);
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index ce5d1ea..76de7f5 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -857,6 +857,16 @@ _pnfs_return_layout(struct inode *ino)
 	empty = list_empty(&lo->plh_segs);
 	pnfs_clear_layoutcommit(ino, &tmp_list);
 	pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
+
+	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
+		struct pnfs_layout_range range = {
+			.iomode		= IOMODE_ANY,
+			.offset		= 0,
+			.length		= NFS4_MAX_UINT64,
+		};
+		NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range);
+	}
+
 	/* Don't send a LAYOUTRETURN if list was initially empty */
 	if (empty) {
 		spin_unlock(&ino->i_lock);
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index a4c530e..2c24942 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -94,6 +94,9 @@ struct pnfs_layoutdriver_type {
 	struct pnfs_layout_segment * (*alloc_lseg) (struct pnfs_layout_hdr *layoutid, struct nfs4_layoutget_res *lgr, gfp_t gfp_flags);
 	void (*free_lseg) (struct pnfs_layout_segment *lseg);
 
+	void (*return_range) (struct pnfs_layout_hdr *lo,
+			      struct pnfs_layout_range *range);
+
 	/* test for nfs page cache coalescing */
 	const struct nfs_pageio_ops *pg_read_ops;
 	const struct nfs_pageio_ops *pg_write_ops;
-- 
1.9.1



* [PATCH 4/9] pnfs/blocklayout: remove read-modify-write handling in bl_write_pagelist
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
                   ` (2 preceding siblings ...)
  2014-09-10 15:23 ` [PATCH 3/9] pnfs: add return_range method Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 5/9] pnfs/blocklayout: don't set pages uptodate Christoph Hellwig
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

Use the new PNFS_READ_WHOLE_PAGE flag to offload read-modify-write
handling to the core NFS code, and remove a huge chunk of deadlock-prone
mess from the block layout writeback path.
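
The arithmetic the simplified write path relies on, rounding the file offset
down to a page boundary and stepping in whole pages of eight 512-byte
sectors, can be checked in isolation.  A userspace sketch assuming 4K pages
(helper names are invented for illustration):

```c
#include <assert.h>

#define SECTOR_SHIFT		9
#define PAGE_CACHE_SHIFT	12
#define PAGE_CACHE_SIZE		(1ull << PAGE_CACHE_SHIFT)
#define PAGE_CACHE_SECTORS	(PAGE_CACHE_SIZE >> SECTOR_SHIFT)

/* Round a file offset down to the containing page boundary. */
static unsigned long long page_align_down(unsigned long long off)
{
	return off & ~(PAGE_CACHE_SIZE - 1);
}

/* Convert a byte offset to a 512-byte sector index. */
static unsigned long long offset_to_isect(unsigned long long off)
{
	return off >> SECTOR_SHIFT;
}
```

Because buffered pages are always written whole, the per-page loop can
simply advance offset by PAGE_CACHE_SIZE and the sector cursor by
PAGE_CACHE_SECTORS, with no partial-page bookkeeping left over.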

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/blocklayout/blocklayout.c | 500 +++++----------------------------------
 1 file changed, 64 insertions(+), 436 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 87a633d..6484b9f 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -35,7 +35,6 @@
 #include <linux/mount.h>
 #include <linux/namei.h>
 #include <linux/bio.h>		/* struct bio */
-#include <linux/buffer_head.h>	/* various write calls */
 #include <linux/prefetch.h>
 #include <linux/pagevec.h>
 
@@ -188,16 +187,6 @@ retry:
 	return bio;
 }
 
-static struct bio *bl_add_page_to_bio(struct bio *bio, int npg, int rw,
-				      sector_t isect, struct page *page,
-				      struct pnfs_block_extent *be,
-				      void (*end_io)(struct bio *, int err),
-				      struct parallel_io *par)
-{
-	return do_add_page_to_bio(bio, npg, rw, isect, page, be,
-				  end_io, par, 0, PAGE_CACHE_SIZE);
-}
-
 /* This is basically copied from mpage_end_io_read */
 static void bl_end_io_read(struct bio *bio, int err)
 {
@@ -293,8 +282,8 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
 			}
 		}
 
+		pg_offset = f_offset & ~PAGE_CACHE_MASK;
 		if (is_dio) {
-			pg_offset = f_offset & ~PAGE_CACHE_MASK;
 			if (pg_offset + bytes_left > PAGE_CACHE_SIZE)
 				pg_len = PAGE_CACHE_SIZE - pg_offset;
 			else
@@ -305,7 +294,7 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
 			isect += (pg_offset >> SECTOR_SHIFT);
 			extent_length -= (pg_offset >> SECTOR_SHIFT);
 		} else {
-			pg_offset = 0;
+			BUG_ON(pg_offset != 0);
 			pg_len = PAGE_CACHE_SIZE;
 		}
 
@@ -383,29 +372,6 @@ static void mark_extents_written(struct pnfs_block_layout *bl,
 	}
 }
 
-static void bl_end_io_write_zero(struct bio *bio, int err)
-{
-	struct parallel_io *par = bio->bi_private;
-	struct bio_vec *bvec;
-	int i;
-
-	bio_for_each_segment_all(bvec, bio, i) {
-		/* This is the zeroing page we added */
-		end_page_writeback(bvec->bv_page);
-		page_cache_release(bvec->bv_page);
-	}
-
-	if (unlikely(err)) {
-		struct nfs_pgio_header *header = par->data;
-
-		if (!header->pnfs_error)
-			header->pnfs_error = -EIO;
-		pnfs_set_lo_fail(header->lseg);
-	}
-	bio_put(bio);
-	put_parallel(par);
-}
-
 static void bl_end_io_write(struct bio *bio, int err)
 {
 	struct parallel_io *par = bio->bi_private;
@@ -455,256 +421,22 @@ static void bl_end_par_io_write(void *data, int num_se)
 	schedule_work(&hdr->task.u.tk_work);
 }
 
-/* FIXME STUB - mark intersection of layout and page as bad, so is not
- * used again.
- */
-static void mark_bad_read(void)
-{
-	return;
-}
-
-/*
- * map_block:  map a requested I/0 block (isect) into an offset in the LVM
- * block_device
- */
-static void
-map_block(struct buffer_head *bh, sector_t isect, struct pnfs_block_extent *be)
-{
-	dprintk("%s enter be=%p\n", __func__, be);
-
-	set_buffer_mapped(bh);
-	bh->b_bdev = be->be_mdev;
-	bh->b_blocknr = (isect - be->be_f_offset + be->be_v_offset) >>
-	    (be->be_mdev->bd_inode->i_blkbits - SECTOR_SHIFT);
-
-	dprintk("%s isect %llu, bh->b_blocknr %ld, using bsize %Zd\n",
-		__func__, (unsigned long long)isect, (long)bh->b_blocknr,
-		bh->b_size);
-	return;
-}
-
-static void
-bl_read_single_end_io(struct bio *bio, int error)
-{
-	struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
-	struct page *page = bvec->bv_page;
-
-	/* Only one page in bvec */
-	unlock_page(page);
-}
-
-static int
-bl_do_readpage_sync(struct page *page, struct pnfs_block_extent *be,
-		    unsigned int offset, unsigned int len)
-{
-	struct bio *bio;
-	struct page *shadow_page;
-	sector_t isect;
-	char *kaddr, *kshadow_addr;
-	int ret = 0;
-
-	dprintk("%s: offset %u len %u\n", __func__, offset, len);
-
-	shadow_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
-	if (shadow_page == NULL)
-		return -ENOMEM;
-
-	bio = bio_alloc(GFP_NOIO, 1);
-	if (bio == NULL)
-		return -ENOMEM;
-
-	isect = (page->index << PAGE_CACHE_SECTOR_SHIFT) +
-		(offset / SECTOR_SIZE);
-
-	bio->bi_iter.bi_sector = isect - be->be_f_offset + be->be_v_offset;
-	bio->bi_bdev = be->be_mdev;
-	bio->bi_end_io = bl_read_single_end_io;
-
-	lock_page(shadow_page);
-	if (bio_add_page(bio, shadow_page,
-			 SECTOR_SIZE, round_down(offset, SECTOR_SIZE)) == 0) {
-		unlock_page(shadow_page);
-		bio_put(bio);
-		return -EIO;
-	}
-
-	submit_bio(READ, bio);
-	wait_on_page_locked(shadow_page);
-	if (unlikely(!test_bit(BIO_UPTODATE, &bio->bi_flags))) {
-		ret = -EIO;
-	} else {
-		kaddr = kmap_atomic(page);
-		kshadow_addr = kmap_atomic(shadow_page);
-		memcpy(kaddr + offset, kshadow_addr + offset, len);
-		kunmap_atomic(kshadow_addr);
-		kunmap_atomic(kaddr);
-	}
-	__free_page(shadow_page);
-	bio_put(bio);
-
-	return ret;
-}
-
-static int
-bl_read_partial_page_sync(struct page *page, struct pnfs_block_extent *be,
-			  unsigned int dirty_offset, unsigned int dirty_len,
-			  bool full_page)
-{
-	int ret = 0;
-	unsigned int start, end;
-
-	if (full_page) {
-		start = 0;
-		end = PAGE_CACHE_SIZE;
-	} else {
-		start = round_down(dirty_offset, SECTOR_SIZE);
-		end = round_up(dirty_offset + dirty_len, SECTOR_SIZE);
-	}
-
-	dprintk("%s: offset %u len %d\n", __func__, dirty_offset, dirty_len);
-	if (!be) {
-		zero_user_segments(page, start, dirty_offset,
-				   dirty_offset + dirty_len, end);
-		if (start == 0 && end == PAGE_CACHE_SIZE &&
-		    trylock_page(page)) {
-			SetPageUptodate(page);
-			unlock_page(page);
-		}
-		return ret;
-	}
-
-	if (start != dirty_offset)
-		ret = bl_do_readpage_sync(page, be, start, dirty_offset - start);
-
-	if (!ret && (dirty_offset + dirty_len < end))
-		ret = bl_do_readpage_sync(page, be, dirty_offset + dirty_len,
-					  end - dirty_offset - dirty_len);
-
-	return ret;
-}
-
-/* Given an unmapped page, zero it or read in page for COW, page is locked
- * by caller.
- */
-static int
-init_page_for_write(struct page *page, struct pnfs_block_extent *cow_read)
-{
-	struct buffer_head *bh = NULL;
-	int ret = 0;
-	sector_t isect;
-
-	dprintk("%s enter, %p\n", __func__, page);
-	BUG_ON(PageUptodate(page));
-	if (!cow_read) {
-		zero_user_segment(page, 0, PAGE_SIZE);
-		SetPageUptodate(page);
-		goto cleanup;
-	}
-
-	bh = alloc_page_buffers(page, PAGE_CACHE_SIZE, 0);
-	if (!bh) {
-		ret = -ENOMEM;
-		goto cleanup;
-	}
-
-	isect = (sector_t) page->index << PAGE_CACHE_SECTOR_SHIFT;
-	map_block(bh, isect, cow_read);
-	if (!bh_uptodate_or_lock(bh))
-		ret = bh_submit_read(bh);
-	if (ret)
-		goto cleanup;
-	SetPageUptodate(page);
-
-cleanup:
-	if (bh)
-		free_buffer_head(bh);
-	if (ret) {
-		/* Need to mark layout with bad read...should now
-		 * just use nfs4 for reads and writes.
-		 */
-		mark_bad_read();
-	}
-	return ret;
-}
-
-/* Find or create a zeroing page marked being writeback.
- * Return ERR_PTR on error, NULL to indicate skip this page and page itself
- * to indicate write out.
- */
-static struct page *
-bl_find_get_zeroing_page(struct inode *inode, pgoff_t index,
-			struct pnfs_block_extent *cow_read)
-{
-	struct page *page;
-	int locked = 0;
-	page = find_get_page(inode->i_mapping, index);
-	if (page)
-		goto check_page;
-
-	page = find_or_create_page(inode->i_mapping, index, GFP_NOFS);
-	if (unlikely(!page)) {
-		dprintk("%s oom\n", __func__);
-		return ERR_PTR(-ENOMEM);
-	}
-	locked = 1;
-
-check_page:
-	/* PageDirty: Other will write this out
-	 * PageWriteback: Other is writing this out
-	 * PageUptodate: It was read before
-	 */
-	if (PageDirty(page) || PageWriteback(page)) {
-		print_page(page);
-		if (locked)
-			unlock_page(page);
-		page_cache_release(page);
-		return NULL;
-	}
-
-	if (!locked) {
-		lock_page(page);
-		locked = 1;
-		goto check_page;
-	}
-	if (!PageUptodate(page)) {
-		/* New page, readin or zero it */
-		init_page_for_write(page, cow_read);
-	}
-	set_page_writeback(page);
-	unlock_page(page);
-
-	return page;
-}
-
 static enum pnfs_try_status
 bl_write_pagelist(struct nfs_pgio_header *header, int sync)
 {
-	int i, ret, npg_zero, pg_index, last = 0;
+	int i, ret;
 	struct bio *bio = NULL;
-	struct pnfs_block_extent *be = NULL, *cow_read = NULL;
-	sector_t isect, last_isect = 0, extent_length = 0;
+	struct pnfs_block_extent *be = NULL;
+	sector_t isect, extent_length = 0;
 	struct parallel_io *par = NULL;
 	loff_t offset = header->args.offset;
 	size_t count = header->args.count;
-	unsigned int pg_offset, pg_len, saved_len;
 	struct page **pages = header->args.pages;
-	struct page *page;
-	pgoff_t index;
-	u64 temp;
-	int npg_per_block =
-	    NFS_SERVER(header->inode)->pnfs_blksize >> PAGE_CACHE_SHIFT;
+	int pg_index = pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
 	struct blk_plug plug;
 
 	dprintk("%s enter, %Zu@%lld\n", __func__, count, offset);
 
-	blk_start_plug(&plug);
-
-	if (header->dreq != NULL &&
-	    (!IS_ALIGNED(offset, NFS_SERVER(header->inode)->pnfs_blksize) ||
-	     !IS_ALIGNED(count, NFS_SERVER(header->inode)->pnfs_blksize))) {
-		dprintk("pnfsblock nonblock aligned DIO writes. Resend MDS\n");
-		goto out_mds;
-	}
 	/* At this point, header->page_aray is a (sequential) list of nfs_pages.
 	 * We want to write each, and if there is an error set pnfs_error
 	 * to have it redone using nfs.
@@ -715,97 +447,20 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
 	par->pnfs_callback = bl_end_par_io_write;
 	/* At this point, have to be more careful with error handling */
 
-	isect = (sector_t) ((offset & (long)PAGE_CACHE_MASK) >> SECTOR_SHIFT);
-	be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg), isect, &cow_read);
-	if (!be || !is_writable(be, isect)) {
-		dprintk("%s no matching extents!\n", __func__);
-		goto out_mds;
-	}
-
-	/* First page inside INVALID extent */
-	if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
-		if (likely(!bl_push_one_short_extent(be->be_inval)))
-			par->bse_count++;
-		else
-			goto out_mds;
-		temp = offset >> PAGE_CACHE_SHIFT;
-		npg_zero = do_div(temp, npg_per_block);
-		isect = (sector_t) (((offset - npg_zero * PAGE_CACHE_SIZE) &
-				     (long)PAGE_CACHE_MASK) >> SECTOR_SHIFT);
-		extent_length = be->be_length - (isect - be->be_f_offset);
-
-fill_invalid_ext:
-		dprintk("%s need to zero %d pages\n", __func__, npg_zero);
-		for (;npg_zero > 0; npg_zero--) {
-			if (bl_is_sector_init(be->be_inval, isect)) {
-				dprintk("isect %llu already init\n",
-					(unsigned long long)isect);
-				goto next_page;
-			}
-			/* page ref released in bl_end_io_write_zero */
-			index = isect >> PAGE_CACHE_SECTOR_SHIFT;
-			dprintk("%s zero %dth page: index %lu isect %llu\n",
-				__func__, npg_zero, index,
-				(unsigned long long)isect);
-			page = bl_find_get_zeroing_page(header->inode, index,
-							cow_read);
-			if (unlikely(IS_ERR(page))) {
-				header->pnfs_error = PTR_ERR(page);
-				goto out;
-			} else if (page == NULL)
-				goto next_page;
-
-			ret = bl_mark_sectors_init(be->be_inval, isect,
-						       PAGE_CACHE_SECTORS);
-			if (unlikely(ret)) {
-				dprintk("%s bl_mark_sectors_init fail %d\n",
-					__func__, ret);
-				end_page_writeback(page);
-				page_cache_release(page);
-				header->pnfs_error = ret;
-				goto out;
-			}
-			if (likely(!bl_push_one_short_extent(be->be_inval)))
-				par->bse_count++;
-			else {
-				end_page_writeback(page);
-				page_cache_release(page);
-				header->pnfs_error = -ENOMEM;
-				goto out;
-			}
-			/* FIXME: This should be done in bi_end_io */
-			mark_extents_written(BLK_LSEG2EXT(header->lseg),
-					     page->index << PAGE_CACHE_SHIFT,
-					     PAGE_CACHE_SIZE);
-
-			bio = bl_add_page_to_bio(bio, npg_zero, WRITE,
-						 isect, page, be,
-						 bl_end_io_write_zero, par);
-			if (IS_ERR(bio)) {
-				header->pnfs_error = PTR_ERR(bio);
-				bio = NULL;
-				goto out;
-			}
-next_page:
-			isect += PAGE_CACHE_SECTORS;
-			extent_length -= PAGE_CACHE_SECTORS;
-		}
-		if (last)
-			goto write_done;
-	}
-	bio = bl_submit_bio(WRITE, bio);
+	blk_start_plug(&plug);
 
-	/* Middle pages */
-	pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
+	/* we always write out the whole page */
+	offset = offset & (loff_t)PAGE_CACHE_MASK;
+	isect = offset >> SECTOR_SHIFT;
+	
 	for (i = pg_index; i < header->page_array.npages; i++) {
 		if (extent_length <= 0) {
 			/* We've used up the previous extent */
 			bl_put_extent(be);
-			bl_put_extent(cow_read);
 			bio = bl_submit_bio(WRITE, bio);
 			/* Get the next one */
 			be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
-					     isect, &cow_read);
+					     isect, NULL);
 			if (!be || !is_writable(be, isect)) {
 				header->pnfs_error = -EINVAL;
 				goto out;
@@ -823,25 +478,10 @@ next_page:
 			    (isect - be->be_f_offset);
 		}
 
-		dprintk("%s offset %lld count %Zu\n", __func__, offset, count);
-		pg_offset = offset & ~PAGE_CACHE_MASK;
-		if (pg_offset + count > PAGE_CACHE_SIZE)
-			pg_len = PAGE_CACHE_SIZE - pg_offset;
-		else
-			pg_len = count;
+		BUG_ON(offset & ~PAGE_CACHE_MASK);
 
-		saved_len = pg_len;
 		if (be->be_state == PNFS_BLOCK_INVALID_DATA &&
 		    !bl_is_sector_init(be->be_inval, isect)) {
-			ret = bl_read_partial_page_sync(pages[i], cow_read,
-							pg_offset, pg_len, true);
-			if (ret) {
-				dprintk("%s bl_read_partial_page_sync fail %d\n",
-					__func__, ret);
-				header->pnfs_error = ret;
-				goto out;
-			}
-
 			ret = bl_mark_sectors_init(be->be_inval, isect,
 						       PAGE_CACHE_SECTORS);
 			if (unlikely(ret)) {
@@ -850,66 +490,31 @@ next_page:
 				header->pnfs_error = ret;
 				goto out;
 			}
-
-			/* Expand to full page write */
-			pg_offset = 0;
-			pg_len = PAGE_CACHE_SIZE;
-		} else if  ((pg_offset & (SECTOR_SIZE - 1)) ||
-			    (pg_len & (SECTOR_SIZE - 1))){
-			/* ahh, nasty case. We have to do sync full sector
-			 * read-modify-write cycles.
-			 */
-			unsigned int saved_offset = pg_offset;
-			ret = bl_read_partial_page_sync(pages[i], be, pg_offset,
-							pg_len, false);
-			pg_offset = round_down(pg_offset, SECTOR_SIZE);
-			pg_len = round_up(saved_offset + pg_len, SECTOR_SIZE)
-				 - pg_offset;
 		}
 
-
 		bio = do_add_page_to_bio(bio, header->page_array.npages - i,
-					 WRITE,
-					 isect, pages[i], be,
+					 WRITE, isect, pages[i], be,
 					 bl_end_io_write, par,
-					 pg_offset, pg_len);
+					 0, PAGE_CACHE_SIZE);
 		if (IS_ERR(bio)) {
 			header->pnfs_error = PTR_ERR(bio);
 			bio = NULL;
 			goto out;
 		}
-		offset += saved_len;
-		count -= saved_len;
+		offset += PAGE_CACHE_SIZE;
+		count -= PAGE_CACHE_SIZE;
 		isect += PAGE_CACHE_SECTORS;
-		last_isect = isect;
 		extent_length -= PAGE_CACHE_SECTORS;
 	}
 
-	/* Last page inside INVALID extent */
-	if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
-		bio = bl_submit_bio(WRITE, bio);
-		temp = last_isect >> PAGE_CACHE_SECTOR_SHIFT;
-		npg_zero = npg_per_block - do_div(temp, npg_per_block);
-		if (npg_zero < npg_per_block) {
-			last = 1;
-			goto fill_invalid_ext;
-		}
-	}
-
-write_done:
 	header->res.count = header->args.count;
 out:
 	bl_put_extent(be);
-	bl_put_extent(cow_read);
 	bl_submit_bio(WRITE, bio);
 	blk_finish_plug(&plug);
 	put_parallel(par);
 	return PNFS_ATTEMPTED;
 out_mds:
-	blk_finish_plug(&plug);
-	bl_put_extent(be);
-	bl_put_extent(cow_read);
-	kfree(par);
 	return PNFS_NOT_ATTEMPTED;
 }
 
@@ -1188,20 +793,45 @@ bl_clear_layoutdriver(struct nfs_server *server)
 }
 
 static bool
-is_aligned_req(struct nfs_page *req, unsigned int alignment)
+is_aligned_req(struct nfs_pageio_descriptor *pgio,
+		struct nfs_page *req, unsigned int alignment)
 {
-	return IS_ALIGNED(req->wb_offset, alignment) &&
-	       IS_ALIGNED(req->wb_bytes, alignment);
+	/*
+	 * Always accept buffered writes, higher layers take care of the
+	 * right alignment.
+	 */
+	if (pgio->pg_dreq == NULL)
+		return true;
+
+	if (!IS_ALIGNED(req->wb_offset, alignment))
+		return false;
+
+	if (IS_ALIGNED(req->wb_bytes, alignment))
+		return true;
+
+	if (req_offset(req) + req->wb_bytes == i_size_read(pgio->pg_inode)) {
+		/*
+		 * If the write goes up to the inode size, just write
+		 * the full page.  Data past the inode size is
+		 * guaranteed to be zeroed by the higher level client
+		 * code, and this behaviour is mandated by RFC 5663
+		 * section 2.3.2.
+		 */
+		return true;
+	}
+
+	return false;
 }
 
 static void
 bl_pg_init_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)
 {
-	if (pgio->pg_dreq != NULL &&
-	    !is_aligned_req(req, SECTOR_SIZE))
+	if (!is_aligned_req(pgio, req, SECTOR_SIZE)) {
 		nfs_pageio_reset_read_mds(pgio);
-	else
-		pnfs_generic_pg_init_read(pgio, req);
+		return;
+	}
+
+	pnfs_generic_pg_init_read(pgio, req);
 }
 
 /*
@@ -1212,10 +842,8 @@ static size_t
 bl_pg_test_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
 		struct nfs_page *req)
 {
-	if (pgio->pg_dreq != NULL &&
-	    !is_aligned_req(req, SECTOR_SIZE))
+	if (!is_aligned_req(pgio, req, SECTOR_SIZE))
 		return 0;
-
 	return pnfs_generic_pg_test(pgio, prev, req);
 }
 
@@ -1245,19 +873,20 @@ static u64 pnfs_num_cont_bytes(struct inode *inode, pgoff_t idx)
 static void
 bl_pg_init_write(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)
 {
-	if (pgio->pg_dreq != NULL &&
-	    !is_aligned_req(req, PAGE_CACHE_SIZE)) {
+	u64 wb_size;
+
+	if (!is_aligned_req(pgio, req, PAGE_SIZE)) {
 		nfs_pageio_reset_write_mds(pgio);
-	} else {
-		u64 wb_size;
-		if (pgio->pg_dreq == NULL)
-			wb_size = pnfs_num_cont_bytes(pgio->pg_inode,
-						      req->wb_index);
-		else
-			wb_size = nfs_dreq_bytes_left(pgio->pg_dreq);
-
-		pnfs_generic_pg_init_write(pgio, req, wb_size);
+		return;
 	}
+
+	if (pgio->pg_dreq == NULL)
+		wb_size = pnfs_num_cont_bytes(pgio->pg_inode,
+					      req->wb_index);
+	else
+		wb_size = nfs_dreq_bytes_left(pgio->pg_dreq);
+
+	pnfs_generic_pg_init_write(pgio, req, wb_size);
 }
 
 /*
@@ -1268,10 +897,8 @@ static size_t
 bl_pg_test_write(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
 		 struct nfs_page *req)
 {
-	if (pgio->pg_dreq != NULL &&
-	    !is_aligned_req(req, PAGE_CACHE_SIZE))
+	if (!is_aligned_req(pgio, req, PAGE_SIZE))
 		return 0;
-
 	return pnfs_generic_pg_test(pgio, prev, req);
 }
 
@@ -1291,6 +918,7 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
 	.id				= LAYOUT_BLOCK_VOLUME,
 	.name				= "LAYOUT_BLOCK_VOLUME",
 	.owner				= THIS_MODULE,
+	.flags				= PNFS_READ_WHOLE_PAGE,
 	.read_pagelist			= bl_read_pagelist,
 	.write_pagelist			= bl_write_pagelist,
 	.alloc_layout_hdr		= bl_alloc_layout_hdr,
-- 
1.9.1



* [PATCH 5/9] pnfs/blocklayout: don't set pages uptodate
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
                   ` (3 preceding siblings ...)
  2014-09-10 15:23 ` [PATCH 4/9] pnfs/blocklayout: remove read-modify-write handling in bl_write_pagelist Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 6/9] pnfs/blocklayout: rewrite extent tracking Christoph Hellwig
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

The core nfs code handles setting pages uptodate on reads, so there is no
need to mess with the page flags ourselves.  Also remove a debug function
that dumps page flags.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/blocklayout/blocklayout.c | 24 +-----------------------
 1 file changed, 1 insertion(+), 23 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 6484b9f..92be984 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -49,20 +49,6 @@ MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Andy Adamson <andros@citi.umich.edu>");
 MODULE_DESCRIPTION("The NFSv4.1 pNFS Block layout driver");
 
-static void print_page(struct page *page)
-{
-	dprintk("PRINTPAGE page %p\n", page);
-	dprintk("	PagePrivate %d\n", PagePrivate(page));
-	dprintk("	PageUptodate %d\n", PageUptodate(page));
-	dprintk("	PageError %d\n", PageError(page));
-	dprintk("	PageDirty %d\n", PageDirty(page));
-	dprintk("	PageReferenced %d\n", PageReferenced(page));
-	dprintk("	PageLocked %d\n", PageLocked(page));
-	dprintk("	PageWriteback %d\n", PageWriteback(page));
-	dprintk("	PageMappedToDisk %d\n", PageMappedToDisk(page));
-	dprintk("\n");
-}
-
 /* Given the be associated with isect, determine if page data needs to be
  * initialized.
  */
@@ -187,16 +173,9 @@ retry:
 	return bio;
 }
 
-/* This is basically copied from mpage_end_io_read */
 static void bl_end_io_read(struct bio *bio, int err)
 {
 	struct parallel_io *par = bio->bi_private;
-	struct bio_vec *bvec;
-	int i;
-
-	if (!err)
-		bio_for_each_segment_all(bvec, bio, i)
-			SetPageUptodate(bvec->bv_page);
 
 	if (err) {
 		struct nfs_pgio_header *header = par->data;
@@ -205,6 +184,7 @@ static void bl_end_io_read(struct bio *bio, int err)
 			header->pnfs_error = -EIO;
 		pnfs_set_lo_fail(header->lseg);
 	}
+
 	bio_put(bio);
 	put_parallel(par);
 }
@@ -304,8 +284,6 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
 			/* Fill hole w/ zeroes w/o accessing device */
 			dprintk("%s Zeroing page for hole\n", __func__);
 			zero_user_segment(pages[i], pg_offset, pg_len);
-			print_page(pages[i]);
-			SetPageUptodate(pages[i]);
 		} else {
 			struct pnfs_block_extent *be_read;
 
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 6/9] pnfs/blocklayout: rewrite extent tracking
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
                   ` (4 preceding siblings ...)
  2014-09-10 15:23 ` [PATCH 5/9] pnfs/blocklayout: don't set pages uptodate Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  2014-09-11 14:11   ` Peng Tao
  2014-09-10 15:23 ` [PATCH 7/9] pnfs/blocklayout: implement the return_range method Christoph Hellwig
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

Currently the block layout driver tracks extents in three separate
data structures:

 - the two lists of pnfs_block_extent structures returned by the server
 - the list of sectors that were in invalid state but have been written to
 - a list of pnfs_block_short_extent structures for LAYOUTCOMMIT

All of these share the property that they are not only highly inefficient
data structures, but also that operations on them are even more inefficient
than necessary.

In addition there are various implementation defects like:

 - using an int to track sectors, causing corruption for large offsets
 - incorrect normalization of page or block granularity ranges
 - insufficient error handling
 - incorrect synchronization as extents can be modified while they are in
   use

This patch replaces all three data structures with a single unified rbtree
structure tracking all extents, as well as their in-memory state, although
we still need two instances for read-only and read-write extents due to the
arcane client side COW feature in the block layouts spec.

To fix the problem of extents possibly being modified while in use we make
sure to return a copy of the extent for use in the write path - the
extent can only be invalidated by a layout recall or return, which has
to wait until the I/O operations have finished due to refcounts on the
layout segment.

The new extent tree works similarly to the schemes used by block based
filesystems like XFS or ext4.
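[Editor's illustration, not part of the patch: a minimal userspace sketch of
the copy-out lookup described above.  The `extent` struct, the unbalanced BST
standing in for the kernel rbtree, and the helper names are simplified
stand-ins, not the kernel API.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef unsigned long long sector_t;

/* Simplified stand-in for struct pnfs_block_extent. */
struct extent {
	struct extent *left, *right;	/* plain BST links, not rb_node */
	sector_t f_offset;		/* starting sector in the file */
	sector_t length;		/* extent size in sectors */
	int state;			/* e.g. read-only vs read-write */
};

static struct extent *extent_insert(struct extent *root, struct extent *new)
{
	if (!root)
		return new;
	if (new->f_offset < root->f_offset)
		root->left = extent_insert(root->left, new);
	else
		root->right = extent_insert(root->right, new);
	return root;
}

/*
 * Modeled on __ext_tree_lookup(): walk down the tree comparing isect
 * against [f_offset, f_offset + length) and copy the extent out on a
 * hit, so the caller works on a private snapshot that a concurrent
 * layout recall cannot mutate.
 */
static bool extent_lookup(struct extent *root, sector_t isect,
			  struct extent *ret)
{
	struct extent *node = root;

	while (node) {
		if (isect < node->f_offset)
			node = node->left;
		else if (isect >= node->f_offset + node->length)
			node = node->right;
		else {
			*ret = *node;	/* copy out, no reference held */
			return true;
		}
	}
	return false;
}
```

The key design point this mirrors is the copy-out: because the write path
gets a snapshot rather than a pointer into the tree, no reference counting
on individual extents is needed.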

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/blocklayout/Makefile         |   3 +-
 fs/nfs/blocklayout/blocklayout.c    | 258 +++-------
 fs/nfs/blocklayout/blocklayout.h    | 112 +----
 fs/nfs/blocklayout/blocklayoutdev.c |  35 +-
 fs/nfs/blocklayout/extent_tree.c    | 547 ++++++++++++++++++++++
 fs/nfs/blocklayout/extents.c        | 908 ------------------------------------
 6 files changed, 651 insertions(+), 1212 deletions(-)
 create mode 100644 fs/nfs/blocklayout/extent_tree.c
 delete mode 100644 fs/nfs/blocklayout/extents.c

diff --git a/fs/nfs/blocklayout/Makefile b/fs/nfs/blocklayout/Makefile
index d581550..3fa5ec7 100644
--- a/fs/nfs/blocklayout/Makefile
+++ b/fs/nfs/blocklayout/Makefile
@@ -2,4 +2,5 @@
 # Makefile for the pNFS block layout driver kernel module
 #
 obj-$(CONFIG_PNFS_BLOCK) += blocklayoutdriver.o
-blocklayoutdriver-objs := blocklayout.o extents.o blocklayoutdev.o blocklayoutdm.o
+blocklayoutdriver-objs := blocklayout.o blocklayoutdev.o blocklayoutdm.o \
+	extent_tree.o
diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 92be984..42b6f9c 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -49,26 +49,16 @@ MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Andy Adamson <andros@citi.umich.edu>");
 MODULE_DESCRIPTION("The NFSv4.1 pNFS Block layout driver");
 
-/* Given the be associated with isect, determine if page data needs to be
- * initialized.
- */
-static int is_hole(struct pnfs_block_extent *be, sector_t isect)
-{
-	if (be->be_state == PNFS_BLOCK_NONE_DATA)
-		return 1;
-	else if (be->be_state != PNFS_BLOCK_INVALID_DATA)
-		return 0;
-	else
-		return !bl_is_sector_init(be->be_inval, isect);
-}
-
-/* Given the be associated with isect, determine if page data can be
- * written to disk.
- */
-static int is_writable(struct pnfs_block_extent *be, sector_t isect)
+static bool is_hole(struct pnfs_block_extent *be)
 {
-	return (be->be_state == PNFS_BLOCK_READWRITE_DATA ||
-		be->be_state == PNFS_BLOCK_INVALID_DATA);
+	switch (be->be_state) {
+	case PNFS_BLOCK_NONE_DATA:
+		return true;
+	case PNFS_BLOCK_INVALID_DATA:
+		return be->be_tag ? false : true;
+	default:
+		return false;
+	}
 }
 
 /* The data we are handed might be spread across several bios.  We need
@@ -76,9 +66,8 @@ static int is_writable(struct pnfs_block_extent *be, sector_t isect)
  */
 struct parallel_io {
 	struct kref refcnt;
-	void (*pnfs_callback) (void *data, int num_se);
+	void (*pnfs_callback) (void *data);
 	void *data;
-	int bse_count;
 };
 
 static inline struct parallel_io *alloc_parallel(void *data)
@@ -89,7 +78,6 @@ static inline struct parallel_io *alloc_parallel(void *data)
 	if (rv) {
 		rv->data = data;
 		kref_init(&rv->refcnt);
-		rv->bse_count = 0;
 	}
 	return rv;
 }
@@ -104,7 +92,7 @@ static void destroy_parallel(struct kref *kref)
 	struct parallel_io *p = container_of(kref, struct parallel_io, refcnt);
 
 	dprintk("%s enter\n", __func__);
-	p->pnfs_callback(p->data, p->bse_count);
+	p->pnfs_callback(p->data);
 	kfree(p);
 }
 
@@ -200,7 +188,7 @@ static void bl_read_cleanup(struct work_struct *work)
 }
 
 static void
-bl_end_par_io_read(void *data, int unused)
+bl_end_par_io_read(void *data)
 {
 	struct nfs_pgio_header *hdr = data;
 
@@ -210,56 +198,46 @@ bl_end_par_io_read(void *data, int unused)
 }
 
 static enum pnfs_try_status
-bl_read_pagelist(struct nfs_pgio_header *hdr)
+bl_read_pagelist(struct nfs_pgio_header *header)
 {
-	struct nfs_pgio_header *header = hdr;
-	int i, hole;
+	struct pnfs_block_layout *bl = BLK_LSEG2EXT(header->lseg);
 	struct bio *bio = NULL;
-	struct pnfs_block_extent *be = NULL, *cow_read = NULL;
+	struct pnfs_block_extent be;
 	sector_t isect, extent_length = 0;
 	struct parallel_io *par;
-	loff_t f_offset = hdr->args.offset;
-	size_t bytes_left = hdr->args.count;
+	loff_t f_offset = header->args.offset;
+	size_t bytes_left = header->args.count;
 	unsigned int pg_offset, pg_len;
-	struct page **pages = hdr->args.pages;
-	int pg_index = hdr->args.pgbase >> PAGE_CACHE_SHIFT;
+	struct page **pages = header->args.pages;
+	int pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
 	const bool is_dio = (header->dreq != NULL);
 	struct blk_plug plug;
+	int i;
 
 	dprintk("%s enter nr_pages %u offset %lld count %u\n", __func__,
-		hdr->page_array.npages, f_offset,
-		(unsigned int)hdr->args.count);
+		header->page_array.npages, f_offset,
+		(unsigned int)header->args.count);
 
-	par = alloc_parallel(hdr);
+	par = alloc_parallel(header);
 	if (!par)
-		goto use_mds;
+		return PNFS_NOT_ATTEMPTED;
 	par->pnfs_callback = bl_end_par_io_read;
-	/* At this point, we can no longer jump to use_mds */
 
 	blk_start_plug(&plug);
 
 	isect = (sector_t) (f_offset >> SECTOR_SHIFT);
 	/* Code assumes extents are page-aligned */
-	for (i = pg_index; i < hdr->page_array.npages; i++) {
+	for (i = pg_index; i < header->page_array.npages; i++) {
 		if (extent_length <= 0) {
 			/* We've used up the previous extent */
-			bl_put_extent(be);
-			bl_put_extent(cow_read);
 			bio = bl_submit_bio(READ, bio);
+
 			/* Get the next one */
-			be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
-					     isect, &cow_read);
-			if (!be) {
+			if (!ext_tree_lookup(bl, isect, &be, false)) {
 				header->pnfs_error = -EIO;
 				goto out;
 			}
-			extent_length = be->be_length -
-				(isect - be->be_f_offset);
-			if (cow_read) {
-				sector_t cow_length = cow_read->be_length -
-					(isect - cow_read->be_f_offset);
-				extent_length = min(extent_length, cow_length);
-			}
+			extent_length = be.be_length - (isect - be.be_f_offset);
 		}
 
 		pg_offset = f_offset & ~PAGE_CACHE_MASK;
@@ -278,20 +256,16 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
 			pg_len = PAGE_CACHE_SIZE;
 		}
 
-		hole = is_hole(be, isect);
-		if (hole && !cow_read) {
+		if (is_hole(&be)) {
 			bio = bl_submit_bio(READ, bio);
 			/* Fill hole w/ zeroes w/o accessing device */
 			dprintk("%s Zeroing page for hole\n", __func__);
 			zero_user_segment(pages[i], pg_offset, pg_len);
 		} else {
-			struct pnfs_block_extent *be_read;
-
-			be_read = (hole && cow_read) ? cow_read : be;
 			bio = do_add_page_to_bio(bio,
-						 hdr->page_array.npages - i,
+						 header->page_array.npages - i,
 						 READ,
-						 isect, pages[i], be_read,
+						 isect, pages[i], &be,
 						 bl_end_io_read, par,
 						 pg_offset, pg_len);
 			if (IS_ERR(bio)) {
@@ -304,50 +278,16 @@ bl_read_pagelist(struct nfs_pgio_header *hdr)
 		extent_length -= (pg_len >> SECTOR_SHIFT);
 	}
 	if ((isect << SECTOR_SHIFT) >= header->inode->i_size) {
-		hdr->res.eof = 1;
-		hdr->res.count = header->inode->i_size - hdr->args.offset;
+		header->res.eof = 1;
+		header->res.count = header->inode->i_size - header->args.offset;
 	} else {
-		hdr->res.count = (isect << SECTOR_SHIFT) - hdr->args.offset;
+		header->res.count = (isect << SECTOR_SHIFT) - header->args.offset;
 	}
 out:
-	bl_put_extent(be);
-	bl_put_extent(cow_read);
 	bl_submit_bio(READ, bio);
 	blk_finish_plug(&plug);
 	put_parallel(par);
 	return PNFS_ATTEMPTED;
-
- use_mds:
-	dprintk("Giving up and using normal NFS\n");
-	return PNFS_NOT_ATTEMPTED;
-}
-
-static void mark_extents_written(struct pnfs_block_layout *bl,
-				 __u64 offset, __u32 count)
-{
-	sector_t isect, end;
-	struct pnfs_block_extent *be;
-	struct pnfs_block_short_extent *se;
-
-	dprintk("%s(%llu, %u)\n", __func__, offset, count);
-	if (count == 0)
-		return;
-	isect = (offset & (long)(PAGE_CACHE_MASK)) >> SECTOR_SHIFT;
-	end = (offset + count + PAGE_CACHE_SIZE - 1) & (long)(PAGE_CACHE_MASK);
-	end >>= SECTOR_SHIFT;
-	while (isect < end) {
-		sector_t len;
-		be = bl_find_get_extent(bl, isect, NULL);
-		BUG_ON(!be); /* FIXME */
-		len = min(end, be->be_f_offset + be->be_length) - isect;
-		if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
-			se = bl_pop_one_short_extent(be->be_inval);
-			BUG_ON(!se);
-			bl_mark_for_commit(be, isect, len, se);
-		}
-		isect += len;
-		bl_put_extent(be);
-	}
 }
 
 static void bl_end_io_write(struct bio *bio, int err)
@@ -370,29 +310,30 @@ static void bl_end_io_write(struct bio *bio, int err)
  */
 static void bl_write_cleanup(struct work_struct *work)
 {
-	struct rpc_task *task;
-	struct nfs_pgio_header *hdr;
+	struct rpc_task *task = container_of(work, struct rpc_task, u.tk_work);
+	struct nfs_pgio_header *hdr =
+			container_of(task, struct nfs_pgio_header, task);
+
 	dprintk("%s enter\n", __func__);
-	task = container_of(work, struct rpc_task, u.tk_work);
-	hdr = container_of(task, struct nfs_pgio_header, task);
+
 	if (likely(!hdr->pnfs_error)) {
-		/* Marks for LAYOUTCOMMIT */
-		mark_extents_written(BLK_LSEG2EXT(hdr->lseg),
-				     hdr->args.offset, hdr->args.count);
+		struct pnfs_block_layout *bl = BLK_LSEG2EXT(hdr->lseg);
+		u64 start = hdr->args.offset & (loff_t)PAGE_CACHE_MASK;
+		u64 end = (hdr->args.offset + hdr->args.count +
+			PAGE_CACHE_SIZE - 1) & (loff_t)PAGE_CACHE_MASK;
+
+		ext_tree_mark_written(bl, start >> SECTOR_SHIFT,
+					(end - start) >> SECTOR_SHIFT);
 	}
+
 	pnfs_ld_write_done(hdr);
 }
 
 /* Called when last of bios associated with a bl_write_pagelist call finishes */
-static void bl_end_par_io_write(void *data, int num_se)
+static void bl_end_par_io_write(void *data)
 {
 	struct nfs_pgio_header *hdr = data;
 
-	if (unlikely(hdr->pnfs_error)) {
-		bl_free_short_extents(&BLK_LSEG2EXT(hdr->lseg)->bl_inval,
-					num_se);
-	}
-
 	hdr->task.tk_status = hdr->pnfs_error;
 	hdr->verf.committed = NFS_FILE_SYNC;
 	INIT_WORK(&hdr->task.u.tk_work, bl_write_cleanup);
@@ -402,9 +343,9 @@ static void bl_end_par_io_write(void *data, int num_se)
 static enum pnfs_try_status
 bl_write_pagelist(struct nfs_pgio_header *header, int sync)
 {
-	int i, ret;
+	struct pnfs_block_layout *bl = BLK_LSEG2EXT(header->lseg);
 	struct bio *bio = NULL;
-	struct pnfs_block_extent *be = NULL;
+	struct pnfs_block_extent be;
 	sector_t isect, extent_length = 0;
 	struct parallel_io *par = NULL;
 	loff_t offset = header->args.offset;
@@ -412,6 +353,7 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
 	struct page **pages = header->args.pages;
 	int pg_index = pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
 	struct blk_plug plug;
+	int i;
 
 	dprintk("%s enter, %Zu@%lld\n", __func__, count, offset);
 
@@ -421,9 +363,8 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
 	 */
 	par = alloc_parallel(header);
 	if (!par)
-		goto out_mds;
+		return PNFS_NOT_ATTEMPTED;
 	par->pnfs_callback = bl_end_par_io_write;
-	/* At this point, have to be more careful with error handling */
 
 	blk_start_plug(&plug);
 
@@ -434,44 +375,18 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
 	for (i = pg_index; i < header->page_array.npages; i++) {
 		if (extent_length <= 0) {
 			/* We've used up the previous extent */
-			bl_put_extent(be);
 			bio = bl_submit_bio(WRITE, bio);
 			/* Get the next one */
-			be = bl_find_get_extent(BLK_LSEG2EXT(header->lseg),
-					     isect, NULL);
-			if (!be || !is_writable(be, isect)) {
+			if (!ext_tree_lookup(bl, isect, &be, true)) {
 				header->pnfs_error = -EINVAL;
 				goto out;
 			}
-			if (be->be_state == PNFS_BLOCK_INVALID_DATA) {
-				if (likely(!bl_push_one_short_extent(
-								be->be_inval)))
-					par->bse_count++;
-				else {
-					header->pnfs_error = -ENOMEM;
-					goto out;
-				}
-			}
-			extent_length = be->be_length -
-			    (isect - be->be_f_offset);
-		}
 
-		BUG_ON(offset & ~PAGE_CACHE_MASK);
-
-		if (be->be_state == PNFS_BLOCK_INVALID_DATA &&
-		    !bl_is_sector_init(be->be_inval, isect)) {
-			ret = bl_mark_sectors_init(be->be_inval, isect,
-						       PAGE_CACHE_SECTORS);
-			if (unlikely(ret)) {
-				dprintk("%s bl_mark_sectors_init fail %d\n",
-					__func__, ret);
-				header->pnfs_error = ret;
-				goto out;
-			}
+			extent_length = be.be_length - (isect - be.be_f_offset);
 		}
 
 		bio = do_add_page_to_bio(bio, header->page_array.npages - i,
-					 WRITE, isect, pages[i], be,
+					 WRITE, isect, pages[i], &be,
 					 bl_end_io_write, par,
 					 0, PAGE_CACHE_SIZE);
 		if (IS_ERR(bio)) {
@@ -487,60 +402,22 @@ bl_write_pagelist(struct nfs_pgio_header *header, int sync)
 
 	header->res.count = header->args.count;
 out:
-	bl_put_extent(be);
 	bl_submit_bio(WRITE, bio);
 	blk_finish_plug(&plug);
 	put_parallel(par);
 	return PNFS_ATTEMPTED;
-out_mds:
-	return PNFS_NOT_ATTEMPTED;
-}
-
-/* FIXME - range ignored */
-static void
-release_extents(struct pnfs_block_layout *bl, struct pnfs_layout_range *range)
-{
-	int i;
-	struct pnfs_block_extent *be;
-
-	spin_lock(&bl->bl_ext_lock);
-	for (i = 0; i < EXTENT_LISTS; i++) {
-		while (!list_empty(&bl->bl_extents[i])) {
-			be = list_first_entry(&bl->bl_extents[i],
-					      struct pnfs_block_extent,
-					      be_node);
-			list_del(&be->be_node);
-			bl_put_extent(be);
-		}
-	}
-	spin_unlock(&bl->bl_ext_lock);
-}
-
-static void
-release_inval_marks(struct pnfs_inval_markings *marks)
-{
-	struct pnfs_inval_tracking *pos, *temp;
-	struct pnfs_block_short_extent *se, *stemp;
-
-	list_for_each_entry_safe(pos, temp, &marks->im_tree.mtt_stub, it_link) {
-		list_del(&pos->it_link);
-		kfree(pos);
-	}
-
-	list_for_each_entry_safe(se, stemp, &marks->im_extents, bse_node) {
-		list_del(&se->bse_node);
-		kfree(se);
-	}
-	return;
 }
 
 static void bl_free_layout_hdr(struct pnfs_layout_hdr *lo)
 {
 	struct pnfs_block_layout *bl = BLK_LO2EXT(lo);
+	int err;
 
 	dprintk("%s enter\n", __func__);
-	release_extents(bl, NULL);
-	release_inval_marks(&bl->bl_inval);
+
+	err = ext_tree_remove(bl, true, 0, LLONG_MAX);
+	WARN_ON(err);
+
 	kfree(bl);
 }
 
@@ -553,14 +430,11 @@ static struct pnfs_layout_hdr *bl_alloc_layout_hdr(struct inode *inode,
 	bl = kzalloc(sizeof(*bl), gfp_flags);
 	if (!bl)
 		return NULL;
+
+	bl->bl_ext_rw = RB_ROOT;
+	bl->bl_ext_ro = RB_ROOT;
 	spin_lock_init(&bl->bl_ext_lock);
-	INIT_LIST_HEAD(&bl->bl_extents[0]);
-	INIT_LIST_HEAD(&bl->bl_extents[1]);
-	INIT_LIST_HEAD(&bl->bl_commit);
-	INIT_LIST_HEAD(&bl->bl_committing);
-	bl->bl_count = 0;
-	bl->bl_blocksize = NFS_SERVER(inode)->pnfs_blksize >> SECTOR_SHIFT;
-	BL_INIT_INVAL_MARKS(&bl->bl_inval, bl->bl_blocksize);
+	
 	return &bl->bl_layout;
 }
 
@@ -600,7 +474,7 @@ bl_encode_layoutcommit(struct pnfs_layout_hdr *lo, struct xdr_stream *xdr,
 		       const struct nfs4_layoutcommit_args *arg)
 {
 	dprintk("%s enter\n", __func__);
-	encode_pnfs_block_layoutupdate(BLK_LO2EXT(lo), xdr, arg);
+	ext_tree_encode_commit(BLK_LO2EXT(lo), xdr);
 }
 
 static void
@@ -609,7 +483,7 @@ bl_cleanup_layoutcommit(struct nfs4_layoutcommit_data *lcdata)
 	struct pnfs_layout_hdr *lo = NFS_I(lcdata->args.inode)->layout;
 
 	dprintk("%s enter\n", __func__);
-	clean_pnfs_block_layoutupdate(BLK_LO2EXT(lo), &lcdata->args, lcdata->res.status);
+	ext_tree_mark_committed(BLK_LO2EXT(lo), lcdata->res.status);
 }
 
 static void free_blk_mountid(struct block_mount_id *mid)
diff --git a/fs/nfs/blocklayout/blocklayout.h b/fs/nfs/blocklayout/blocklayout.h
index 9838fb0..b4f66d8 100644
--- a/fs/nfs/blocklayout/blocklayout.h
+++ b/fs/nfs/blocklayout/blocklayout.h
@@ -63,82 +63,28 @@ enum exstate4 {
 	PNFS_BLOCK_NONE_DATA		= 3  /* unmapped, it's a hole */
 };
 
-#define MY_MAX_TAGS (15) /* tag bitnums used must be less than this */
-
-struct my_tree {
-	sector_t		mtt_step_size;	/* Internal sector alignment */
-	struct list_head	mtt_stub; /* Should be a radix tree */
-};
-
-struct pnfs_inval_markings {
-	spinlock_t	im_lock;
-	struct my_tree	im_tree;	/* Sectors that need LAYOUTCOMMIT */
-	sector_t	im_block_size;	/* Server blocksize in sectors */
-	struct list_head im_extents;	/* Short extents for INVAL->RW conversion */
-};
-
-struct pnfs_inval_tracking {
-	struct list_head it_link;
-	int		 it_sector;
-	int		 it_tags;
-};
-
 /* sector_t fields are all in 512-byte sectors */
 struct pnfs_block_extent {
-	struct kref	be_refcnt;
-	struct list_head be_node;	/* link into lseg list */
-	struct nfs4_deviceid be_devid;  /* FIXME: could use device cache instead */
+	union {
+		struct rb_node	be_node;
+		struct list_head be_list;
+	};
+	struct nfs4_deviceid be_devid;	/* FIXME: could use device cache instead */
 	struct block_device *be_mdev;
 	sector_t	be_f_offset;	/* the starting offset in the file */
 	sector_t	be_length;	/* the size of the extent */
 	sector_t	be_v_offset;	/* the starting offset in the volume */
 	enum exstate4	be_state;	/* the state of this extent */
-	struct pnfs_inval_markings *be_inval; /* tracks INVAL->RW transition */
-};
-
-/* Shortened extent used by LAYOUTCOMMIT */
-struct pnfs_block_short_extent {
-	struct list_head bse_node;
-	struct nfs4_deviceid bse_devid;
-	struct block_device *bse_mdev;
-	sector_t	bse_f_offset;	/* the starting offset in the file */
-	sector_t	bse_length;	/* the size of the extent */
+#define EXTENT_WRITTEN		1
+#define EXTENT_COMMITTING	2
+	unsigned int	be_tag;
 };
 
-static inline void
-BL_INIT_INVAL_MARKS(struct pnfs_inval_markings *marks, sector_t blocksize)
-{
-	spin_lock_init(&marks->im_lock);
-	INIT_LIST_HEAD(&marks->im_tree.mtt_stub);
-	INIT_LIST_HEAD(&marks->im_extents);
-	marks->im_block_size = blocksize;
-	marks->im_tree.mtt_step_size = min((sector_t)PAGE_CACHE_SECTORS,
-					   blocksize);
-}
-
-enum extentclass4 {
-	RW_EXTENT       = 0, /* READWRTE and INVAL */
-	RO_EXTENT       = 1, /* READ and NONE */
-	EXTENT_LISTS    = 2,
-};
-
-static inline int bl_choose_list(enum exstate4 state)
-{
-	if (state == PNFS_BLOCK_READ_DATA || state == PNFS_BLOCK_NONE_DATA)
-		return RO_EXTENT;
-	else
-		return RW_EXTENT;
-}
-
 struct pnfs_block_layout {
-	struct pnfs_layout_hdr bl_layout;
-	struct pnfs_inval_markings bl_inval; /* tracks INVAL->RW transition */
+	struct pnfs_layout_hdr	bl_layout;
+	struct rb_root		bl_ext_rw;
+	struct rb_root		bl_ext_ro;
 	spinlock_t		bl_ext_lock;   /* Protects list manipulation */
-	struct list_head	bl_extents[EXTENT_LISTS]; /* R and RW extents */
-	struct list_head	bl_commit;	/* Needs layout commit */
-	struct list_head	bl_committing;	/* Layout committing */
-	unsigned int		bl_count;	/* entries in bl_commit */
-	sector_t		bl_blocksize;  /* Server blocksize in sectors */
 };
 
 #define BLK_ID(lo) ((struct block_mount_id *)(NFS_SERVER(lo->plh_inode)->pnfs_ld_data))
@@ -183,29 +129,17 @@ int nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
 /* blocklayoutdm.c */
 void bl_free_block_dev(struct pnfs_block_dev *bdev);
 
-/* extents.c */
-struct pnfs_block_extent *
-bl_find_get_extent(struct pnfs_block_layout *bl, sector_t isect,
-		struct pnfs_block_extent **cow_read);
-int bl_mark_sectors_init(struct pnfs_inval_markings *marks,
-			     sector_t offset, sector_t length);
-void bl_put_extent(struct pnfs_block_extent *be);
-struct pnfs_block_extent *bl_alloc_extent(void);
-int bl_is_sector_init(struct pnfs_inval_markings *marks, sector_t isect);
-int encode_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
-				   struct xdr_stream *xdr,
-				   const struct nfs4_layoutcommit_args *arg);
-void clean_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
-				   const struct nfs4_layoutcommit_args *arg,
-				   int status);
-int bl_add_merge_extent(struct pnfs_block_layout *bl,
-			 struct pnfs_block_extent *new);
-int bl_mark_for_commit(struct pnfs_block_extent *be,
-			sector_t offset, sector_t length,
-			struct pnfs_block_short_extent *new);
-int bl_push_one_short_extent(struct pnfs_inval_markings *marks);
-struct pnfs_block_short_extent *
-bl_pop_one_short_extent(struct pnfs_inval_markings *marks);
-void bl_free_short_extents(struct pnfs_inval_markings *marks, int num_to_free);
+/* extent_tree.c */
+int ext_tree_insert(struct pnfs_block_layout *bl,
+		struct pnfs_block_extent *new);
+int ext_tree_remove(struct pnfs_block_layout *bl, bool rw, sector_t start,
+		sector_t end);
+int ext_tree_mark_written(struct pnfs_block_layout *bl, sector_t start,
+		sector_t len);
+bool ext_tree_lookup(struct pnfs_block_layout *bl, sector_t isect,
+		struct pnfs_block_extent *ret, bool rw);
+int ext_tree_encode_commit(struct pnfs_block_layout *bl,
+		struct xdr_stream *xdr);
+void ext_tree_mark_committed(struct pnfs_block_layout *bl, int status);
 
 #endif /* FS_NFS_NFS4BLOCKLAYOUT_H */
diff --git a/fs/nfs/blocklayout/blocklayoutdev.c b/fs/nfs/blocklayout/blocklayoutdev.c
index 63f77925..cd71b5e 100644
--- a/fs/nfs/blocklayout/blocklayoutdev.c
+++ b/fs/nfs/blocklayout/blocklayoutdev.c
@@ -309,7 +309,7 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
 	 * recovery easier.
 	 */
 	for (i = 0; i < count; i++) {
-		be = bl_alloc_extent();
+		be = kzalloc(sizeof(struct pnfs_block_extent), GFP_NOFS);
 		if (!be) {
 			status = -ENOMEM;
 			goto out_err;
@@ -330,13 +330,11 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
 		if (decode_sector_number(&p, &be->be_v_offset) < 0)
 			goto out_err;
 		be->be_state = be32_to_cpup(p++);
-		if (be->be_state == PNFS_BLOCK_INVALID_DATA)
-			be->be_inval = &bl->bl_inval;
 		if (verify_extent(be, &lv)) {
 			dprintk("%s verify failed\n", __func__);
 			goto out_err;
 		}
-		list_add_tail(&be->be_node, &extents);
+		list_add_tail(&be->be_list, &extents);
 	}
 	if (lgr->range.offset + lgr->range.length !=
 			lv.start << SECTOR_SHIFT) {
@@ -352,21 +350,13 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
 	/* Extents decoded properly, now try to merge them in to
 	 * existing layout extents.
 	 */
-	spin_lock(&bl->bl_ext_lock);
-	list_for_each_entry_safe(be, save, &extents, be_node) {
-		list_del(&be->be_node);
-		status = bl_add_merge_extent(bl, be);
-		if (status) {
-			spin_unlock(&bl->bl_ext_lock);
-			/* This is a fairly catastrophic error, as the
-			 * entire layout extent lists are now corrupted.
-			 * We should have some way to distinguish this.
-			 */
-			be = NULL;
-			goto out_err;
-		}
+	list_for_each_entry_safe(be, save, &extents, be_list) {
+		list_del(&be->be_list);
+
+		status = ext_tree_insert(bl, be);
+		if (status)
+			goto out_free_list;
 	}
-	spin_unlock(&bl->bl_ext_lock);
 	status = 0;
  out:
 	__free_page(scratch);
@@ -374,12 +364,13 @@ nfs4_blk_process_layoutget(struct pnfs_layout_hdr *lo,
 	return status;
 
  out_err:
-	bl_put_extent(be);
+	kfree(be);
+ out_free_list:
 	while (!list_empty(&extents)) {
 		be = list_first_entry(&extents, struct pnfs_block_extent,
-				      be_node);
-		list_del(&be->be_node);
-		bl_put_extent(be);
+				      be_list);
+		list_del(&be->be_list);
+		kfree(be);
 	}
 	goto out;
 }
diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
new file mode 100644
index 0000000..c8c59a5
--- /dev/null
+++ b/fs/nfs/blocklayout/extent_tree.c
@@ -0,0 +1,547 @@
+/*
+ * Copyright (c) 2014 Christoph Hellwig.
+ */
+
+#include "blocklayout.h"
+
+#define NFSDBG_FACILITY		NFSDBG_PNFS_LD
+
+static inline struct pnfs_block_extent *
+ext_node(struct rb_node *node)
+{
+	return rb_entry(node, struct pnfs_block_extent, be_node);
+}
+
+static struct pnfs_block_extent *
+ext_tree_first(struct rb_root *root)
+{
+	struct rb_node *node = rb_first(root);
+	return node ? ext_node(node) : NULL;
+}
+
+static struct pnfs_block_extent *
+ext_tree_prev(struct pnfs_block_extent *be)
+{
+	struct rb_node *node = rb_prev(&be->be_node);
+	return node ? ext_node(node) : NULL;
+}
+
+static struct pnfs_block_extent *
+ext_tree_next(struct pnfs_block_extent *be)
+{
+	struct rb_node *node = rb_next(&be->be_node);
+	return node ? ext_node(node) : NULL;
+}
+
+static inline sector_t
+ext_f_end(struct pnfs_block_extent *be)
+{
+	return be->be_f_offset + be->be_length;
+}
+
+static struct pnfs_block_extent *
+__ext_tree_search(struct rb_root *root, sector_t start)
+{
+	struct rb_node *node = root->rb_node;
+	struct pnfs_block_extent *be = NULL;
+
+	while (node) {
+		be = ext_node(node);
+		if (start < be->be_f_offset)
+			node = node->rb_left;
+		else if (start >= ext_f_end(be))
+			node = node->rb_right;
+		else
+			return be;
+	}
+
+	if (be) {
+		if (start < be->be_f_offset)
+			return be;
+
+		if (start >= ext_f_end(be))
+			return ext_tree_next(be);
+	}
+
+	return NULL;
+}
+
+static bool
+ext_can_merge(struct pnfs_block_extent *be1, struct pnfs_block_extent *be2)
+{
+	if (be1->be_state != be2->be_state)
+		return false;
+	if (be1->be_mdev != be2->be_mdev)
+		return false;
+
+	if (be1->be_f_offset + be1->be_length != be2->be_f_offset)
+		return false;
+
+	if (be1->be_state != PNFS_BLOCK_NONE_DATA &&
+	    (be1->be_v_offset + be1->be_length != be2->be_v_offset))
+		return false;
+
+	if (be1->be_state == PNFS_BLOCK_INVALID_DATA &&
+	    be1->be_tag != be2->be_tag)
+		return false;
+
+	return true;
+}
+
+static struct pnfs_block_extent *
+ext_try_to_merge_left(struct rb_root *root, struct pnfs_block_extent *be)
+{
+	struct pnfs_block_extent *left = ext_tree_prev(be);
+
+	if (left && ext_can_merge(left, be)) {
+		left->be_length += be->be_length;
+		rb_erase(&be->be_node, root);
+		kfree(be);
+		return left;
+	}
+
+	return be;
+}
+
+static struct pnfs_block_extent *
+ext_try_to_merge_right(struct rb_root *root, struct pnfs_block_extent *be)
+{
+	struct pnfs_block_extent *right = ext_tree_next(be);
+
+	if (right && ext_can_merge(be, right)) {
+		be->be_length += right->be_length;
+		rb_erase(&right->be_node, root);
+		kfree(right);
+	}
+
+	return be;
+}
+
+static void
+__ext_tree_insert(struct rb_root *root,
+		struct pnfs_block_extent *new, bool merge_ok)
+{
+	struct rb_node **p = &root->rb_node, *parent = NULL;
+	struct pnfs_block_extent *be;
+
+	while (*p) {
+		parent = *p;
+		be = ext_node(parent);
+
+		if (new->be_f_offset < be->be_f_offset) {
+			if (merge_ok && ext_can_merge(new, be)) {
+				be->be_f_offset = new->be_f_offset;
+				if (be->be_state != PNFS_BLOCK_NONE_DATA)
+					be->be_v_offset = new->be_v_offset;
+				be->be_length += new->be_length;
+				be = ext_try_to_merge_left(root, be);
+				kfree(new);
+				return;
+			}
+			p = &(*p)->rb_left;
+		} else if (new->be_f_offset >= ext_f_end(be)) {
+			if (merge_ok && ext_can_merge(be, new)) {
+				be->be_length += new->be_length;
+				be = ext_try_to_merge_right(root, be);
+				kfree(new);
+				return;
+			}
+			p = &(*p)->rb_right;
+		} else {
+			BUG();
+		}
+	}
+
+	rb_link_node(&new->be_node, parent, p);
+	rb_insert_color(&new->be_node, root);
+}
+
+static int
+__ext_tree_remove(struct rb_root *root, sector_t start, sector_t end)
+{
+	struct pnfs_block_extent *be;
+	sector_t len1 = 0, len2 = 0;
+	sector_t orig_f_offset;
+	sector_t orig_v_offset;
+	sector_t orig_len;
+
+	be = __ext_tree_search(root, start);
+	if (!be)
+		return 0;
+	if (be->be_f_offset >= end)
+		return 0;
+
+	orig_f_offset = be->be_f_offset;
+	orig_v_offset = be->be_v_offset;
+	orig_len = be->be_length;
+
+	if (start > be->be_f_offset)
+		len1 = start - be->be_f_offset;
+	if (ext_f_end(be) > end)
+		len2 = ext_f_end(be) - end;
+
+	if (len2 > 0) {
+		if (len1 > 0) {
+			struct pnfs_block_extent *new;
+
+			new = kzalloc(sizeof(*new), GFP_ATOMIC);
+			if (!new)
+				return -ENOMEM;
+
+			be->be_length = len1;
+
+			new->be_f_offset = end;
+			if (be->be_state != PNFS_BLOCK_NONE_DATA) {
+				new->be_v_offset =
+					orig_v_offset + orig_len - len2;
+			}
+			new->be_length = len2;
+			new->be_state = be->be_state;
+			new->be_tag = be->be_tag;
+			new->be_mdev = be->be_mdev;
+			memcpy(&new->be_devid, &be->be_devid,
+				sizeof(struct nfs4_deviceid));
+
+			__ext_tree_insert(root, new, true);
+		} else {
+			be->be_f_offset = end;
+			if (be->be_state != PNFS_BLOCK_NONE_DATA) {
+				be->be_v_offset =
+					orig_v_offset + orig_len - len2;
+			}
+			be->be_length = len2;
+		}
+	} else {
+		if (len1 > 0) {
+			be->be_length = len1;
+			be = ext_tree_next(be);
+		}
+
+		while (be && ext_f_end(be) <= end) {
+			struct pnfs_block_extent *next = ext_tree_next(be);
+
+			rb_erase(&be->be_node, root);
+			kfree(be);
+			be = next;
+		}
+
+		if (be && be->be_f_offset < end) {
+			len1 = ext_f_end(be) - end;
+			be->be_f_offset = end;
+			if (be->be_state != PNFS_BLOCK_NONE_DATA)
+				be->be_v_offset += be->be_length - len1;
+			be->be_length = len1;
+		}
+	}
+
+	return 0;
+}
+
+int
+ext_tree_insert(struct pnfs_block_layout *bl, struct pnfs_block_extent *new)
+{
+	struct pnfs_block_extent *be;
+	struct rb_root *root;
+	int err = 0;
+
+	switch (new->be_state) {
+	case PNFS_BLOCK_READWRITE_DATA:
+	case PNFS_BLOCK_INVALID_DATA:
+		root = &bl->bl_ext_rw;
+		break;
+	case PNFS_BLOCK_READ_DATA:
+	case PNFS_BLOCK_NONE_DATA:
+		root = &bl->bl_ext_ro;
+		break;
+	default:
+		dprintk("invalid extent type\n");
+		return -EINVAL;
+	}
+
+	spin_lock(&bl->bl_ext_lock);
+retry:
+	be = __ext_tree_search(root, new->be_f_offset);
+	if (!be || be->be_f_offset >= ext_f_end(new)) {
+		__ext_tree_insert(root, new, true);
+	} else if (new->be_f_offset >= be->be_f_offset) {
+		if (ext_f_end(new) <= ext_f_end(be)) {
+			kfree(new);
+		} else {
+			sector_t new_len = ext_f_end(new) - ext_f_end(be);
+			sector_t diff = new->be_length - new_len;
+
+			new->be_f_offset += diff;
+			new->be_v_offset += diff;
+			new->be_length = new_len;
+			goto retry;
+		}
+	} else if (ext_f_end(new) <= ext_f_end(be)) {
+		new->be_length = be->be_f_offset - new->be_f_offset;
+		__ext_tree_insert(root, new, true);
+	} else {
+		struct pnfs_block_extent *split;
+		sector_t new_len = ext_f_end(new) - ext_f_end(be);
+		sector_t diff = new->be_length - new_len;
+
+		split = kmemdup(new, sizeof(*new), GFP_ATOMIC);
+		if (!split) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		split->be_length = be->be_f_offset - split->be_f_offset;
+		__ext_tree_insert(root, split, true);
+
+		new->be_f_offset += diff;
+		new->be_v_offset += diff;
+		new->be_length = new_len;
+		goto retry;
+	}
+out:
+	spin_unlock(&bl->bl_ext_lock);
+	return err;
+}
+
+static bool
+__ext_tree_lookup(struct rb_root *root, sector_t isect,
+		struct pnfs_block_extent *ret)
+{
+	struct rb_node *node;
+	struct pnfs_block_extent *be;
+
+	node = root->rb_node;
+	while (node) {
+		be = ext_node(node);
+		if (isect < be->be_f_offset)
+			node = node->rb_left;
+		else if (isect >= ext_f_end(be))
+			node = node->rb_right;
+		else {
+			*ret = *be;
+			return true;
+		}
+	}
+
+	return false;
+}
+
+bool
+ext_tree_lookup(struct pnfs_block_layout *bl, sector_t isect,
+	    struct pnfs_block_extent *ret, bool rw)
+{
+	bool found = false;
+
+	spin_lock(&bl->bl_ext_lock);
+	if (!rw)
+		found = __ext_tree_lookup(&bl->bl_ext_ro, isect, ret);
+	if (!found)
+		found = __ext_tree_lookup(&bl->bl_ext_rw, isect, ret);
+	spin_unlock(&bl->bl_ext_lock);
+
+	return found;
+}
+
+int ext_tree_remove(struct pnfs_block_layout *bl, bool rw,
+		sector_t start, sector_t end)
+{
+	int err, err2;
+
+	spin_lock(&bl->bl_ext_lock);
+	err = __ext_tree_remove(&bl->bl_ext_ro, start, end);
+	if (rw) {
+		err2 = __ext_tree_remove(&bl->bl_ext_rw, start, end);
+		if (!err)
+			err = err2;
+	}
+	spin_unlock(&bl->bl_ext_lock);
+
+	return err;
+}
+
+static int
+ext_tree_split(struct rb_root *root, struct pnfs_block_extent *be,
+		sector_t split)
+{
+	struct pnfs_block_extent *new;
+	sector_t orig_len = be->be_length;
+
+	dprintk("%s: need split for 0x%lx:0x%lx at 0x%lx\n",
+		__func__, be->be_f_offset, ext_f_end(be), split);
+
+	new = kzalloc(sizeof(*new), GFP_ATOMIC);
+	if (!new)
+		return -ENOMEM;
+
+	be->be_length = split - be->be_f_offset;
+
+	new->be_f_offset = split;
+	if (be->be_state != PNFS_BLOCK_NONE_DATA)
+		new->be_v_offset = be->be_v_offset + be->be_length;
+	new->be_length = orig_len - be->be_length;
+	new->be_state = be->be_state;
+	new->be_tag = be->be_tag;
+
+	new->be_mdev = be->be_mdev;
+	memcpy(&new->be_devid, &be->be_devid, sizeof(struct nfs4_deviceid));
+
+	dprintk("%s: got 0x%lx:0x%lx!\n",
+		__func__, be->be_f_offset, ext_f_end(be));
+	dprintk("%s: got 0x%lx:0x%lx!\n",
+		__func__, new->be_f_offset, ext_f_end(new));
+
+	__ext_tree_insert(root, new, false);
+	return 0;
+}
+
+int
+ext_tree_mark_written(struct pnfs_block_layout *bl, sector_t start,
+		sector_t len)
+{
+	struct rb_root *root = &bl->bl_ext_rw;
+	sector_t end = start + len;
+	struct pnfs_block_extent *be;
+	int err = 0;
+
+	spin_lock(&bl->bl_ext_lock);
+	/*
+	 * First remove all COW extents or holes from the written-to range.
+	 */
+	err = __ext_tree_remove(&bl->bl_ext_ro, start, end);
+	if (err)
+		goto out;
+
+	/*
+	 * Then mark all invalid extents in the range as written to.
+	 */
+	for (be = __ext_tree_search(root, start); be; be = ext_tree_next(be)) {
+		if (be->be_f_offset >= end)
+			break;
+
+		if (be->be_state != PNFS_BLOCK_INVALID_DATA || be->be_tag)
+			continue;
+
+		if (be->be_f_offset < start) {
+			struct pnfs_block_extent *left = ext_tree_prev(be);
+
+			if (left && ext_can_merge(left, be)) {
+				sector_t diff = start - be->be_f_offset;
+
+				left->be_length += diff;
+
+				be->be_f_offset += diff;
+				be->be_v_offset += diff;
+				be->be_length -= diff;
+			} else {
+				err = ext_tree_split(root, be, start);
+				if (err)
+					goto out;
+			}
+		}
+
+		if (ext_f_end(be) > end) {
+			struct pnfs_block_extent *right = ext_tree_next(be);
+
+			if (right && ext_can_merge(be, right)) {
+				sector_t diff = end - be->be_f_offset;
+
+				be->be_length -= diff;
+
+				right->be_f_offset -= diff;
+				right->be_v_offset -= diff;
+				right->be_length += diff;
+			} else {
+				err = ext_tree_split(root, be, end);
+				if (err)
+					goto out;
+			}
+		}
+
+		if (be->be_f_offset >= start && ext_f_end(be) <= end) {
+			be->be_tag = EXTENT_WRITTEN;
+			be = ext_try_to_merge_left(root, be);
+			be = ext_try_to_merge_right(root, be);
+		}
+	}
+out:
+	spin_unlock(&bl->bl_ext_lock);
+	return err;
+}
+
+int
+ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
+{
+	struct pnfs_block_extent *be;
+	unsigned int count = 0;
+	__be32 *p, *xdr_start;
+	int ret = 0;
+
+	dprintk("%s enter\n", __func__);
+
+	xdr_start = xdr_reserve_space(xdr, 8);
+	if (!xdr_start)
+		return -ENOSPC;
+
+	spin_lock(&bl->bl_ext_lock);
+	for (be = ext_tree_first(&bl->bl_ext_rw); be; be = ext_tree_next(be)) {
+		if (be->be_state != PNFS_BLOCK_INVALID_DATA ||
+		    be->be_tag != EXTENT_WRITTEN)
+			continue;
+
+		p = xdr_reserve_space(xdr, 7 * sizeof(__be32) +
+					NFS4_DEVICEID4_SIZE);
+		if (!p) {
+			printk(KERN_ERR "%s: out of space for extent list\n",
+					__func__);
+			ret = -ENOSPC;
+			break;
+		}
+
+		p = xdr_encode_opaque_fixed(p, be->be_devid.data,
+				NFS4_DEVICEID4_SIZE);
+		p = xdr_encode_hyper(p, be->be_f_offset << SECTOR_SHIFT);
+		p = xdr_encode_hyper(p, be->be_length << SECTOR_SHIFT);
+		p = xdr_encode_hyper(p, 0LL);
+		*p++ = cpu_to_be32(PNFS_BLOCK_READWRITE_DATA);
+
+		be->be_tag = EXTENT_COMMITTING;
+		count++;
+	}
+	spin_unlock(&bl->bl_ext_lock);
+
+	xdr_start[0] = cpu_to_be32((xdr->p - xdr_start - 1) * 4);
+	xdr_start[1] = cpu_to_be32(count);
+
+	dprintk("%s found %i ranges\n", __func__, count);
+	return ret;
+}
+
+void
+ext_tree_mark_committed(struct pnfs_block_layout *bl, int status)
+{
+	struct rb_root *root = &bl->bl_ext_rw;
+	struct pnfs_block_extent *be;
+
+	dprintk("%s status %d\n", __func__, status);
+
+	spin_lock(&bl->bl_ext_lock);
+	for (be = ext_tree_first(root); be; be = ext_tree_next(be)) {
+		if (be->be_state != PNFS_BLOCK_INVALID_DATA ||
+		    be->be_tag != EXTENT_COMMITTING)
+			continue;
+
+		if (status) {
+			/*
+			 * Mark as written and try again.
+			 *
+			 * XXX: some real error handling here wouldn't hurt..
+			 */
+			be->be_tag = EXTENT_WRITTEN;
+		} else {
+			be->be_state = PNFS_BLOCK_READWRITE_DATA;
+			be->be_tag = 0;
+		}
+
+		be = ext_try_to_merge_left(root, be);
+		be = ext_try_to_merge_right(root, be);
+	}
+	spin_unlock(&bl->bl_ext_lock);
+}
diff --git a/fs/nfs/blocklayout/extents.c b/fs/nfs/blocklayout/extents.c
deleted file mode 100644
index 4d01614..0000000
--- a/fs/nfs/blocklayout/extents.c
+++ /dev/null
@@ -1,908 +0,0 @@
-/*
- *  linux/fs/nfs/blocklayout/blocklayout.h
- *
- *  Module for the NFSv4.1 pNFS block layout driver.
- *
- *  Copyright (c) 2006 The Regents of the University of Michigan.
- *  All rights reserved.
- *
- *  Andy Adamson <andros@citi.umich.edu>
- *  Fred Isaman <iisaman@umich.edu>
- *
- * permission is granted to use, copy, create derivative works and
- * redistribute this software and such derivative works for any purpose,
- * so long as the name of the university of michigan is not used in
- * any advertising or publicity pertaining to the use or distribution
- * of this software without specific, written prior authorization.  if
- * the above copyright notice or any other identification of the
- * university of michigan is included in any copy of any portion of
- * this software, then the disclaimer below must also be included.
- *
- * this software is provided as is, without representation from the
- * university of michigan as to its fitness for any purpose, and without
- * warranty by the university of michigan of any kind, either express
- * or implied, including without limitation the implied warranties of
- * merchantability and fitness for a particular purpose.  the regents
- * of the university of michigan shall not be liable for any damages,
- * including special, indirect, incidental, or consequential damages,
- * with respect to any claim arising out or in connection with the use
- * of the software, even if it has been or is hereafter advised of the
- * possibility of such damages.
- */
-
-#include "blocklayout.h"
-#define NFSDBG_FACILITY         NFSDBG_PNFS_LD
-
-/* Bit numbers */
-#define EXTENT_INITIALIZED 0
-#define EXTENT_WRITTEN     1
-#define EXTENT_IN_COMMIT   2
-#define INTERNAL_EXISTS    MY_MAX_TAGS
-#define INTERNAL_MASK      ((1 << INTERNAL_EXISTS) - 1)
-
-/* Returns largest t<=s s.t. t%base==0 */
-static inline sector_t normalize(sector_t s, int base)
-{
-	sector_t tmp = s; /* Since do_div modifies its argument */
-	return s - sector_div(tmp, base);
-}
-
-static inline sector_t normalize_up(sector_t s, int base)
-{
-	return normalize(s + base - 1, base);
-}
-
-/* Complete stub using list while determine API wanted */
-
-/* Returns tags, or negative */
-static int32_t _find_entry(struct my_tree *tree, u64 s)
-{
-	struct pnfs_inval_tracking *pos;
-
-	dprintk("%s(%llu) enter\n", __func__, s);
-	list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
-		if (pos->it_sector > s)
-			continue;
-		else if (pos->it_sector == s)
-			return pos->it_tags & INTERNAL_MASK;
-		else
-			break;
-	}
-	return -ENOENT;
-}
-
-static inline
-int _has_tag(struct my_tree *tree, u64 s, int32_t tag)
-{
-	int32_t tags;
-
-	dprintk("%s(%llu, %i) enter\n", __func__, s, tag);
-	s = normalize(s, tree->mtt_step_size);
-	tags = _find_entry(tree, s);
-	if ((tags < 0) || !(tags & (1 << tag)))
-		return 0;
-	else
-		return 1;
-}
-
-/* Creates entry with tag, or if entry already exists, unions tag to it.
- * If storage is not NULL, newly created entry will use it.
- * Returns number of entries added, or negative on error.
- */
-static int _add_entry(struct my_tree *tree, u64 s, int32_t tag,
-		      struct pnfs_inval_tracking *storage)
-{
-	int found = 0;
-	struct pnfs_inval_tracking *pos;
-
-	dprintk("%s(%llu, %i, %p) enter\n", __func__, s, tag, storage);
-	list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
-		if (pos->it_sector > s)
-			continue;
-		else if (pos->it_sector == s) {
-			found = 1;
-			break;
-		} else
-			break;
-	}
-	if (found) {
-		pos->it_tags |= (1 << tag);
-		return 0;
-	} else {
-		struct pnfs_inval_tracking *new;
-		new = storage;
-		new->it_sector = s;
-		new->it_tags = (1 << tag);
-		list_add(&new->it_link, &pos->it_link);
-		return 1;
-	}
-}
-
-/* XXXX Really want option to not create */
-/* Over range, unions tag with existing entries, else creates entry with tag */
-static int _set_range(struct my_tree *tree, int32_t tag, u64 s, u64 length)
-{
-	u64 i;
-
-	dprintk("%s(%i, %llu, %llu) enter\n", __func__, tag, s, length);
-	for (i = normalize(s, tree->mtt_step_size); i < s + length;
-	     i += tree->mtt_step_size)
-		if (_add_entry(tree, i, tag, NULL))
-			return -ENOMEM;
-	return 0;
-}
-
-/* Ensure that future operations on given range of tree will not malloc */
-static int _preload_range(struct pnfs_inval_markings *marks,
-		u64 offset, u64 length)
-{
-	u64 start, end, s;
-	int count, i, used = 0, status = -ENOMEM;
-	struct pnfs_inval_tracking **storage;
-	struct my_tree  *tree = &marks->im_tree;
-
-	dprintk("%s(%llu, %llu) enter\n", __func__, offset, length);
-	start = normalize(offset, tree->mtt_step_size);
-	end = normalize_up(offset + length, tree->mtt_step_size);
-	count = (int)(end - start) / (int)tree->mtt_step_size;
-
-	/* Pre-malloc what memory we might need */
-	storage = kcalloc(count, sizeof(*storage), GFP_NOFS);
-	if (!storage)
-		return -ENOMEM;
-	for (i = 0; i < count; i++) {
-		storage[i] = kmalloc(sizeof(struct pnfs_inval_tracking),
-				     GFP_NOFS);
-		if (!storage[i])
-			goto out_cleanup;
-	}
-
-	spin_lock_bh(&marks->im_lock);
-	for (s = start; s < end; s += tree->mtt_step_size)
-		used += _add_entry(tree, s, INTERNAL_EXISTS, storage[used]);
-	spin_unlock_bh(&marks->im_lock);
-
-	status = 0;
-
- out_cleanup:
-	for (i = used; i < count; i++) {
-		if (!storage[i])
-			break;
-		kfree(storage[i]);
-	}
-	kfree(storage);
-	return status;
-}
-
-/* We are relying on page lock to serialize this */
-int bl_is_sector_init(struct pnfs_inval_markings *marks, sector_t isect)
-{
-	int rv;
-
-	spin_lock_bh(&marks->im_lock);
-	rv = _has_tag(&marks->im_tree, isect, EXTENT_INITIALIZED);
-	spin_unlock_bh(&marks->im_lock);
-	return rv;
-}
-
-/* Assume start, end already sector aligned */
-static int
-_range_has_tag(struct my_tree *tree, u64 start, u64 end, int32_t tag)
-{
-	struct pnfs_inval_tracking *pos;
-	u64 expect = 0;
-
-	dprintk("%s(%llu, %llu, %i) enter\n", __func__, start, end, tag);
-	list_for_each_entry_reverse(pos, &tree->mtt_stub, it_link) {
-		if (pos->it_sector >= end)
-			continue;
-		if (!expect) {
-			if ((pos->it_sector == end - tree->mtt_step_size) &&
-			    (pos->it_tags & (1 << tag))) {
-				expect = pos->it_sector - tree->mtt_step_size;
-				if (pos->it_sector < tree->mtt_step_size || expect < start)
-					return 1;
-				continue;
-			} else {
-				return 0;
-			}
-		}
-		if (pos->it_sector != expect || !(pos->it_tags & (1 << tag)))
-			return 0;
-		expect -= tree->mtt_step_size;
-		if (expect < start)
-			return 1;
-	}
-	return 0;
-}
-
-static int is_range_written(struct pnfs_inval_markings *marks,
-			    sector_t start, sector_t end)
-{
-	int rv;
-
-	spin_lock_bh(&marks->im_lock);
-	rv = _range_has_tag(&marks->im_tree, start, end, EXTENT_WRITTEN);
-	spin_unlock_bh(&marks->im_lock);
-	return rv;
-}
-
-/* Marks sectors in [offest, offset_length) as having been initialized.
- * All lengths are step-aligned, where step is min(pagesize, blocksize).
- * Currently assumes offset is page-aligned
- */
-int bl_mark_sectors_init(struct pnfs_inval_markings *marks,
-			     sector_t offset, sector_t length)
-{
-	sector_t start, end;
-
-	dprintk("%s(offset=%llu,len=%llu) enter\n",
-		__func__, (u64)offset, (u64)length);
-
-	start = normalize(offset, marks->im_block_size);
-	end = normalize_up(offset + length, marks->im_block_size);
-	if (_preload_range(marks, start, end - start))
-		goto outerr;
-
-	spin_lock_bh(&marks->im_lock);
-	if (_set_range(&marks->im_tree, EXTENT_INITIALIZED, offset, length))
-		goto out_unlock;
-	spin_unlock_bh(&marks->im_lock);
-
-	return 0;
-
-out_unlock:
-	spin_unlock_bh(&marks->im_lock);
-outerr:
-	return -ENOMEM;
-}
-
-/* Marks sectors in [offest, offset+length) as having been written to disk.
- * All lengths should be block aligned.
- */
-static int mark_written_sectors(struct pnfs_inval_markings *marks,
-				sector_t offset, sector_t length)
-{
-	int status;
-
-	dprintk("%s(offset=%llu,len=%llu) enter\n", __func__,
-		(u64)offset, (u64)length);
-	spin_lock_bh(&marks->im_lock);
-	status = _set_range(&marks->im_tree, EXTENT_WRITTEN, offset, length);
-	spin_unlock_bh(&marks->im_lock);
-	return status;
-}
-
-static void print_short_extent(struct pnfs_block_short_extent *be)
-{
-	dprintk("PRINT SHORT EXTENT extent %p\n", be);
-	if (be) {
-		dprintk("        be_f_offset %llu\n", (u64)be->bse_f_offset);
-		dprintk("        be_length   %llu\n", (u64)be->bse_length);
-	}
-}
-
-static void print_clist(struct list_head *list, unsigned int count)
-{
-	struct pnfs_block_short_extent *be;
-	unsigned int i = 0;
-
-	ifdebug(FACILITY) {
-		printk(KERN_DEBUG "****************\n");
-		printk(KERN_DEBUG "Extent list looks like:\n");
-		list_for_each_entry(be, list, bse_node) {
-			i++;
-			print_short_extent(be);
-		}
-		if (i != count)
-			printk(KERN_DEBUG "\n\nExpected %u entries\n\n\n", count);
-		printk(KERN_DEBUG "****************\n");
-	}
-}
-
-/* Note: In theory, we should do more checking that devid's match between
- * old and new, but if they don't, the lists are too corrupt to salvage anyway.
- */
-/* Note this is very similar to bl_add_merge_extent */
-static void add_to_commitlist(struct pnfs_block_layout *bl,
-			      struct pnfs_block_short_extent *new)
-{
-	struct list_head *clist = &bl->bl_commit;
-	struct pnfs_block_short_extent *old, *save;
-	sector_t end = new->bse_f_offset + new->bse_length;
-
-	dprintk("%s enter\n", __func__);
-	print_short_extent(new);
-	print_clist(clist, bl->bl_count);
-	bl->bl_count++;
-	/* Scan for proper place to insert, extending new to the left
-	 * as much as possible.
-	 */
-	list_for_each_entry_safe(old, save, clist, bse_node) {
-		if (new->bse_f_offset < old->bse_f_offset)
-			break;
-		if (end <= old->bse_f_offset + old->bse_length) {
-			/* Range is already in list */
-			bl->bl_count--;
-			kfree(new);
-			return;
-		} else if (new->bse_f_offset <=
-				old->bse_f_offset + old->bse_length) {
-			/* new overlaps or abuts existing be */
-			if (new->bse_mdev == old->bse_mdev) {
-				/* extend new to fully replace old */
-				new->bse_length += new->bse_f_offset -
-						old->bse_f_offset;
-				new->bse_f_offset = old->bse_f_offset;
-				list_del(&old->bse_node);
-				bl->bl_count--;
-				kfree(old);
-			}
-		}
-	}
-	/* Note that if we never hit the above break, old will not point to a
-	 * valid extent.  However, in that case &old->bse_node==list.
-	 */
-	list_add_tail(&new->bse_node, &old->bse_node);
-	/* Scan forward for overlaps.  If we find any, extend new and
-	 * remove the overlapped extent.
-	 */
-	old = list_prepare_entry(new, clist, bse_node);
-	list_for_each_entry_safe_continue(old, save, clist, bse_node) {
-		if (end < old->bse_f_offset)
-			break;
-		/* new overlaps or abuts old */
-		if (new->bse_mdev == old->bse_mdev) {
-			if (end < old->bse_f_offset + old->bse_length) {
-				/* extend new to fully cover old */
-				end = old->bse_f_offset + old->bse_length;
-				new->bse_length = end - new->bse_f_offset;
-			}
-			list_del(&old->bse_node);
-			bl->bl_count--;
-			kfree(old);
-		}
-	}
-	dprintk("%s: after merging\n", __func__);
-	print_clist(clist, bl->bl_count);
-}
-
-/* Note the range described by offset, length is guaranteed to be contained
- * within be.
- * new will be freed, either by this function or add_to_commitlist if they
- * decide not to use it, or after LAYOUTCOMMIT uses it in the commitlist.
- */
-int bl_mark_for_commit(struct pnfs_block_extent *be,
-		    sector_t offset, sector_t length,
-		    struct pnfs_block_short_extent *new)
-{
-	sector_t new_end, end = offset + length;
-	struct pnfs_block_layout *bl = container_of(be->be_inval,
-						    struct pnfs_block_layout,
-						    bl_inval);
-
-	mark_written_sectors(be->be_inval, offset, length);
-	/* We want to add the range to commit list, but it must be
-	 * block-normalized, and verified that the normalized range has
-	 * been entirely written to disk.
-	 */
-	new->bse_f_offset = offset;
-	offset = normalize(offset, bl->bl_blocksize);
-	if (offset < new->bse_f_offset) {
-		if (is_range_written(be->be_inval, offset, new->bse_f_offset))
-			new->bse_f_offset = offset;
-		else
-			new->bse_f_offset = offset + bl->bl_blocksize;
-	}
-	new_end = normalize_up(end, bl->bl_blocksize);
-	if (end < new_end) {
-		if (is_range_written(be->be_inval, end, new_end))
-			end = new_end;
-		else
-			end = new_end - bl->bl_blocksize;
-	}
-	if (end <= new->bse_f_offset) {
-		kfree(new);
-		return 0;
-	}
-	new->bse_length = end - new->bse_f_offset;
-	new->bse_devid = be->be_devid;
-	new->bse_mdev = be->be_mdev;
-
-	spin_lock(&bl->bl_ext_lock);
-	add_to_commitlist(bl, new);
-	spin_unlock(&bl->bl_ext_lock);
-	return 0;
-}
-
-static void print_bl_extent(struct pnfs_block_extent *be)
-{
-	dprintk("PRINT EXTENT extent %p\n", be);
-	if (be) {
-		dprintk("        be_f_offset %llu\n", (u64)be->be_f_offset);
-		dprintk("        be_length   %llu\n", (u64)be->be_length);
-		dprintk("        be_v_offset %llu\n", (u64)be->be_v_offset);
-		dprintk("        be_state    %d\n", be->be_state);
-	}
-}
-
-static void
-destroy_extent(struct kref *kref)
-{
-	struct pnfs_block_extent *be;
-
-	be = container_of(kref, struct pnfs_block_extent, be_refcnt);
-	dprintk("%s be=%p\n", __func__, be);
-	kfree(be);
-}
-
-void
-bl_put_extent(struct pnfs_block_extent *be)
-{
-	if (be) {
-		dprintk("%s enter %p (%i)\n", __func__, be,
-			atomic_read(&be->be_refcnt.refcount));
-		kref_put(&be->be_refcnt, destroy_extent);
-	}
-}
-
-struct pnfs_block_extent *bl_alloc_extent(void)
-{
-	struct pnfs_block_extent *be;
-
-	be = kmalloc(sizeof(struct pnfs_block_extent), GFP_NOFS);
-	if (!be)
-		return NULL;
-	INIT_LIST_HEAD(&be->be_node);
-	kref_init(&be->be_refcnt);
-	be->be_inval = NULL;
-	return be;
-}
-
-static void print_elist(struct list_head *list)
-{
-	struct pnfs_block_extent *be;
-	dprintk("****************\n");
-	dprintk("Extent list looks like:\n");
-	list_for_each_entry(be, list, be_node) {
-		print_bl_extent(be);
-	}
-	dprintk("****************\n");
-}
-
-static inline int
-extents_consistent(struct pnfs_block_extent *old, struct pnfs_block_extent *new)
-{
-	/* Note this assumes new->be_f_offset >= old->be_f_offset */
-	return (new->be_state == old->be_state) &&
-		((new->be_state == PNFS_BLOCK_NONE_DATA) ||
-		 ((new->be_v_offset - old->be_v_offset ==
-		   new->be_f_offset - old->be_f_offset) &&
-		  new->be_mdev == old->be_mdev));
-}
-
-/* Adds new to appropriate list in bl, modifying new and removing existing
- * extents as appropriate to deal with overlaps.
- *
- * See bl_find_get_extent for list constraints.
- *
- * Refcount on new is already set.  If end up not using it, or error out,
- * need to put the reference.
- *
- * bl->bl_ext_lock is held by caller.
- */
-int
-bl_add_merge_extent(struct pnfs_block_layout *bl,
-		     struct pnfs_block_extent *new)
-{
-	struct pnfs_block_extent *be, *tmp;
-	sector_t end = new->be_f_offset + new->be_length;
-	struct list_head *list;
-
-	dprintk("%s enter with be=%p\n", __func__, new);
-	print_bl_extent(new);
-	list = &bl->bl_extents[bl_choose_list(new->be_state)];
-	print_elist(list);
-
-	/* Scan for proper place to insert, extending new to the left
-	 * as much as possible.
-	 */
-	list_for_each_entry_safe_reverse(be, tmp, list, be_node) {
-		if (new->be_f_offset >= be->be_f_offset + be->be_length)
-			break;
-		if (new->be_f_offset >= be->be_f_offset) {
-			if (end <= be->be_f_offset + be->be_length) {
-				/* new is a subset of existing be*/
-				if (extents_consistent(be, new)) {
-					dprintk("%s: new is subset, ignoring\n",
-						__func__);
-					bl_put_extent(new);
-					return 0;
-				} else {
-					goto out_err;
-				}
-			} else {
-				/* |<--   be   -->|
-				 *          |<--   new   -->| */
-				if (extents_consistent(be, new)) {
-					/* extend new to fully replace be */
-					new->be_length += new->be_f_offset -
-						be->be_f_offset;
-					new->be_f_offset = be->be_f_offset;
-					new->be_v_offset = be->be_v_offset;
-					dprintk("%s: removing %p\n", __func__, be);
-					list_del(&be->be_node);
-					bl_put_extent(be);
-				} else {
-					goto out_err;
-				}
-			}
-		} else if (end >= be->be_f_offset + be->be_length) {
-			/* new extent overlap existing be */
-			if (extents_consistent(be, new)) {
-				/* extend new to fully replace be */
-				dprintk("%s: removing %p\n", __func__, be);
-				list_del(&be->be_node);
-				bl_put_extent(be);
-			} else {
-				goto out_err;
-			}
-		} else if (end > be->be_f_offset) {
-			/*           |<--   be   -->|
-			 *|<--   new   -->| */
-			if (extents_consistent(new, be)) {
-				/* extend new to fully replace be */
-				new->be_length += be->be_f_offset + be->be_length -
-					new->be_f_offset - new->be_length;
-				dprintk("%s: removing %p\n", __func__, be);
-				list_del(&be->be_node);
-				bl_put_extent(be);
-			} else {
-				goto out_err;
-			}
-		}
-	}
-	/* Note that if we never hit the above break, be will not point to a
-	 * valid extent.  However, in that case &be->be_node==list.
-	 */
-	list_add(&new->be_node, &be->be_node);
-	dprintk("%s: inserting new\n", __func__);
-	print_elist(list);
-	/* FIXME - The per-list consistency checks have all been done,
-	 * should now check cross-list consistency.
-	 */
-	return 0;
-
- out_err:
-	bl_put_extent(new);
-	return -EIO;
-}
-
-/* Returns extent, or NULL.  If a second READ extent exists, it is returned
- * in cow_read, if given.
- *
- * The extents are kept in two seperate ordered lists, one for READ and NONE,
- * one for READWRITE and INVALID.  Within each list, we assume:
- * 1. Extents are ordered by file offset.
- * 2. For any given isect, there is at most one extents that matches.
- */
-struct pnfs_block_extent *
-bl_find_get_extent(struct pnfs_block_layout *bl, sector_t isect,
-	    struct pnfs_block_extent **cow_read)
-{
-	struct pnfs_block_extent *be, *cow, *ret;
-	int i;
-
-	dprintk("%s enter with isect %llu\n", __func__, (u64)isect);
-	cow = ret = NULL;
-	spin_lock(&bl->bl_ext_lock);
-	for (i = 0; i < EXTENT_LISTS; i++) {
-		list_for_each_entry_reverse(be, &bl->bl_extents[i], be_node) {
-			if (isect >= be->be_f_offset + be->be_length)
-				break;
-			if (isect >= be->be_f_offset) {
-				/* We have found an extent */
-				dprintk("%s Get %p (%i)\n", __func__, be,
-					atomic_read(&be->be_refcnt.refcount));
-				kref_get(&be->be_refcnt);
-				if (!ret)
-					ret = be;
-				else if (be->be_state != PNFS_BLOCK_READ_DATA)
-					bl_put_extent(be);
-				else
-					cow = be;
-				break;
-			}
-		}
-		if (ret &&
-		    (!cow_read || ret->be_state != PNFS_BLOCK_INVALID_DATA))
-			break;
-	}
-	spin_unlock(&bl->bl_ext_lock);
-	if (cow_read)
-		*cow_read = cow;
-	print_bl_extent(ret);
-	return ret;
-}
-
-/* Similar to bl_find_get_extent, but called with lock held, and ignores cow */
-static struct pnfs_block_extent *
-bl_find_get_extent_locked(struct pnfs_block_layout *bl, sector_t isect)
-{
-	struct pnfs_block_extent *be, *ret = NULL;
-	int i;
-
-	dprintk("%s enter with isect %llu\n", __func__, (u64)isect);
-	for (i = 0; i < EXTENT_LISTS; i++) {
-		if (ret)
-			break;
-		list_for_each_entry_reverse(be, &bl->bl_extents[i], be_node) {
-			if (isect >= be->be_f_offset + be->be_length)
-				break;
-			if (isect >= be->be_f_offset) {
-				/* We have found an extent */
-				dprintk("%s Get %p (%i)\n", __func__, be,
-					atomic_read(&be->be_refcnt.refcount));
-				kref_get(&be->be_refcnt);
-				ret = be;
-				break;
-			}
-		}
-	}
-	print_bl_extent(ret);
-	return ret;
-}
-
-int
-encode_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
-			       struct xdr_stream *xdr,
-			       const struct nfs4_layoutcommit_args *arg)
-{
-	struct pnfs_block_short_extent *lce, *save;
-	unsigned int count = 0;
-	__be32 *p, *xdr_start;
-
-	dprintk("%s enter\n", __func__);
-	/* BUG - creation of bl_commit is buggy - need to wait for
-	 * entire block to be marked WRITTEN before it can be added.
-	 */
-	spin_lock(&bl->bl_ext_lock);
-	/* Want to adjust for possible truncate */
-	/* We now want to adjust argument range */
-
-	/* XDR encode the ranges found */
-	xdr_start = xdr_reserve_space(xdr, 8);
-	if (!xdr_start)
-		goto out;
-	list_for_each_entry_safe(lce, save, &bl->bl_commit, bse_node) {
-		p = xdr_reserve_space(xdr, 7 * 4 + sizeof(lce->bse_devid.data));
-		if (!p)
-			break;
-		p = xdr_encode_opaque_fixed(p, lce->bse_devid.data, NFS4_DEVICEID4_SIZE);
-		p = xdr_encode_hyper(p, lce->bse_f_offset << SECTOR_SHIFT);
-		p = xdr_encode_hyper(p, lce->bse_length << SECTOR_SHIFT);
-		p = xdr_encode_hyper(p, 0LL);
-		*p++ = cpu_to_be32(PNFS_BLOCK_READWRITE_DATA);
-		list_move_tail(&lce->bse_node, &bl->bl_committing);
-		bl->bl_count--;
-		count++;
-	}
-	xdr_start[0] = cpu_to_be32((xdr->p - xdr_start - 1) * 4);
-	xdr_start[1] = cpu_to_be32(count);
-out:
-	spin_unlock(&bl->bl_ext_lock);
-	dprintk("%s found %i ranges\n", __func__, count);
-	return 0;
-}
-
-/* Helper function to set_to_rw that initialize a new extent */
-static void
-_prep_new_extent(struct pnfs_block_extent *new,
-		 struct pnfs_block_extent *orig,
-		 sector_t offset, sector_t length, int state)
-{
-	kref_init(&new->be_refcnt);
-	/* don't need to INIT_LIST_HEAD(&new->be_node) */
-	memcpy(&new->be_devid, &orig->be_devid, sizeof(struct nfs4_deviceid));
-	new->be_mdev = orig->be_mdev;
-	new->be_f_offset = offset;
-	new->be_length = length;
-	new->be_v_offset = orig->be_v_offset - orig->be_f_offset + offset;
-	new->be_state = state;
-	new->be_inval = orig->be_inval;
-}
-
-/* Tries to merge be with extent in front of it in list.
- * Frees storage if not used.
- */
-static struct pnfs_block_extent *
-_front_merge(struct pnfs_block_extent *be, struct list_head *head,
-	     struct pnfs_block_extent *storage)
-{
-	struct pnfs_block_extent *prev;
-
-	if (!storage)
-		goto no_merge;
-	if (&be->be_node == head || be->be_node.prev == head)
-		goto no_merge;
-	prev = list_entry(be->be_node.prev, struct pnfs_block_extent, be_node);
-	if ((prev->be_f_offset + prev->be_length != be->be_f_offset) ||
-	    !extents_consistent(prev, be))
-		goto no_merge;
-	_prep_new_extent(storage, prev, prev->be_f_offset,
-			 prev->be_length + be->be_length, prev->be_state);
-	list_replace(&prev->be_node, &storage->be_node);
-	bl_put_extent(prev);
-	list_del(&be->be_node);
-	bl_put_extent(be);
-	return storage;
-
- no_merge:
-	kfree(storage);
-	return be;
-}
-
-static u64
-set_to_rw(struct pnfs_block_layout *bl, u64 offset, u64 length)
-{
-	u64 rv = offset + length;
-	struct pnfs_block_extent *be, *e1, *e2, *e3, *new, *old;
-	struct pnfs_block_extent *children[3];
-	struct pnfs_block_extent *merge1 = NULL, *merge2 = NULL;
-	int i = 0, j;
-
-	dprintk("%s(%llu, %llu)\n", __func__, offset, length);
-	/* Create storage for up to three new extents e1, e2, e3 */
-	e1 = kmalloc(sizeof(*e1), GFP_ATOMIC);
-	e2 = kmalloc(sizeof(*e2), GFP_ATOMIC);
-	e3 = kmalloc(sizeof(*e3), GFP_ATOMIC);
-	/* BUG - we are ignoring any failure */
-	if (!e1 || !e2 || !e3)
-		goto out_nosplit;
-
-	spin_lock(&bl->bl_ext_lock);
-	be = bl_find_get_extent_locked(bl, offset);
-	rv = be->be_f_offset + be->be_length;
-	if (be->be_state != PNFS_BLOCK_INVALID_DATA) {
-		spin_unlock(&bl->bl_ext_lock);
-		goto out_nosplit;
-	}
-	/* Add e* to children, bumping e*'s krefs */
-	if (be->be_f_offset != offset) {
-		_prep_new_extent(e1, be, be->be_f_offset,
-				 offset - be->be_f_offset,
-				 PNFS_BLOCK_INVALID_DATA);
-		children[i++] = e1;
-		print_bl_extent(e1);
-	} else
-		merge1 = e1;
-	_prep_new_extent(e2, be, offset,
-			 min(length, be->be_f_offset + be->be_length - offset),
-			 PNFS_BLOCK_READWRITE_DATA);
-	children[i++] = e2;
-	print_bl_extent(e2);
-	if (offset + length < be->be_f_offset + be->be_length) {
-		_prep_new_extent(e3, be, e2->be_f_offset + e2->be_length,
-				 be->be_f_offset + be->be_length -
-				 offset - length,
-				 PNFS_BLOCK_INVALID_DATA);
-		children[i++] = e3;
-		print_bl_extent(e3);
-	} else
-		merge2 = e3;
-
-	/* Remove be from list, and insert the e* */
-	/* We don't get refs on e*, since this list is the base reference
-	 * set when init'ed.
-	 */
-	if (i < 3)
-		children[i] = NULL;
-	new = children[0];
-	list_replace(&be->be_node, &new->be_node);
-	bl_put_extent(be);
-	new = _front_merge(new, &bl->bl_extents[RW_EXTENT], merge1);
-	for (j = 1; j < i; j++) {
-		old = new;
-		new = children[j];
-		list_add(&new->be_node, &old->be_node);
-	}
-	if (merge2) {
-		/* This is a HACK, should just create a _back_merge function */
-		new = list_entry(new->be_node.next,
-				 struct pnfs_block_extent, be_node);
-		new = _front_merge(new, &bl->bl_extents[RW_EXTENT], merge2);
-	}
-	spin_unlock(&bl->bl_ext_lock);
-
-	/* Since we removed the base reference above, be is now scheduled for
-	 * destruction.
-	 */
-	bl_put_extent(be);
-	dprintk("%s returns %llu after split\n", __func__, rv);
-	return rv;
-
- out_nosplit:
-	kfree(e1);
-	kfree(e2);
-	kfree(e3);
-	dprintk("%s returns %llu without splitting\n", __func__, rv);
-	return rv;
-}
-
-void
-clean_pnfs_block_layoutupdate(struct pnfs_block_layout *bl,
-			      const struct nfs4_layoutcommit_args *arg,
-			      int status)
-{
-	struct pnfs_block_short_extent *lce, *save;
-
-	dprintk("%s status %d\n", __func__, status);
-	list_for_each_entry_safe(lce, save, &bl->bl_committing, bse_node) {
-		if (likely(!status)) {
-			u64 offset = lce->bse_f_offset;
-			u64 end = offset + lce->bse_length;
-
-			do {
-				offset = set_to_rw(bl, offset, end - offset);
-			} while (offset < end);
-			list_del(&lce->bse_node);
-
-			kfree(lce);
-		} else {
-			list_del(&lce->bse_node);
-			spin_lock(&bl->bl_ext_lock);
-			add_to_commitlist(bl, lce);
-			spin_unlock(&bl->bl_ext_lock);
-		}
-	}
-}
-
-int bl_push_one_short_extent(struct pnfs_inval_markings *marks)
-{
-	struct pnfs_block_short_extent *new;
-
-	new = kmalloc(sizeof(*new), GFP_NOFS);
-	if (unlikely(!new))
-		return -ENOMEM;
-
-	spin_lock_bh(&marks->im_lock);
-	list_add(&new->bse_node, &marks->im_extents);
-	spin_unlock_bh(&marks->im_lock);
-
-	return 0;
-}
-
-struct pnfs_block_short_extent *
-bl_pop_one_short_extent(struct pnfs_inval_markings *marks)
-{
-	struct pnfs_block_short_extent *rv = NULL;
-
-	spin_lock_bh(&marks->im_lock);
-	if (!list_empty(&marks->im_extents)) {
-		rv = list_entry((&marks->im_extents)->next,
-				struct pnfs_block_short_extent, bse_node);
-		list_del_init(&rv->bse_node);
-	}
-	spin_unlock_bh(&marks->im_lock);
-
-	return rv;
-}
-
-void bl_free_short_extents(struct pnfs_inval_markings *marks, int num_to_free)
-{
-	struct pnfs_block_short_extent *se = NULL, *tmp;
-
-	if (num_to_free <= 0)
-		return;
-
-	spin_lock(&marks->im_lock);
-	list_for_each_entry_safe(se, tmp, &marks->im_extents, bse_node) {
-		list_del(&se->bse_node);
-		kfree(se);
-		if (--num_to_free == 0)
-			break;
-	}
-	spin_unlock(&marks->im_lock);
-
-	BUG_ON(num_to_free > 0);
-}
-- 
1.9.1



* [PATCH 7/9] pnfs/blocklayout: implement the return_range method
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
                   ` (5 preceding siblings ...)
  2014-09-10 15:23 ` [PATCH 6/9] pnfs/blocklayout: rewrite extent tracking Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  2014-09-11 14:16   ` Peng Tao
  2014-09-10 15:23 ` [PATCH 8/9] pnfs/blocklayout: return layouts on setattr Christoph Hellwig
  2014-09-10 15:23 ` [PATCH 9/9] pnfs/blocklayout: allocate separate pages for the layoutcommit payload Christoph Hellwig
  8 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

This allows removing extents from the extent tree, especially on truncate
operations, and thus fixes reads from truncated and re-extended files that
previously returned stale data.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/blocklayout/blocklayout.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 42b6f9c..a7524c4 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -470,6 +470,35 @@ static struct pnfs_layout_segment *bl_alloc_lseg(struct pnfs_layout_hdr *lo,
 }
 
 static void
+bl_return_range(struct pnfs_layout_hdr *lo,
+		struct pnfs_layout_range *range)
+{
+	struct pnfs_block_layout *bl = BLK_LO2EXT(lo);
+	sector_t offset = range->offset >> SECTOR_SHIFT, end;
+	int err;
+
+	if (range->offset % 8) {
+		dprintk("%s: offset %lld not block size aligned\n",
+			__func__, range->offset);
+		return;
+	}
+
+	if (range->length != NFS4_MAX_UINT64) {
+		if (range->length % 8) {
+			dprintk("%s: length %lld not block size aligned\n",
+				__func__, range->length);
+			return;
+		}
+
+		end = offset + (range->length >> SECTOR_SHIFT);
+	} else {
+		end = round_down(NFS4_MAX_UINT64, PAGE_SIZE);
+	}
+
+	err = ext_tree_remove(bl, range->iomode & IOMODE_RW, offset, end);
+}
+
+static void
 bl_encode_layoutcommit(struct pnfs_layout_hdr *lo, struct xdr_stream *xdr,
 		       const struct nfs4_layoutcommit_args *arg)
 {
@@ -777,6 +806,7 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
 	.free_layout_hdr		= bl_free_layout_hdr,
 	.alloc_lseg			= bl_alloc_lseg,
 	.free_lseg			= bl_free_lseg,
+	.return_range			= bl_return_range,
 	.encode_layoutcommit		= bl_encode_layoutcommit,
 	.cleanup_layoutcommit		= bl_cleanup_layoutcommit,
 	.set_layoutdriver		= bl_set_layoutdriver,
-- 
1.9.1



* [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
                   ` (6 preceding siblings ...)
  2014-09-10 15:23 ` [PATCH 7/9] pnfs/blocklayout: implement the return_range method Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  2014-09-11 14:24   ` Peng Tao
  2014-09-10 15:23 ` [PATCH 9/9] pnfs/blocklayout: allocate separate pages for the layoutcommit payload Christoph Hellwig
  8 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

This speeds up truncate-heavy workloads like fsx by multiple orders of
magnitude.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/blocklayout/blocklayout.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index a7524c4..d5a2b87 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -799,7 +799,8 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
 	.id				= LAYOUT_BLOCK_VOLUME,
 	.name				= "LAYOUT_BLOCK_VOLUME",
 	.owner				= THIS_MODULE,
-	.flags				= PNFS_READ_WHOLE_PAGE,
+	.flags				= PNFS_LAYOUTRET_ON_SETATTR |
+					  PNFS_READ_WHOLE_PAGE,
 	.read_pagelist			= bl_read_pagelist,
 	.write_pagelist			= bl_write_pagelist,
 	.alloc_layout_hdr		= bl_alloc_layout_hdr,
-- 
1.9.1



* [PATCH 9/9] pnfs/blocklayout: allocate separate pages for the layoutcommit payload
  2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
                   ` (7 preceding siblings ...)
  2014-09-10 15:23 ` [PATCH 8/9] pnfs/blocklayout: return layouts on setattr Christoph Hellwig
@ 2014-09-10 15:23 ` Christoph Hellwig
  8 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-10 15:23 UTC (permalink / raw)
  To: linux-nfs

Instead of overflowing the XDR send buffer with our extent list, allocate
pages and pre-encode the layoutupdate payload into them.  We optimistically
allocate a single page using alloc_page and only switch to vmalloc when
more extents are outstanding than fit in it.  Currently there is only a
single test case (xfstests generic/113) that can reproduce extent lists
large enough for this to occur.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/blocklayout/blocklayout.c |  15 ++----
 fs/nfs/blocklayout/blocklayout.h |   8 ++--
 fs/nfs/blocklayout/extent_tree.c | 100 +++++++++++++++++++++++++++++++--------
 3 files changed, 90 insertions(+), 33 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index d5a2b87..fdc065c 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -498,21 +498,16 @@ bl_return_range(struct pnfs_layout_hdr *lo,
 	err = ext_tree_remove(bl, range->iomode & IOMODE_RW, offset, end);
 }
 
-static void
-bl_encode_layoutcommit(struct pnfs_layout_hdr *lo, struct xdr_stream *xdr,
-		       const struct nfs4_layoutcommit_args *arg)
+static int
+bl_prepare_layoutcommit(struct nfs4_layoutcommit_args *arg)
 {
-	dprintk("%s enter\n", __func__);
-	ext_tree_encode_commit(BLK_LO2EXT(lo), xdr);
+	return ext_tree_prepare_commit(arg);
 }
 
 static void
 bl_cleanup_layoutcommit(struct nfs4_layoutcommit_data *lcdata)
 {
-	struct pnfs_layout_hdr *lo = NFS_I(lcdata->args.inode)->layout;
-
-	dprintk("%s enter\n", __func__);
-	ext_tree_mark_committed(BLK_LO2EXT(lo), lcdata->res.status);
+	ext_tree_mark_committed(&lcdata->args, lcdata->res.status);
 }
 
 static void free_blk_mountid(struct block_mount_id *mid)
@@ -808,7 +803,7 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
 	.alloc_lseg			= bl_alloc_lseg,
 	.free_lseg			= bl_free_lseg,
 	.return_range			= bl_return_range,
-	.encode_layoutcommit		= bl_encode_layoutcommit,
+	.prepare_layoutcommit		= bl_prepare_layoutcommit,
 	.cleanup_layoutcommit		= bl_cleanup_layoutcommit,
 	.set_layoutdriver		= bl_set_layoutdriver,
 	.clear_layoutdriver		= bl_clear_layoutdriver,
diff --git a/fs/nfs/blocklayout/blocklayout.h b/fs/nfs/blocklayout/blocklayout.h
index b4f66d8..6f3a550 100644
--- a/fs/nfs/blocklayout/blocklayout.h
+++ b/fs/nfs/blocklayout/blocklayout.h
@@ -80,6 +80,9 @@ struct pnfs_block_extent {
 	unsigned int	be_tag;
 };
 
+/* on the wire size of the extent */
+#define BL_EXTENT_SIZE	(7 * sizeof(__be32) + NFS4_DEVICEID4_SIZE)
+
 struct pnfs_block_layout {
 	struct pnfs_layout_hdr	bl_layout;
 	struct rb_root		bl_ext_rw;
@@ -138,8 +141,7 @@ int ext_tree_mark_written(struct pnfs_block_layout *bl, sector_t start,
 		sector_t len);
 bool ext_tree_lookup(struct pnfs_block_layout *bl, sector_t isect,
 		struct pnfs_block_extent *ret, bool rw);
-int ext_tree_encode_commit(struct pnfs_block_layout *bl,
-		struct xdr_stream *xdr);
-void ext_tree_mark_committed(struct pnfs_block_layout *bl, int status);
+int ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg);
+void ext_tree_mark_committed(struct nfs4_layoutcommit_args *arg, int status);
 
 #endif /* FS_NFS_NFS4BLOCKLAYOUT_H */
diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
index c8c59a5..20c12b6 100644
--- a/fs/nfs/blocklayout/extent_tree.c
+++ b/fs/nfs/blocklayout/extent_tree.c
@@ -467,19 +467,25 @@ out:
 	return err;
 }
 
-int
-ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
+static void ext_tree_free_commitdata(struct nfs4_layoutcommit_args *arg,
+		size_t buffer_size)
 {
-	struct pnfs_block_extent *be;
-	unsigned int count = 0;
-	__be32 *p, *xdr_start;
-	int ret = 0;
+	if (arg->layoutupdate_pages != &arg->layoutupdate_page) {
+		int nr_pages = DIV_ROUND_UP(buffer_size, PAGE_SIZE), i;
 
-	dprintk("%s enter\n", __func__);
+		for (i = 0; i < nr_pages; i++)
+			put_page(arg->layoutupdate_pages[i]);
+		kfree(arg->layoutupdate_pages);
+	} else {
+		put_page(arg->layoutupdate_page);
+	}
+}
 
-	xdr_start = xdr_reserve_space(xdr, 8);
-	if (!xdr_start)
-		return -ENOSPC;
+static int ext_tree_encode_commit(struct pnfs_block_layout *bl, __be32 *p,
+		size_t buffer_size, size_t *count)
+{
+	struct pnfs_block_extent *be;
+	int ret = 0;
 
 	spin_lock(&bl->bl_ext_lock);
 	for (be = ext_tree_first(&bl->bl_ext_rw); be; be = ext_tree_next(be)) {
@@ -487,12 +493,11 @@ ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
 		    be->be_tag != EXTENT_WRITTEN)
 			continue;
 
-		p = xdr_reserve_space(xdr, 7 * sizeof(__be32) +
-					NFS4_DEVICEID4_SIZE);
-		if (!p) {
-			printk("%s: out of space for extent list\n", __func__);
+		(*count)++;
+		if (*count * BL_EXTENT_SIZE > buffer_size) {
+			/* keep counting.. */
 			ret = -ENOSPC;
-			break;
+			continue;
 		}
 
 		p = xdr_encode_opaque_fixed(p, be->be_devid.data,
@@ -503,25 +508,80 @@ ext_tree_encode_commit(struct pnfs_block_layout *bl, struct xdr_stream *xdr)
 		*p++ = cpu_to_be32(PNFS_BLOCK_READWRITE_DATA);
 
 		be->be_tag = EXTENT_COMMITTING;
-		count++;
 	}
 	spin_unlock(&bl->bl_ext_lock);
 
-	xdr_start[0] = cpu_to_be32((xdr->p - xdr_start - 1) * 4);
-	xdr_start[1] = cpu_to_be32(count);
+	return ret;
+}
+
+int
+ext_tree_prepare_commit(struct nfs4_layoutcommit_args *arg)
+{
+	struct pnfs_block_layout *bl = BLK_LO2EXT(NFS_I(arg->inode)->layout);
+	size_t count = 0, buffer_size = PAGE_SIZE;
+	__be32 *start_p;
+	int ret;
+
+	dprintk("%s enter\n", __func__);
+
+	arg->layoutupdate_page = alloc_page(GFP_NOFS);
+	if (!arg->layoutupdate_page)
+		return -ENOMEM;
+	start_p = page_address(arg->layoutupdate_page);
+	arg->layoutupdate_pages = &arg->layoutupdate_page;
+
+retry:
+	ret = ext_tree_encode_commit(bl, start_p + 1, buffer_size, &count);
+	if (unlikely(ret)) {
+		ext_tree_free_commitdata(arg, buffer_size);
+
+		buffer_size = sizeof(__be32) + BL_EXTENT_SIZE * count;
+		count = 0;
+
+		arg->layoutupdate_pages =
+			kcalloc(DIV_ROUND_UP(buffer_size, PAGE_SIZE),
+				sizeof(struct page *), GFP_NOFS);
+		if (!arg->layoutupdate_pages)
+			return -ENOMEM;
+
+		start_p = __vmalloc(buffer_size, GFP_NOFS, PAGE_KERNEL);
+		if (!start_p) {
+			kfree(arg->layoutupdate_pages);
+			return -ENOMEM;
+		}
+
+		goto retry;
+	}
+
+	*start_p = cpu_to_be32(count);
+	arg->layoutupdate_len = sizeof(__be32) + BL_EXTENT_SIZE * count;
+
+	if (unlikely(arg->layoutupdate_pages != &arg->layoutupdate_page)) {
+		__be32 *p = start_p;
+		int i = 0;
+
+		for (p = start_p;
+		     p < start_p + arg->layoutupdate_len;
+		     p += PAGE_SIZE) {
+			arg->layoutupdate_pages[i++] = vmalloc_to_page(p);
+		}
+	}
 
 	dprintk("%s found %i ranges\n", __func__, count);
-	return ret;
+	return 0;
 }
 
 void
-ext_tree_mark_committed(struct pnfs_block_layout *bl, int status)
+ext_tree_mark_committed(struct nfs4_layoutcommit_args *arg, int status)
 {
+	struct pnfs_block_layout *bl = BLK_LO2EXT(NFS_I(arg->inode)->layout);
 	struct rb_root *root = &bl->bl_ext_rw;
 	struct pnfs_block_extent *be;
 
 	dprintk("%s status %d\n", __func__, status);
 
+	ext_tree_free_commitdata(arg, arg->layoutupdate_len);
+
 	spin_lock(&bl->bl_ext_lock);
 	for (be = ext_tree_first(root); be; be = ext_tree_next(be)) {
 		if (be->be_state != PNFS_BLOCK_INVALID_DATA ||
-- 
1.9.1
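As a sanity check on the sizing math in the patch above: assuming 4-byte XDR words and a 16-byte deviceid4 (the NFS4_DEVICEID4_SIZE defined by NFSv4.1), BL_EXTENT_SIZE works out to 7 * 4 + 16 = 44 bytes, so the single optimistically allocated 4k page holds 93 extents after the leading count word. A minimal check (the 4096-byte page size is an assumption):

```c
#include <assert.h>

#define XDR_WORD 4                 /* sizeof(__be32) */
#define NFS4_DEVICEID4_SIZE 16     /* deviceid4 is 16 bytes (RFC 5661) */
#define BL_EXTENT_SIZE (7 * XDR_WORD + NFS4_DEVICEID4_SIZE)
#define PAGE_SIZE_BYTES 4096       /* assumed page size */

/* Extents that fit in the first, optimistically allocated page,
 * leaving room for the leading 4-byte extent-count word. */
static int extents_per_page(void)
{
    return (PAGE_SIZE_BYTES - XDR_WORD) / BL_EXTENT_SIZE;
}
```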



* Re: [PATCH 3/9] pnfs: add return_range method
  2014-09-10 15:23 ` [PATCH 3/9] pnfs: add return_range method Christoph Hellwig
@ 2014-09-11 13:54   ` Peng Tao
  2014-09-11 15:20     ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Peng Tao @ 2014-09-11 13:54 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Linux NFS Mailing List

On Wed, Sep 10, 2014 at 11:23 PM, Christoph Hellwig <hch@lst.de> wrote:
> If a layout driver keeps per-inode state outside of the layout segments it
> needs to be notified of any layout returns or recalls on an inode, and not
> just about the freeing of layout segments.  Add a method to accomplish this,
> which will allow the block layout driver to handle the case of truncated
> and re-expanded files properly.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/nfs/callback_proc.c | 12 +++++++++---
>  fs/nfs/pnfs.c          | 10 ++++++++++
>  fs/nfs/pnfs.h          |  3 +++
>  3 files changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
> index 0d26951..56bd0ea 100644
> --- a/fs/nfs/callback_proc.c
> +++ b/fs/nfs/callback_proc.c
> @@ -181,10 +181,16 @@ static u32 initiate_file_draining(struct nfs_client *clp,
>         spin_lock(&ino->i_lock);
>         if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
>             pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
> -                                       &args->cbl_range))
> +                                       &args->cbl_range)) {
>                 rv = NFS4ERR_DELAY;
> -       else
> -               rv = NFS4ERR_NOMATCHING_LAYOUT;
> +               goto unlock;
> +       }
> +
> +       if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
It looks better to put return_range inside
pnfs_mark_matching_lsegs_invalid() so that it is called every time a
range of layout segments gets freed. That way the layout driver is
sure to free things up.

Cheers,
Tao


* Re: [PATCH 6/9] pnfs/blocklayout: rewrite extent tracking
  2014-09-10 15:23 ` [PATCH 6/9] pnfs/blocklayout: rewrite extent tracking Christoph Hellwig
@ 2014-09-11 14:11   ` Peng Tao
  2014-09-11 15:21     ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Peng Tao @ 2014-09-11 14:11 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Linux NFS Mailing List

Hi Christoph,

I really like your refactoring. It looks a lot better than the
original lists-every-where code. One minor comment below:

<snip..>
>  static void bl_write_cleanup(struct work_struct *work)
>  {
> -       struct rpc_task *task;
> -       struct nfs_pgio_header *hdr;
> +       struct rpc_task *task = container_of(work, struct rpc_task, u.tk_work);
> +       struct nfs_pgio_header *hdr =
> +                       container_of(task, struct nfs_pgio_header, task);
> +
>         dprintk("%s enter\n", __func__);
> -       task = container_of(work, struct rpc_task, u.tk_work);
> -       hdr = container_of(task, struct nfs_pgio_header, task);
> +
>         if (likely(!hdr->pnfs_error)) {
> -               /* Marks for LAYOUTCOMMIT */
> -               mark_extents_written(BLK_LSEG2EXT(hdr->lseg),
> -                                    hdr->args.offset, hdr->args.count);
> +               struct pnfs_block_layout *bl = BLK_LSEG2EXT(hdr->lseg);
> +               u64 start = hdr->args.offset & (loff_t)PAGE_CACHE_MASK;
> +               u64 end = (hdr->args.offset + hdr->args.count +
> +                       PAGE_CACHE_SIZE - 1) & (loff_t)PAGE_CACHE_MASK;
> +
> +               ext_tree_mark_written(bl, start >> SECTOR_SHIFT,
> +                                       (end - start) >> SECTOR_SHIFT);
The old code was written so that mark_extents_written() always
succeeded. Now that ext_tree_mark_written() may fail, the caller should
check the return value and resend through the MDS on failure; otherwise
there may be silent data corruption.

Thanks,
Tao


* Re: [PATCH 7/9] pnfs/blocklayout: implement the return_range method
  2014-09-10 15:23 ` [PATCH 7/9] pnfs/blocklayout: implement the return_range method Christoph Hellwig
@ 2014-09-11 14:16   ` Peng Tao
  2014-09-11 15:24     ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Peng Tao @ 2014-09-11 14:16 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Linux NFS Mailing List

On Wed, Sep 10, 2014 at 11:23 PM, Christoph Hellwig <hch@lst.de> wrote:
> This allows removing extents from the extent tree, especially on truncate
> operations, and thus fixes reads from truncated and re-extended files that
> previously returned stale data.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/nfs/blocklayout/blocklayout.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
> index 42b6f9c..a7524c4 100644
> --- a/fs/nfs/blocklayout/blocklayout.c
> +++ b/fs/nfs/blocklayout/blocklayout.c
> @@ -470,6 +470,35 @@ static struct pnfs_layout_segment *bl_alloc_lseg(struct pnfs_layout_hdr *lo,
>  }
>
>  static void
> +bl_return_range(struct pnfs_layout_hdr *lo,
> +               struct pnfs_layout_range *range)
> +{
> +       struct pnfs_block_layout *bl = BLK_LO2EXT(lo);
> +       sector_t offset = range->offset >> SECTOR_SHIFT, end;
> +       int err;
> +
> +       if (range->offset % 8) {
Why an arbitrary block size? You should be able to use the block size
returned by the server, right?

btw, did you test your patchset with smaller block sizes such as 2K/1K?
Did it work?

> +               dprintk("%s: offset %lld not block size aligned\n",
> +                       __func__, range->offset);
> +               return;
> +       }
> +
> +       if (range->length != NFS4_MAX_UINT64) {
> +               if (range->length % 8) {
ditto here.

Cheers,
Tao


* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-10 15:23 ` [PATCH 8/9] pnfs/blocklayout: return layouts on setattr Christoph Hellwig
@ 2014-09-11 14:24   ` Peng Tao
  2014-09-11 14:42     ` Boaz Harrosh
  2014-09-11 15:25     ` Christoph Hellwig
  0 siblings, 2 replies; 32+ messages in thread
From: Peng Tao @ 2014-09-11 14:24 UTC (permalink / raw)
  To: Christoph Hellwig, Boaz Harrosh; +Cc: Linux NFS Mailing List

On Wed, Sep 10, 2014 at 11:23 PM, Christoph Hellwig <hch@lst.de> wrote:
> This speeds up truncate-heavy workloads like fsx by multiple orders of
> magnitude.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/nfs/blocklayout/blocklayout.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
> index a7524c4..d5a2b87 100644
> --- a/fs/nfs/blocklayout/blocklayout.c
> +++ b/fs/nfs/blocklayout/blocklayout.c
> @@ -799,7 +799,8 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
>         .id                             = LAYOUT_BLOCK_VOLUME,
>         .name                           = "LAYOUT_BLOCK_VOLUME",
>         .owner                          = THIS_MODULE,
> -       .flags                          = PNFS_READ_WHOLE_PAGE,
> +       .flags                          = PNFS_LAYOUTRET_ON_SETATTR |
> +                                         PNFS_READ_WHOLE_PAGE,
The reason I didn't add it is that PNFS_LAYOUTRET_ON_SETATTR is
too much for the block layout. What we really want is to return layouts
on truncate and chown, instead of on _all_ setattr requests.

Boaz, does object layout require return on setattr for other reasons?
If not, I'd suggest we change PNFS_LAYOUTRET_ON_SETATTR to return only
on chown/truncate events.

Thanks,
Tao


* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-11 14:24   ` Peng Tao
@ 2014-09-11 14:42     ` Boaz Harrosh
  2014-09-11 15:25     ` Christoph Hellwig
  1 sibling, 0 replies; 32+ messages in thread
From: Boaz Harrosh @ 2014-09-11 14:42 UTC (permalink / raw)
  To: Peng Tao, Christoph Hellwig; +Cc: Linux NFS Mailing List

On 09/11/2014 05:24 PM, Peng Tao wrote:
> On Wed, Sep 10, 2014 at 11:23 PM, Christoph Hellwig <hch@lst.de> wrote:
>> This speeds up truncate-heavy workloads like fsx by multiple orders of
>> magnitude.
>>
>> Signed-off-by: Christoph Hellwig <hch@lst.de>
>> ---
>>  fs/nfs/blocklayout/blocklayout.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
>> index a7524c4..d5a2b87 100644
>> --- a/fs/nfs/blocklayout/blocklayout.c
>> +++ b/fs/nfs/blocklayout/blocklayout.c
>> @@ -799,7 +799,8 @@ static struct pnfs_layoutdriver_type blocklayout_type = {
>>         .id                             = LAYOUT_BLOCK_VOLUME,
>>         .name                           = "LAYOUT_BLOCK_VOLUME",
>>         .owner                          = THIS_MODULE,
>> -       .flags                          = PNFS_READ_WHOLE_PAGE,
>> +       .flags                          = PNFS_LAYOUTRET_ON_SETATTR |
>> +                                         PNFS_READ_WHOLE_PAGE,
> The reason I didn't add it was because PNFS_LAYOUTRET_ON_SETATTR is
> too much for blocks layout. What we really want is to return layouts
> on truncate and chown, instead of _all_ setattr requests.
> 
> Boaz, does object layout require return on setattr for other reasons?
> If not, I'd suggest we change PNFS_LAYOUTRET_ON_SETATTR to return only
> on chown/truncate events.
> 

ACK. chown/truncate

> Thanks,
> Tao
> 

Thanks
Boaz


* Re: [PATCH 3/9] pnfs: add return_range method
  2014-09-11 13:54   ` Peng Tao
@ 2014-09-11 15:20     ` Christoph Hellwig
  2014-09-11 15:30       ` Peng Tao
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-11 15:20 UTC (permalink / raw)
  To: Peng Tao; +Cc: Christoph Hellwig, Linux NFS Mailing List

On Thu, Sep 11, 2014 at 09:54:42PM +0800, Peng Tao wrote:
> It looks better to put return_range inside
> pnfs_mark_matching_lsegs_invalid() to have it called every time a
> range of layout segments all get freed. So that ld is sure to free
> things up.

Actually we specifically want to avoid that for cases like the internal
new stateid handling in pnfs_layout_process, where we might allocate
a new layout that overlaps an existing one, and where we need to keep
the extents around.



* Re: [PATCH 6/9] pnfs/blocklayout: rewrite extent tracking
  2014-09-11 14:11   ` Peng Tao
@ 2014-09-11 15:21     ` Christoph Hellwig
  0 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-11 15:21 UTC (permalink / raw)
  To: Peng Tao; +Cc: Christoph Hellwig, Linux NFS Mailing List

On Thu, Sep 11, 2014 at 10:11:42PM +0800, Peng Tao wrote:
> The old code works in a way that mark_extents_written() always
> succeeds. Now that ext_tree_mark_written() may fail, it should check
> for the return value and resend to MDS if failed, otherwise there may
> be silent data corruption.

It can only fail on memory allocation errors, which can indeed happen in
theory, although not in practice for the amounts we allocate.  But I'll
prepare testing using error injection and will handle it, thanks!



* Re: [PATCH 7/9] pnfs/blocklayout: implement the return_range method
  2014-09-11 14:16   ` Peng Tao
@ 2014-09-11 15:24     ` Christoph Hellwig
  0 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-11 15:24 UTC (permalink / raw)
  To: Peng Tao; +Cc: Christoph Hellwig, Linux NFS Mailing List

On Thu, Sep 11, 2014 at 10:16:56PM +0800, Peng Tao wrote:
> > diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
> > index 42b6f9c..a7524c4 100644
> > --- a/fs/nfs/blocklayout/blocklayout.c
> > +++ b/fs/nfs/blocklayout/blocklayout.c
> > @@ -470,6 +470,35 @@ static struct pnfs_layout_segment *bl_alloc_lseg(struct pnfs_layout_hdr *lo,
> >  }
> >
> >  static void
> > +bl_return_range(struct pnfs_layout_hdr *lo,
> > +               struct pnfs_layout_range *range)
> > +{
> > +       struct pnfs_block_layout *bl = BLK_LO2EXT(lo);
> > +       sector_t offset = range->offset >> SECTOR_SHIFT, end;
> > +       int err;
> > +
> > +       if (range->offset % 8) {
> why arbitrary block size? You should be able to use the blocksize
> returned by server, right?

I need to look into it.  Right now the client has all extents properly
aligned, and allowing a smaller size here would change that.  Give me
some time to test it and get back to you.

> 
> btw, did you test your patchset with smaller block size such as 2K/1K?
> Did it work?

I did test a very early version but haven't redone the tests.  It worked
fine because even though the server supported smaller blocks, the client
now never asks for anything not aligned to 4k!



* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-11 14:24   ` Peng Tao
  2014-09-11 14:42     ` Boaz Harrosh
@ 2014-09-11 15:25     ` Christoph Hellwig
  2014-09-11 15:38       ` Trond Myklebust
  1 sibling, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-11 15:25 UTC (permalink / raw)
  To: Peng Tao; +Cc: Christoph Hellwig, Boaz Harrosh, Linux NFS Mailing List

On Thu, Sep 11, 2014 at 10:24:04PM +0800, Peng Tao wrote:
> The reason I didn't add it was because PNFS_LAYOUTRET_ON_SETATTR is
> too much for blocks layout. What we really want is to return layouts
> on truncate and chown, instead of _all_ setattr requests.
> 
> Boaz, does object layout require return on setattr for other reasons?
> If not, I'd suggest we change PNFS_LAYOUTRET_ON_SETATTR to return only
> on chown/truncate events.

I was actually going to ask the same question; I can't see why the
object layout driver would want it on any other setattr.

In fact it could probably be narrowed down to chown or truncate to a smaller
size.

I'd also love to know why we don't want to do this for the filelayout driver.



* Re: [PATCH 3/9] pnfs: add return_range method
  2014-09-11 15:20     ` Christoph Hellwig
@ 2014-09-11 15:30       ` Peng Tao
  2014-09-11 15:36         ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Peng Tao @ 2014-09-11 15:30 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Linux NFS Mailing List

On Thu, Sep 11, 2014 at 11:20 PM, Christoph Hellwig <hch@lst.de> wrote:
> On Thu, Sep 11, 2014 at 09:54:42PM +0800, Peng Tao wrote:
>> It looks better to put return_range inside
>> pnfs_mark_matching_lsegs_invalid() to have it called every time a
>> range of layout segments all get freed. So that ld is sure to free
>> things up.
>
> Actually we specificly want to avoid that for cases like the internal
> new stateid handling in pnfs_layout_process, where we might allocate
> a new layout that overlaps an existing one, and where we need to keep
> the extents around.
>
It looks dangerous to have extents lurking around without matching
layout segments. One example is NFS4ERR_EXPIRED and
NFS4ERR_BAD_STATEID in nfs4_layoutget_done(), where the client needs
to drop all layout segments but may keep the layout header; in that
case the block layout would still hold all of its extents.


* Re: [PATCH 3/9] pnfs: add return_range method
  2014-09-11 15:30       ` Peng Tao
@ 2014-09-11 15:36         ` Christoph Hellwig
  2014-09-12  2:22           ` Peng Tao
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-11 15:36 UTC (permalink / raw)
  To: Peng Tao; +Cc: Linux NFS Mailing List

On Thu, Sep 11, 2014 at 11:30:50PM +0800, Peng Tao wrote:
> It looks dangerous to have extents lurking around without matching
> layout segments. One example is NFS4ERR_EXPIRED and
> NFS4ERR_BAD_STATEID in nfs4_layoutget_done(), client needs to drop all
> layout segments but may keep the layout header, in which case blocks
> layout would still hold all extents at hand.

We never do I/O without a valid layout, but we keep the extents around
because we need a single place to track state such as whether an extent
has been written to or has a commit pending, and we also need to keep
that state if a layout goes away temporarily.

I hate the way it has been done in the old client. I tried moving the
extent tracking to a per-layout basis and spent way too much time on it,
but still couldn't get it to work, so I finally gave up.  I think we're
between a rock (the idiocy of RFC 5663, which tracks data in extents and
not in layouts) and a hard place (the forgetful Linux client and its
loose layout coherency) here.


* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-11 15:25     ` Christoph Hellwig
@ 2014-09-11 15:38       ` Trond Myklebust
  2014-09-11 15:48         ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Trond Myklebust @ 2014-09-11 15:38 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Peng Tao, Boaz Harrosh, Linux NFS Mailing List

On Thu, Sep 11, 2014 at 11:25 AM, Christoph Hellwig <hch@lst.de> wrote:
> On Thu, Sep 11, 2014 at 10:24:04PM +0800, Peng Tao wrote:
>> The reason I didn't add it was because PNFS_LAYOUTRET_ON_SETATTR is
>> too much for blocks layout. What we really want is to return layouts
>> on truncate and chown, instead of _all_ setattr requests.
>>
>> Boaz, does object layout require return on setattr for other reasons?
>> If not, I'd suggest we change PNFS_LAYOUTRET_ON_SETATTR to return only
>> on chown/truncate events.
>
> I was actually going to ask the same question, I can't see a point
> why the object layout driver would want it on any other setattr.
>
> In fact it could probably be narrowed down to chown or truncate to a smaller
> size.
>
> I'd also love to know why we don't want to do this for the filelayout driver.
>

Why would it be needed? The layout isn't expected to change. If the
chown affects permissions then it is up to the DS to enforce that
(although POSIX does not require it to do that).

-- 
Trond Myklebust

Linux NFS client maintainer, PrimaryData

trond.myklebust@primarydata.com


* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-11 15:38       ` Trond Myklebust
@ 2014-09-11 15:48         ` Christoph Hellwig
  2014-09-11 16:14           ` Trond Myklebust
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-11 15:48 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Christoph Hellwig, Peng Tao, Boaz Harrosh, Linux NFS Mailing List

On Thu, Sep 11, 2014 at 11:38:24AM -0400, Trond Myklebust wrote:
> Why would it be needed? The layout isn't expected to change. If the
> chown affects permissions then it is up to the DS to enforce that
> (although POSIX does not require it to do that).

I was wondering about the truncate case.  Even if the DS needs to be
able to enforce the new size, it seems pointless to keep a layout beyond
that size around.

I don't really see a need to drop on a chown for the blocklayout or
objlayout drivers either, given that these semantics are enforced at a higher
level.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-11 15:48         ` Christoph Hellwig
@ 2014-09-11 16:14           ` Trond Myklebust
  2014-09-14 10:55             ` Boaz Harrosh
  0 siblings, 1 reply; 32+ messages in thread
From: Trond Myklebust @ 2014-09-11 16:14 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Peng Tao, Boaz Harrosh, Linux NFS Mailing List

On Thu, Sep 11, 2014 at 11:48 AM, Christoph Hellwig <hch@lst.de> wrote:
> On Thu, Sep 11, 2014 at 11:38:24AM -0400, Trond Myklebust wrote:
>> Why would it be needed? The layout isn't expected to change. If the
>> chown affects permissions then it is up to the DS to enforce that
>> (although POSIX does not require it to do that).
>
> I was wondering about the truncate case.  Even if the DS needs to be
> able to enforce the new size, it seems pointless to keep a layout
> extending beyond that size around.

In the files layout case, it is actually quite common for the server
to hand out an "infinite" sized layout in response to a LAYOUTGET. It
means that the client doesn't need to ask for a new layout in order to
append to the file.

> I don't really see a need to drop on a chown for the blocklayout or
> objlayout drivers either, given that these semantics are enforced at a higher
> level.

-- 
Trond Myklebust

Linux NFS client maintainer, PrimaryData

trond.myklebust@primarydata.com

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/9] pnfs: add return_range method
  2014-09-11 15:36         ` Christoph Hellwig
@ 2014-09-12  2:22           ` Peng Tao
  2014-09-13 19:38             ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Peng Tao @ 2014-09-12  2:22 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Linux NFS Mailing List

On Thu, Sep 11, 2014 at 11:36 PM, Christoph Hellwig <hch@lst.de> wrote:
> On Thu, Sep 11, 2014 at 11:30:50PM +0800, Peng Tao wrote:
>> It looks dangerous to have extents lurking around without matching
>> layout segments. One example is NFS4ERR_EXPIRED and
>> NFS4ERR_BAD_STATEID in nfs4_layoutget_done(), client needs to drop all
>> layout segments but may keep the layout header, in which case blocks
>> layout would still hold all extents at hand.
>
> We never do I/O without a valid layout, but we keep the extents around
> because we need to keep state (such as whether an extent has been
> written to or has a commit pending) in a single place, and also need
> to keep them if a layout goes away temporarily.
We don't know whether a layout goes away temporarily or permanently. If
the server happens to move the file's data (enterprise NAS does that for
a lot of reasons) in between, the next time the client does a layoutget
it will get a different extent mapping. I'm not sure the new extent
tracking code is able to cope with that. The old code will certainly
fail though.

If your concern is the current layoutreturn implementation, which
ignores pending layoutcommits (and in-flight I/Os), that needs to be
fixed. I have a patchset to fix it and it is still under internal QA.
I'll post it once it passes testing.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/9] pnfs: add return_range method
  2014-09-12  2:22           ` Peng Tao
@ 2014-09-13 19:38             ` Christoph Hellwig
  0 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-13 19:38 UTC (permalink / raw)
  To: Peng Tao; +Cc: Christoph Hellwig, Linux NFS Mailing List

On Fri, Sep 12, 2014 at 10:22:11AM +0800, Peng Tao wrote:
> > We never do I/O without a valid layout, but we keep the extents around
> > because we need to keep state (such as whether an extent has been
> > written to or has a commit pending) in a single place, and also need
> > to keep them if a layout goes away temporarily.
> We don't know whether a layout goes away temporarily or permanently. If
> the server happens to move the file's data (enterprise NAS does that for
> a lot of reasons) in between, the next time the client does a layoutget
> it will get a different extent mapping. I'm not sure the new extent
> tracking code is able to cope with that. The old code will certainly
> fail though.

A server is expected to do a layout recall with clora_changed set to
true if it moves data around and thus the layout goes away permanently.
The Linux client currently ignores that field, but that's something
which needs to be fixed in the core pnfs code, not in the block layout
driver.

But this isn't one of the cases where we keep the extents around but
not the layout - those are the ones where we have stateid mismatches
or similar (most of them now fixed by me).

We actually have a much bigger elephant in the room, which this series
doesn't address yet:  if a layoutcommit fails we currently don't have
a way to resend the data to the MDS.  This will be more core pnfs work,
as we basically need to keep the data in a state similar to NFS
unstable writes instead of claiming that we finished an NFS_FILE_SYNC
write, which allows the VM to drop the data from the pagecache before
the commit has happened.

I'm slowly wading through all these issues - I initially only planned to
write a server, but so far fixing up the client has taken way more of my
time than the relatively trivial server.

> If your concern is the current layoutreturn implementation, which
> ignores pending layoutcommits (and in-flight I/Os), that needs to be
> fixed. I have a patchset to fix it and it is still under internal QA.
> I'll post it once it passes testing.

That's a different issue, but I'm happy to review that as well.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-11 16:14           ` Trond Myklebust
@ 2014-09-14 10:55             ` Boaz Harrosh
  2014-09-14 13:24               ` Trond Myklebust
  0 siblings, 1 reply; 32+ messages in thread
From: Boaz Harrosh @ 2014-09-14 10:55 UTC (permalink / raw)
  To: Trond Myklebust, Christoph Hellwig
  Cc: Peng Tao, Boaz Harrosh, Linux NFS Mailing List

On 09/11/2014 07:14 PM, Trond Myklebust wrote:
<>
> In the files layout case, it is actually quite common for the server
> to hand out an "infinite" sized layout in response to a LAYOUTGET. It
> means that the client doesn't need to ask for a new layout in order to
> append to the file.
> 

You mean to say: "The Linux files-layout driver only supports infinitely
 sized layouts, so it is pointless to return a layout which will be
 handed out again, exactly the same."

Because I agree that an "infinite sized layout" need not be returned,
but a non-infinite one should; only we do not have any.

Cheers
Boaz


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-14 10:55             ` Boaz Harrosh
@ 2014-09-14 13:24               ` Trond Myklebust
  2014-09-14 13:51                 ` Boaz Harrosh
  0 siblings, 1 reply; 32+ messages in thread
From: Trond Myklebust @ 2014-09-14 13:24 UTC (permalink / raw)
  To: Boaz Harrosh
  Cc: Christoph Hellwig, Peng Tao, Boaz Harrosh, Linux NFS Mailing List

On Sun, Sep 14, 2014 at 6:55 AM, Boaz Harrosh <openosd@gmail.com> wrote:
> On 09/11/2014 07:14 PM, Trond Myklebust wrote:
> <>
>> In the files layout case, it is actually quite common for the server
>> to hand out an "infinite" sized layout in response to a LAYOUTGET. It
>> means that the client doesn't need to ask for a new layout in order to
>> append to the file.
>>
>
> You mean to say: "The Linux files-layout driver only supports infinitely
>  sized layouts, so it is pointless to return a layout which will be
>  handed out again, exactly the same."
>
> Because I agree that an "infinite sized layout" need not be returned,
> but a non-infinite one should; only we do not have any.
>

No Boaz. I mean that it is utterly pointless and stupid to return a
layout that doesn't preallocate any resources when it isn't necessary
to do so.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-14 13:24               ` Trond Myklebust
@ 2014-09-14 13:51                 ` Boaz Harrosh
  2014-09-14 14:16                   ` Trond Myklebust
  0 siblings, 1 reply; 32+ messages in thread
From: Boaz Harrosh @ 2014-09-14 13:51 UTC (permalink / raw)
  To: Trond Myklebust, Boaz Harrosh
  Cc: Christoph Hellwig, Peng Tao, Linux NFS Mailing List

On 09/14/2014 04:24 PM, Trond Myklebust wrote:
> On Sun, Sep 14, 2014 at 6:55 AM, Boaz Harrosh <openosd@gmail.com> wrote:
<>
> 
> No Boaz. I mean that it is utterly pointless and stupid to return a
> layout that doesn't preallocate any resources when it isn't necessary
> to do so.
> 

"preallocate any resources" where? at the client it might not but the server
might have allocated resources per lo-segment.
For example a CEPH server allocates a device_id per file-lo-segment. If you
lo_return a truncated segment it might be able to free that device_id.

Cheers
Boaz


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-14 13:51                 ` Boaz Harrosh
@ 2014-09-14 14:16                   ` Trond Myklebust
  2014-09-14 14:28                     ` Boaz Harrosh
  0 siblings, 1 reply; 32+ messages in thread
From: Trond Myklebust @ 2014-09-14 14:16 UTC (permalink / raw)
  To: Boaz Harrosh
  Cc: Boaz Harrosh, Christoph Hellwig, Peng Tao, Linux NFS Mailing List

On Sun, Sep 14, 2014 at 9:51 AM, Boaz Harrosh <boaz@plexistor.com> wrote:
> On 09/14/2014 04:24 PM, Trond Myklebust wrote:
>> On Sun, Sep 14, 2014 at 6:55 AM, Boaz Harrosh <openosd@gmail.com> wrote:
> <>
>>
>> No Boaz. I mean that it is utterly pointless and stupid to return a
>> layout that doesn't preallocate any resources when it isn't necessary
>> to do so.
>>
>
> "preallocate any resources" where? at the client it might not but the server
> might have allocated resources per lo-segment.
> For example a CEPH server allocates a device_id per file-lo-segment. If you
> lo_return a truncated segment it might be able to free that device_id.
>

There is no requirement anywhere in RFC 5661 that a truncate must be
preceded by a layout return. Not in section 12 (Parallel NFS), nor in
section 13 (NFSv4.1 as a Storage Protocol in pNFS: the File Layout
Type), nor in the errata. Existing pNFS files servers (e.g. NetApp)
don't need it either.

If a future CEPH server wants to add its own requirements, then it is
free to issue a layoutrecall. However my understanding from
conversations with Matt (the Ganesha release notes appear to confirm)
was that the CRUSH algorithm is pretty much impossible to implement as
a files layout type, so I'm confused as to why it would be a problem
in the first place.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 8/9] pnfs/blocklayout: return layouts on setattr
  2014-09-14 14:16                   ` Trond Myklebust
@ 2014-09-14 14:28                     ` Boaz Harrosh
  0 siblings, 0 replies; 32+ messages in thread
From: Boaz Harrosh @ 2014-09-14 14:28 UTC (permalink / raw)
  To: Trond Myklebust
  Cc: Boaz Harrosh, Christoph Hellwig, Peng Tao, Linux NFS Mailing List

On 09/14/2014 05:16 PM, Trond Myklebust wrote:
<>
> 
> There is no requirement anywhere in RFC 5661 that a truncate must be
> preceded by a layout return. Not in section 12 (Parallel NFS), nor in
> section 13 (NFSv4.1 as a Storage Protocol in pNFS: the File Layout
> Type), nor in the errata. Existing pNFS files servers (e.g. NetApp)
> don't need it either.
> 

Yes! Which is why it sucks ;-)

> If a future CEPH server wants to add its own requirements, then it is
> free to issue a layoutrecall. However my understanding from
> conversations with Matt (the Ganesha release notes appear to confirm)
> was that the CRUSH algorithm is pretty much impossible to implement as
> a files layout type, so I'm confused as to why it would be a problem
> in the first place.
> 

"impossible to implement" without lo-segments yes. *With* lo-segments it
is my opinion that it is possible. With the contraption of device_id per
segment.

But this argument is mute we both agree that in current code it is not
needed. 

[ If at some future time someone wants to implement lo-segments for the
 Linux files layout, I think it could be a nice optimization, just as it
 is with objects and blocks.

 Imagine a very simple files MDS that wants to avoid creating
 filehandles (files) on DSs that will never be reached (create on
 demand). In such a case, deleting the DS files on truncate might make
 sense, no?
]

Cheers
Boaz


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 3/9] pnfs: add return_range method
  2014-09-09 16:40 pnfs block layout driver fixes V3 Christoph Hellwig
@ 2014-09-09 16:40 ` Christoph Hellwig
  0 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2014-09-09 16:40 UTC (permalink / raw)
  To: linux-nfs

If a layout driver keeps per-inode state outside of the layout segments it
needs to be notified of any layout returns or recalls on an inode, and not
just about the freeing of layout segments.  Add a method to accomplish this,
which will allow the block layout driver to handle the case of truncated
and re-expanded files properly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/nfs/callback_proc.c | 12 +++++++++---
 fs/nfs/pnfs.c          | 10 ++++++++++
 fs/nfs/pnfs.h          |  3 +++
 3 files changed, 22 insertions(+), 3 deletions(-)

diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
index 35f2712..226e206 100644
--- a/fs/nfs/callback_proc.c
+++ b/fs/nfs/callback_proc.c
@@ -181,10 +181,16 @@ static u32 initiate_file_draining(struct nfs_client *clp,
 	spin_lock(&ino->i_lock);
 	if (test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags) ||
 	    pnfs_mark_matching_lsegs_invalid(lo, &free_me_list,
-					&args->cbl_range))
+					&args->cbl_range)) {
 		rv = NFS4ERR_DELAY;
-	else
-		rv = NFS4ERR_NOMATCHING_LAYOUT;
+		goto unlock;
+	}
+
+	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
+		NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo,
+			&args->cbl_range);
+	}
+unlock:
 	spin_unlock(&ino->i_lock);
 
 	pnfs_free_lseg_list(&free_me_list);
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 04050eb..8bd2dc2 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -858,6 +858,16 @@ _pnfs_return_layout(struct inode *ino)
 	empty = list_empty(&lo->plh_segs);
 	pnfs_clear_layoutcommit(ino, &tmp_list);
 	pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
+
+	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
+		struct pnfs_layout_range range = {
+			.iomode		= IOMODE_ANY,
+			.offset		= 0,
+			.length		= NFS4_MAX_UINT64,
+		};
+		NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range);
+	}
+
 	/* Don't send a LAYOUTRETURN if list was initially empty */
 	if (empty) {
 		spin_unlock(&ino->i_lock);
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index a4c530e..2c24942 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -94,6 +94,9 @@ struct pnfs_layoutdriver_type {
 	struct pnfs_layout_segment * (*alloc_lseg) (struct pnfs_layout_hdr *layoutid, struct nfs4_layoutget_res *lgr, gfp_t gfp_flags);
 	void (*free_lseg) (struct pnfs_layout_segment *lseg);
 
+	void (*return_range) (struct pnfs_layout_hdr *lo,
+			      struct pnfs_layout_range *range);
+
 	/* test for nfs page cache coalescing */
 	const struct nfs_pageio_ops *pg_read_ops;
 	const struct nfs_pageio_ops *pg_write_ops;
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2014-09-14 14:28 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-10 15:23 pnfs block layout driver fixes V4 Christoph Hellwig
2014-09-10 15:23 ` [PATCH 1/9] pnfs: force a layout commit when encountering busy segments during recall Christoph Hellwig
2014-09-10 15:23 ` [PATCH 2/9] pnfs: add flag to force read-modify-write in ->write_begin Christoph Hellwig
2014-09-10 15:23 ` [PATCH 3/9] pnfs: add return_range method Christoph Hellwig
2014-09-11 13:54   ` Peng Tao
2014-09-11 15:20     ` Christoph Hellwig
2014-09-11 15:30       ` Peng Tao
2014-09-11 15:36         ` Christoph Hellwig
2014-09-12  2:22           ` Peng Tao
2014-09-13 19:38             ` Christoph Hellwig
2014-09-10 15:23 ` [PATCH 4/9] pnfs/blocklayout: remove read-modify-write handling in bl_write_pagelist Christoph Hellwig
2014-09-10 15:23 ` [PATCH 5/9] pnfs/blocklayout: don't set pages uptodate Christoph Hellwig
2014-09-10 15:23 ` [PATCH 6/9] pnfs/blocklayout: rewrite extent tracking Christoph Hellwig
2014-09-11 14:11   ` Peng Tao
2014-09-11 15:21     ` Christoph Hellwig
2014-09-10 15:23 ` [PATCH 7/9] pnfs/blocklayout: implement the return_range method Christoph Hellwig
2014-09-11 14:16   ` Peng Tao
2014-09-11 15:24     ` Christoph Hellwig
2014-09-10 15:23 ` [PATCH 8/9] pnfs/blocklayout: return layouts on setattr Christoph Hellwig
2014-09-11 14:24   ` Peng Tao
2014-09-11 14:42     ` Boaz Harrosh
2014-09-11 15:25     ` Christoph Hellwig
2014-09-11 15:38       ` Trond Myklebust
2014-09-11 15:48         ` Christoph Hellwig
2014-09-11 16:14           ` Trond Myklebust
2014-09-14 10:55             ` Boaz Harrosh
2014-09-14 13:24               ` Trond Myklebust
2014-09-14 13:51                 ` Boaz Harrosh
2014-09-14 14:16                   ` Trond Myklebust
2014-09-14 14:28                     ` Boaz Harrosh
2014-09-10 15:23 ` [PATCH 9/9] pnfs/blocklayout: allocate separate pages for the layoutcommit payload Christoph Hellwig
  -- strict thread matches above, loose matches on Subject: below --
2014-09-09 16:40 pnfs block layout driver fixes V3 Christoph Hellwig
2014-09-09 16:40 ` [PATCH 3/9] pnfs: add return_range method Christoph Hellwig
