* further writeback updates
@ 2016-02-24  8:20 Christoph Hellwig
  2016-02-24  8:20 ` [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend Christoph Hellwig
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Christoph Hellwig @ 2016-02-24  8:20 UTC (permalink / raw)
  To: xfs

This series contains two patches from Dave (with a rebase and minor updates
from me), which he posted earlier as an RFC, to reduce the buffer_head
dependency in the writeback path, and one patch from me that uses bios
embedded into the ioend and bio chaining to further simplify and optimize
the writeback path.

They pass xfstests on 4k and 1k block size file systems.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend
  2016-02-24  8:20 further writeback updates Christoph Hellwig
@ 2016-02-24  8:20 ` Christoph Hellwig
  2016-03-03 15:17   ` Brian Foster
  2016-02-24  8:20 ` [PATCH 2/3] xfs: don't release bios on completion immediately Christoph Hellwig
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2016-02-24  8:20 UTC (permalink / raw)
  To: xfs; +Cc: Dave Chinner

From: Dave Chinner <dchinner@redhat.com>

Currently adding a buffer to the ioend and then building a bio from
the buffer list are two separate operations. We don't build the bios
and submit them until the ioend is submitted, and this places a
fixed dependency on bufferhead chaining in the ioend.

The first step to removing the bufferhead chaining in the ioend is
on the IO submission side. We can build the bio directly as we add
the buffers to the ioend chain, thereby removing the need for a
latter "buffer-to-bio" submission loop. This allows us to submit
bios on large ioends as soon as we cannot add more data to the bio.

These bios then get captured by the active plug, and hence will be
dispatched as soon as either the plug overflows or we schedule away
from the writeback context. This will reduce submission latency for
large IOs, but will also allow more timely request queue based
writeback blocking when the device becomes congested.
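
In short, the per-buffer path in xfs_add_to_ioend() now looks roughly like
this (a condensed sketch of the change below, not the literal hunk):

retry:
	if (!ioend->io_bio)
		ioend->io_bio = xfs_alloc_ioend_bio(bh);

	if (xfs_bio_add_buffer(ioend->io_bio, bh) != bh->b_size) {
		/* bio is full: submit it now and start a new one */
		xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
		ioend->io_bio = NULL;
		goto retry;
	}
	ioend->io_size += bh->b_size;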

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_aops.c | 117 ++++++++++++++++++++++++++----------------------------
 fs/xfs/xfs_aops.h |   1 +
 2 files changed, 57 insertions(+), 61 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 7a467b3..90e6e3a 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -274,6 +274,7 @@ xfs_alloc_ioend(
 	xfs_ioend_t		*ioend;
 
 	ioend = mempool_alloc(xfs_ioend_pool, GFP_NOFS);
+	memset(ioend, 0, sizeof(*ioend));
 
 	/*
 	 * Set the count to 1 initially, which will prevent an I/O
@@ -281,16 +282,9 @@ xfs_alloc_ioend(
 	 * all the I/O from calling the completion routine too early.
 	 */
 	atomic_set(&ioend->io_remaining, 1);
-	ioend->io_error = 0;
 	INIT_LIST_HEAD(&ioend->io_list);
 	ioend->io_type = type;
 	ioend->io_inode = inode;
-	ioend->io_buffer_head = NULL;
-	ioend->io_buffer_tail = NULL;
-	ioend->io_offset = 0;
-	ioend->io_size = 0;
-	ioend->io_append_trans = NULL;
-
 	INIT_WORK(&ioend->io_work, xfs_end_io);
 	return ioend;
 }
@@ -452,13 +446,18 @@ static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
 }
 
 /*
- * Submit all of the bios for an ioend. We are only passed a single ioend at a
- * time; the caller is responsible for chaining prior to submission.
+ * Submit the bio for an ioend. We are passed an ioend with a bio attached to
+ * it, and we submit that bio. The ioend may be used for multiple bio
+ * submissions, so we only want to allocate an append transaction for the ioend
+ * once. In the case of multiple bio submission, each bio will take an IO
+ * reference to the ioend to ensure that the ioend completion is only done once
+ * all bios have been submitted and the ioend is really done.
  *
  * If @fail is non-zero, it means that we have a situation where some part of
 * the submission process has failed after we have marked pages for writeback
- * and unlocked them. In this situation, we need to fail the ioend chain rather
- * than submit it to IO. This typically only happens on a filesystem shutdown.
+ * and unlocked them. In this situation, we need to fail the bio and ioend
+ * rather than submit it to IO. This typically only happens on a filesystem
+ * shutdown.
  */
 STATIC int
 xfs_submit_ioend(
@@ -466,48 +465,34 @@ xfs_submit_ioend(
 	xfs_ioend_t		*ioend,
 	int			status)
 {
-	struct buffer_head	*bh;
-	struct bio		*bio;
-	sector_t		lastblock = 0;
+	if (!ioend->io_bio || status)
+		goto error_finish;
 
 	/* Reserve log space if we might write beyond the on-disk inode size. */
-	if (!status &&
-	     ioend->io_type != XFS_IO_UNWRITTEN && xfs_ioend_is_append(ioend))
+	if (ioend->io_type != XFS_IO_UNWRITTEN && xfs_ioend_is_append(ioend) &&
+	    !ioend->io_append_trans) {
 		status = xfs_setfilesize_trans_alloc(ioend);
-	/*
-	 * If we are failing the IO now, just mark the ioend with an
-	 * error and finish it. This will run IO completion immediately
-	 * as there is only one reference to the ioend at this point in
-	 * time.
-	 */
-	if (status) {
-		ioend->io_error = status;
-		xfs_finish_ioend(ioend);
-		return status;
+		if (status)
+			goto error_finish;
 	}
 
-	bio = NULL;
-	for (bh = ioend->io_buffer_head; bh; bh = bh->b_private) {
-
-		if (!bio) {
-retry:
-			bio = xfs_alloc_ioend_bio(bh);
-		} else if (bh->b_blocknr != lastblock + 1) {
-			xfs_submit_ioend_bio(wbc, ioend, bio);
-			goto retry;
-		}
-
-		if (xfs_bio_add_buffer(bio, bh) != bh->b_size) {
-			xfs_submit_ioend_bio(wbc, ioend, bio);
-			goto retry;
-		}
-
-		lastblock = bh->b_blocknr;
-	}
-	if (bio)
-		xfs_submit_ioend_bio(wbc, ioend, bio);
+	xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
+	ioend->io_bio = NULL;
 	xfs_finish_ioend(ioend);
 	return 0;
+
+	/*
+	 * If we are failing the IO now, just mark the ioend with an error and
+	 * finish it, releasing the active bio if there is one.  This will run
+	 * IO completion immediately as there is only one reference to the ioend
+	 * at this point in time.
+	 */
+error_finish:
+	if (ioend->io_bio)
+		bio_put(ioend->io_bio);
+	ioend->io_error = status;
+	xfs_finish_ioend(ioend);
+	return status;
 }
 
 /*
@@ -523,27 +508,37 @@ xfs_add_to_ioend(
 	struct buffer_head	*bh,
 	xfs_off_t		offset,
 	struct xfs_writepage_ctx *wpc,
+	struct writeback_control *wbc,
 	struct list_head	*iolist)
 {
-	if (!wpc->ioend || wpc->io_type != wpc->ioend->io_type ||
-	    bh->b_blocknr != wpc->last_block + 1) {
-		struct xfs_ioend	*new;
+	struct xfs_ioend	*ioend = wpc->ioend;
 
-		if (wpc->ioend)
-			list_add(&wpc->ioend->io_list, iolist);
+	if (!ioend || wpc->io_type != ioend->io_type ||
+	    bh->b_blocknr != wpc->last_block + 1) {
+		if (ioend)
+			list_add(&ioend->io_list, iolist);
 
-		new = xfs_alloc_ioend(inode, wpc->io_type);
-		new->io_offset = offset;
-		new->io_buffer_head = bh;
-		new->io_buffer_tail = bh;
-		wpc->ioend = new;
+		ioend = wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type);
+		ioend->io_offset = offset;
+		ioend->io_buffer_head = bh;
+		ioend->io_buffer_tail = bh;
 	} else {
-		wpc->ioend->io_buffer_tail->b_private = bh;
-		wpc->ioend->io_buffer_tail = bh;
+		ioend->io_buffer_tail->b_private = bh;
+		ioend->io_buffer_tail = bh;
 	}
-
 	bh->b_private = NULL;
-	wpc->ioend->io_size += bh->b_size;
+
+retry:
+	if (!ioend->io_bio)
+		ioend->io_bio = xfs_alloc_ioend_bio(bh);
+
+	if (xfs_bio_add_buffer(ioend->io_bio, bh) != bh->b_size) {
+		xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
+		ioend->io_bio = NULL;
+		goto retry;
+	}
+
+	ioend->io_size += bh->b_size;
 	wpc->last_block = bh->b_blocknr;
 	xfs_start_buffer_writeback(bh);
 }
@@ -802,7 +797,7 @@ xfs_writepage_map(
 			lock_buffer(bh);
 			if (wpc->io_type != XFS_IO_OVERWRITE)
 				xfs_map_at_offset(inode, bh, &wpc->imap, offset);
-			xfs_add_to_ioend(inode, bh, offset, wpc, &submit_list);
+			xfs_add_to_ioend(inode, bh, offset, wpc, wbc, &submit_list);
 			count++;
 		}
 
diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
index 4e01bd5..c89c3bd 100644
--- a/fs/xfs/xfs_aops.h
+++ b/fs/xfs/xfs_aops.h
@@ -52,6 +52,7 @@ typedef struct xfs_ioend {
 	xfs_off_t		io_offset;	/* offset in the file */
 	struct work_struct	io_work;	/* xfsdatad work queue */
 	struct xfs_trans	*io_append_trans;/* xact. for size update */
+	struct bio		*io_bio;	/* bio being built */
 } xfs_ioend_t;
 
 extern const struct address_space_operations xfs_address_space_operations;
-- 
2.1.4

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* [PATCH 2/3] xfs: don't release bios on completion immediately
  2016-02-24  8:20 further writeback updates Christoph Hellwig
  2016-02-24  8:20 ` [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend Christoph Hellwig
@ 2016-02-24  8:20 ` Christoph Hellwig
  2016-03-03 15:17   ` Brian Foster
  2016-02-24  8:20 ` [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path Christoph Hellwig
  2016-03-01 13:01 ` further writeback updates Christoph Hellwig
  3 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2016-02-24  8:20 UTC (permalink / raw)
  To: xfs; +Cc: Dave Chinner

Completion of an ioend requires us to walk the bufferhead list to
end writeback on all the bufferheads. This, in turn, is needed so
that we can end writeback on all the pages we just did IO on.

To remove our dependency on bufferheads in writeback, we need to
turn this around the other way - we need to walk the pages we've
just completed IO on, and then walk the buffers attached to the
pages and complete their IO. In doing this, we remove the
requirement for the ioend to track bufferheads directly.

To enable IO completion to walk all the pages we've submitted IO on,
we need to keep the bios that we used for IO around until the ioend
has been completed. We can do this simply by chaining the bios to
the ioend at completion time, and then walking their pages directly
just before destroying the ioend.
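
Condensed from the change below, IO completion now walks the pages of each
bio we kept around instead of a bufferhead list:

	/* sketch of the new xfs_destroy_ioend() walk */
	for (bio = ioend->io_bio_done; bio; bio = next) {
		struct bio_vec	*bvec;
		int		i;

		next = bio->bi_private;	/* completed bios were chained here */

		/* walk each page on the bio, ending page IO on them */
		bio_for_each_segment_all(bvec, bio, i)
			xfs_finish_page_writeback(bvec->bv_page,
					bvec->bv_offset,
					bvec->bv_offset + bvec->bv_len - 1,
					ioend->io_error);
		bio_put(bio);
	}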

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_aops.c | 90 +++++++++++++++++++++++++++++++++++++++++--------------
 fs/xfs/xfs_aops.h |  5 ++--
 2 files changed, 71 insertions(+), 24 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 90e6e3a..fc4fed6 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -84,25 +84,71 @@ xfs_find_bdev_for_inode(
 }
 
 /*
- * We're now finished for good with this ioend structure.
- * Update the page state via the associated buffer_heads,
- * release holds on the inode and bio, and finally free
- * up memory.  Do not use the ioend after this.
+ * We're now finished for good with this page.  Update the page state via the
+ * associated buffer_heads, paying attention to the start and end offsets that
+ * we need to process on the page.
+ */
+static void
+xfs_finish_page_writeback(
+	struct page	*page,
+	unsigned int	start,
+	unsigned int	end,
+	int		error)
+{
+	struct buffer_head	*head, *bh;
+	unsigned int		off = 0;
+
+	bh = head = page_buffers(page);
+
+	do {
+		if (start > off)
+			goto next_bh;
+		if (off > end)
+			break;
+		bh->b_end_io(bh, !error);
+next_bh:
+		off += bh->b_size;
+	} while ((bh = bh->b_this_page) != head);
+}
+
+/*
+ * We're now finished for good with this ioend structure.  Update the page
+ * state, release holds on bios, and finally free up memory.  Do not use the
+ * ioend after this.
  */
 STATIC void
 xfs_destroy_ioend(
-	xfs_ioend_t		*ioend)
+	struct xfs_ioend	*ioend)
 {
-	struct buffer_head	*bh, *next;
+	struct bio		*bio, *next;
+
+	for (bio = ioend->io_bio_done; bio; bio = next) {
+		struct bio_vec	*bvec;
+		int		i;
 
-	for (bh = ioend->io_buffer_head; bh; bh = next) {
-		next = bh->b_private;
-		bh->b_end_io(bh, !ioend->io_error);
+		next = bio->bi_private;
+		bio->bi_private = NULL;
+
+		/* walk each page on bio, ending page IO on them */
+		bio_for_each_segment_all(bvec, bio, i) {
+			struct page	*page = bvec->bv_page;
+			unsigned int	off = bvec->bv_offset;
+			unsigned int	end_off = off + bvec->bv_len - 1;
+
+			ASSERT(off < PAGE_SIZE);
+			ASSERT(end_off <= PAGE_SIZE);
+
+			xfs_finish_page_writeback(page, off, end_off,
+						  ioend->io_error);
+
+		}
+		bio_put(bio);
 	}
 
 	mempool_free(ioend, xfs_ioend_pool);
 }
 
+
 /*
  * Fast and loose check if this write could update the on-disk inode size.
  */
@@ -286,6 +332,7 @@ xfs_alloc_ioend(
 	ioend->io_type = type;
 	ioend->io_inode = inode;
 	INIT_WORK(&ioend->io_work, xfs_end_io);
+	spin_lock_init(&ioend->io_lock);
 	return ioend;
 }
 
@@ -365,15 +412,21 @@ STATIC void
 xfs_end_bio(
 	struct bio		*bio)
 {
-	xfs_ioend_t		*ioend = bio->bi_private;
+	struct xfs_ioend	*ioend = bio->bi_private;
+	unsigned long		flags;
 
-	if (!ioend->io_error)
-		ioend->io_error = bio->bi_error;
-
-	/* Toss bio and pass work off to an xfsdatad thread */
 	bio->bi_private = NULL;
 	bio->bi_end_io = NULL;
-	bio_put(bio);
+
+	spin_lock_irqsave(&ioend->io_lock, flags);
+	if (!ioend->io_error)
+	       ioend->io_error = bio->bi_error;
+	if (!ioend->io_bio_done)
+		ioend->io_bio_done = bio;
+	else
+		ioend->io_bio_done_tail->bi_private = bio;
+	ioend->io_bio_done_tail = bio;
+	spin_unlock_irqrestore(&ioend->io_lock, flags);
 
 	xfs_finish_ioend(ioend);
 }
@@ -517,16 +570,9 @@ xfs_add_to_ioend(
 	    bh->b_blocknr != wpc->last_block + 1) {
 		if (ioend)
 			list_add(&ioend->io_list, iolist);
-
 		ioend = wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type);
 		ioend->io_offset = offset;
-		ioend->io_buffer_head = bh;
-		ioend->io_buffer_tail = bh;
-	} else {
-		ioend->io_buffer_tail->b_private = bh;
-		ioend->io_buffer_tail = bh;
 	}
-	bh->b_private = NULL;
 
 retry:
 	if (!ioend->io_bio)
diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
index c89c3bd..1c7b041 100644
--- a/fs/xfs/xfs_aops.h
+++ b/fs/xfs/xfs_aops.h
@@ -46,13 +46,14 @@ typedef struct xfs_ioend {
 	int			io_error;	/* I/O error code */
 	atomic_t		io_remaining;	/* hold count */
 	struct inode		*io_inode;	/* file being written to */
-	struct buffer_head	*io_buffer_head;/* buffer linked list head */
-	struct buffer_head	*io_buffer_tail;/* buffer linked list tail */
 	size_t			io_size;	/* size of the extent */
 	xfs_off_t		io_offset;	/* offset in the file */
 	struct work_struct	io_work;	/* xfsdatad work queue */
 	struct xfs_trans	*io_append_trans;/* xact. for size update */
 	struct bio		*io_bio;	/* bio being built */
+	struct bio		*io_bio_done;	/* bios completed */
+	struct bio		*io_bio_done_tail; /* bios completed */
+	spinlock_t		io_lock;	/* for bio completion list */
 } xfs_ioend_t;
 
 extern const struct address_space_operations xfs_address_space_operations;
-- 
2.1.4

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path
  2016-02-24  8:20 further writeback updates Christoph Hellwig
  2016-02-24  8:20 ` [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend Christoph Hellwig
  2016-02-24  8:20 ` [PATCH 2/3] xfs: don't release bios on completion immediately Christoph Hellwig
@ 2016-02-24  8:20 ` Christoph Hellwig
  2016-03-03 15:17   ` Brian Foster
  2016-03-01 13:01 ` further writeback updates Christoph Hellwig
  3 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2016-02-24  8:20 UTC (permalink / raw)
  To: xfs

This patch implements two closely related changes:  First it embeds a
bio in the ioend structure so that we don't have to allocate one separately.
Second it uses the block layer bio chaining mechanism to chain additional
bios off this first one if needed instead of manually accounting for
multiple bio completions in the ioend structure.  Together this removes a
memory allocation per ioend and greatly simplifies the ioend setup and
I/O completion path.
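
The core of the change, condensed from the diff below, is that the ioend is
carved out of the first bio (allocated from a dedicated bio_set), and any
overflow bios are chained off it and submitted immediately:

	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, xfs_ioend_bioset);
	ioend = container_of(bio, struct xfs_ioend, io_inline_bio);
	ioend->io_bio = bio;			/* bio currently being built */

	/* when that bio fills up: chain a new one and submit the full one */
	new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
	bio_chain(ioend->io_bio, new);		/* new becomes the parent */
	bio_get(ioend->io_bio);			/* held for xfs_destroy_ioend */
	submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE,
			ioend->io_bio);
	ioend->io_bio = new;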

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/xfs/xfs_aops.c  | 217 ++++++++++++++++++++---------------------------------
 fs/xfs/xfs_aops.h  |  15 ++--
 fs/xfs/xfs_super.c |  26 ++-----
 3 files changed, 93 insertions(+), 165 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index fc4fed6..1ea4167 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -118,17 +118,15 @@ next_bh:
  */
 STATIC void
 xfs_destroy_ioend(
-	struct xfs_ioend	*ioend)
+	struct xfs_ioend	*ioend,
+	int			error)
 {
-	struct bio		*bio, *next;
+	struct bio		*bio, *next, *last = ioend->io_bio;
 
-	for (bio = ioend->io_bio_done; bio; bio = next) {
+	for (bio = &ioend->io_inline_bio; bio; bio = next) {
 		struct bio_vec	*bvec;
 		int		i;
 
-		next = bio->bi_private;
-		bio->bi_private = NULL;
-
 		/* walk each page on bio, ending page IO on them */
 		bio_for_each_segment_all(bvec, bio, i) {
 			struct page	*page = bvec->bv_page;
@@ -138,17 +136,21 @@ xfs_destroy_ioend(
 			ASSERT(off < PAGE_SIZE);
 			ASSERT(end_off <= PAGE_SIZE);
 
-			xfs_finish_page_writeback(page, off, end_off,
-						  ioend->io_error);
-
+			xfs_finish_page_writeback(page, off, end_off, error);
 		}
+
+		/*
+		 * For the last bio, bi_private points to the ioend, so we
+		 * need to explicitly end the iteration here.
+		 */
+		if (bio == last)
+			next = NULL;
+		else
+			next = bio->bi_private;
 		bio_put(bio);
 	}
-
-	mempool_free(ioend, xfs_ioend_pool);
 }
 
-
 /*
  * Fast and loose check if this write could update the on-disk inode size.
  */
@@ -220,7 +222,8 @@ xfs_setfilesize(
 
 STATIC int
 xfs_setfilesize_ioend(
-	struct xfs_ioend	*ioend)
+	struct xfs_ioend	*ioend,
+	int			error)
 {
 	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
 	struct xfs_trans	*tp = ioend->io_append_trans;
@@ -234,53 +237,32 @@ xfs_setfilesize_ioend(
 	__sb_writers_acquired(VFS_I(ip)->i_sb, SB_FREEZE_FS);
 
 	/* we abort the update if there was an IO error */
-	if (ioend->io_error) {
+	if (error) {
 		xfs_trans_cancel(tp);
-		return ioend->io_error;
+		return error;
 	}
 
 	return xfs_setfilesize(ip, tp, ioend->io_offset, ioend->io_size);
 }
 
 /*
- * Schedule IO completion handling on the final put of an ioend.
- *
- * If there is no work to do we might as well call it a day and free the
- * ioend right now.
- */
-STATIC void
-xfs_finish_ioend(
-	struct xfs_ioend	*ioend)
-{
-	if (atomic_dec_and_test(&ioend->io_remaining)) {
-		struct xfs_mount	*mp = XFS_I(ioend->io_inode)->i_mount;
-
-		if (ioend->io_type == XFS_IO_UNWRITTEN)
-			queue_work(mp->m_unwritten_workqueue, &ioend->io_work);
-		else if (ioend->io_append_trans)
-			queue_work(mp->m_data_workqueue, &ioend->io_work);
-		else
-			xfs_destroy_ioend(ioend);
-	}
-}
-
-/*
  * IO write completion.
  */
 STATIC void
 xfs_end_io(
 	struct work_struct *work)
 {
-	xfs_ioend_t	*ioend = container_of(work, xfs_ioend_t, io_work);
-	struct xfs_inode *ip = XFS_I(ioend->io_inode);
-	int		error = 0;
+	struct xfs_ioend	*ioend =
+		container_of(work, struct xfs_ioend, io_work);
+	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
+	int			error = ioend->io_bio->bi_error;
 
 	/*
 	 * Set an error if the mount has shut down and proceed with end I/O
 	 * processing so it can perform whatever cleanups are necessary.
 	 */
 	if (XFS_FORCED_SHUTDOWN(ip->i_mount))
-		ioend->io_error = -EIO;
+		error = -EIO;
 
 	/*
 	 * For unwritten extents we need to issue transactions to convert a
@@ -290,50 +272,33 @@ xfs_end_io(
 	 * on error.
 	 */
 	if (ioend->io_type == XFS_IO_UNWRITTEN) {
-		if (ioend->io_error)
+		if (error)
 			goto done;
 		error = xfs_iomap_write_unwritten(ip, ioend->io_offset,
 						  ioend->io_size);
 	} else if (ioend->io_append_trans) {
-		error = xfs_setfilesize_ioend(ioend);
+		error = xfs_setfilesize_ioend(ioend, error);
 	} else {
 		ASSERT(!xfs_ioend_is_append(ioend));
 	}
 
 done:
-	if (error)
-		ioend->io_error = error;
-	xfs_destroy_ioend(ioend);
+	xfs_destroy_ioend(ioend, error);
 }
 
-/*
- * Allocate and initialise an IO completion structure.
- * We need to track unwritten extent write completion here initially.
- * We'll need to extend this for updating the ondisk inode size later
- * (vs. incore size).
- */
-STATIC xfs_ioend_t *
-xfs_alloc_ioend(
-	struct inode		*inode,
-	unsigned int		type)
+STATIC void
+xfs_end_bio(
+	struct bio		*bio)
 {
-	xfs_ioend_t		*ioend;
-
-	ioend = mempool_alloc(xfs_ioend_pool, GFP_NOFS);
-	memset(ioend, 0, sizeof(*ioend));
+	struct xfs_ioend	*ioend = bio->bi_private;
+	struct xfs_mount	*mp = XFS_I(ioend->io_inode)->i_mount;
 
-	/*
-	 * Set the count to 1 initially, which will prevent an I/O
-	 * completion callback from happening before we have started
-	 * all the I/O from calling the completion routine too early.
-	 */
-	atomic_set(&ioend->io_remaining, 1);
-	INIT_LIST_HEAD(&ioend->io_list);
-	ioend->io_type = type;
-	ioend->io_inode = inode;
-	INIT_WORK(&ioend->io_work, xfs_end_io);
-	spin_lock_init(&ioend->io_lock);
-	return ioend;
+	if (ioend->io_type == XFS_IO_UNWRITTEN)
+		queue_work(mp->m_unwritten_workqueue, &ioend->io_work);
+	else if (ioend->io_append_trans)
+		queue_work(mp->m_data_workqueue, &ioend->io_work);
+	else
+		xfs_destroy_ioend(ioend, bio->bi_error);
 }
 
 STATIC int
@@ -405,56 +370,6 @@ xfs_imap_valid(
 		offset < imap->br_startoff + imap->br_blockcount;
 }
 
-/*
- * BIO completion handler for buffered IO.
- */
-STATIC void
-xfs_end_bio(
-	struct bio		*bio)
-{
-	struct xfs_ioend	*ioend = bio->bi_private;
-	unsigned long		flags;
-
-	bio->bi_private = NULL;
-	bio->bi_end_io = NULL;
-
-	spin_lock_irqsave(&ioend->io_lock, flags);
-	if (!ioend->io_error)
-	       ioend->io_error = bio->bi_error;
-	if (!ioend->io_bio_done)
-		ioend->io_bio_done = bio;
-	else
-		ioend->io_bio_done_tail->bi_private = bio;
-	ioend->io_bio_done_tail = bio;
-	spin_unlock_irqrestore(&ioend->io_lock, flags);
-
-	xfs_finish_ioend(ioend);
-}
-
-STATIC void
-xfs_submit_ioend_bio(
-	struct writeback_control *wbc,
-	xfs_ioend_t		*ioend,
-	struct bio		*bio)
-{
-	atomic_inc(&ioend->io_remaining);
-	bio->bi_private = ioend;
-	bio->bi_end_io = xfs_end_bio;
-	submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE, bio);
-}
-
-STATIC struct bio *
-xfs_alloc_ioend_bio(
-	struct buffer_head	*bh)
-{
-	struct bio		*bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
-
-	ASSERT(bio->bi_private == NULL);
-	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
-	bio->bi_bdev = bh->b_bdev;
-	return bio;
-}
-
 STATIC void
 xfs_start_buffer_writeback(
 	struct buffer_head	*bh)
@@ -515,10 +430,10 @@ static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
 STATIC int
 xfs_submit_ioend(
 	struct writeback_control *wbc,
-	xfs_ioend_t		*ioend,
+	struct xfs_ioend	*ioend,
 	int			status)
 {
-	if (!ioend->io_bio || status)
+	if (status)
 		goto error_finish;
 
 	/* Reserve log space if we might write beyond the on-disk inode size. */
@@ -529,9 +444,10 @@ xfs_submit_ioend(
 			goto error_finish;
 	}
 
-	xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
-	ioend->io_bio = NULL;
-	xfs_finish_ioend(ioend);
+	ioend->io_bio->bi_private = ioend;
+	ioend->io_bio->bi_end_io = xfs_end_bio;
+	submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE,
+			ioend->io_bio);
 	return 0;
 
 	/*
@@ -541,10 +457,8 @@ xfs_submit_ioend(
 	 * at this point in time.
 	 */
 error_finish:
-	if (ioend->io_bio)
-		bio_put(ioend->io_bio);
-	ioend->io_error = status;
-	xfs_finish_ioend(ioend);
+	ioend->io_bio->bi_error = status;
+	bio_endio(ioend->io_bio);
 	return status;
 }
 
@@ -565,22 +479,51 @@ xfs_add_to_ioend(
 	struct list_head	*iolist)
 {
 	struct xfs_ioend	*ioend = wpc->ioend;
+	struct bio		*new;
 
 	if (!ioend || wpc->io_type != ioend->io_type ||
 	    bh->b_blocknr != wpc->last_block + 1) {
+		new = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES,
+					xfs_ioend_bioset);
+		new->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
+		new->bi_bdev = bh->b_bdev;
+	
 		if (ioend)
 			list_add(&ioend->io_list, iolist);
-		ioend = wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type);
+
+		ioend = container_of(new, struct xfs_ioend, io_inline_bio);
+		INIT_LIST_HEAD(&ioend->io_list);
+		ioend->io_type = wpc->io_type;
+		ioend->io_inode = inode;
+		ioend->io_size = 0;
 		ioend->io_offset = offset;
+		INIT_WORK(&ioend->io_work, xfs_end_io);
+		ioend->io_append_trans = NULL;
+		ioend->io_bio = new;
+
+		wpc->ioend = ioend;
 	}
 
 retry:
-	if (!ioend->io_bio)
-		ioend->io_bio = xfs_alloc_ioend_bio(bh);
-
 	if (xfs_bio_add_buffer(ioend->io_bio, bh) != bh->b_size) {
-		xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
-		ioend->io_bio = NULL;
+		/*
+		 * No space left in the bio.
+		 *
+		 * Allocate a new one, and chain the old bio to the new one.
+		 * Note that we have to perform the chaining in this
+		 * unintuitive order so that the bi_private linkage is set up
+		 * in the right direction for the traversal in
+		 * xfs_destroy_ioend().
+		 */
+		new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
+		new->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
+		new->bi_bdev = bh->b_bdev;
+
+		bio_chain(ioend->io_bio, new);
+		bio_get(ioend->io_bio);	/* for xfs_destroy_ioend */
+		submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE,
+				ioend->io_bio);
+		ioend->io_bio = new;
 		goto retry;
 	}
 
diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
index 1c7b041..8b5b641 100644
--- a/fs/xfs/xfs_aops.h
+++ b/fs/xfs/xfs_aops.h
@@ -18,7 +18,7 @@
 #ifndef __XFS_AOPS_H__
 #define __XFS_AOPS_H__
 
-extern mempool_t *xfs_ioend_pool;
+extern struct bio_set *xfs_ioend_bioset;
 
 /*
  * Types of I/O for bmap clustering and I/O completion tracking.
@@ -37,24 +37,19 @@ enum {
 	{ XFS_IO_OVERWRITE,		"overwrite" }
 
 /*
- * xfs_ioend struct manages large extent writes for XFS.
- * It can manage several multi-page bio's at once.
+ * Structure for buffered I/O completions.
  */
-typedef struct xfs_ioend {
+struct xfs_ioend {
 	struct list_head	io_list;	/* next ioend in chain */
 	unsigned int		io_type;	/* delalloc / unwritten */
-	int			io_error;	/* I/O error code */
-	atomic_t		io_remaining;	/* hold count */
 	struct inode		*io_inode;	/* file being written to */
 	size_t			io_size;	/* size of the extent */
 	xfs_off_t		io_offset;	/* offset in the file */
 	struct work_struct	io_work;	/* xfsdatad work queue */
 	struct xfs_trans	*io_append_trans;/* xact. for size update */
 	struct bio		*io_bio;	/* bio being built */
-	struct bio		*io_bio_done;	/* bios completed */
-	struct bio		*io_bio_done_tail; /* bios completed */
-	spinlock_t		io_lock;	/* for bio completion list */
-} xfs_ioend_t;
+	struct bio		io_inline_bio;	/* MUST BE LAST! */
+};
 
 extern const struct address_space_operations xfs_address_space_operations;
 
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 59c9b7b..33aa638 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -57,8 +57,7 @@
 #include <linux/parser.h>
 
 static const struct super_operations xfs_super_operations;
-static kmem_zone_t *xfs_ioend_zone;
-mempool_t *xfs_ioend_pool;
+struct bio_set *xfs_ioend_bioset;
 
 static struct kset *xfs_kset;		/* top-level xfs sysfs dir */
 #ifdef DEBUG
@@ -1646,20 +1645,15 @@ MODULE_ALIAS_FS("xfs");
 STATIC int __init
 xfs_init_zones(void)
 {
-
-	xfs_ioend_zone = kmem_zone_init(sizeof(xfs_ioend_t), "xfs_ioend");
-	if (!xfs_ioend_zone)
+	xfs_ioend_bioset = bioset_create(4 * MAX_BUF_PER_PAGE,
+			offsetof(struct xfs_ioend, io_inline_bio));
+	if (!xfs_ioend_bioset)
 		goto out;
 
-	xfs_ioend_pool = mempool_create_slab_pool(4 * MAX_BUF_PER_PAGE,
-						  xfs_ioend_zone);
-	if (!xfs_ioend_pool)
-		goto out_destroy_ioend_zone;
-
 	xfs_log_ticket_zone = kmem_zone_init(sizeof(xlog_ticket_t),
 						"xfs_log_ticket");
 	if (!xfs_log_ticket_zone)
-		goto out_destroy_ioend_pool;
+		goto out_free_ioend_bioset;
 
 	xfs_bmap_free_item_zone = kmem_zone_init(sizeof(xfs_bmap_free_item_t),
 						"xfs_bmap_free_item");
@@ -1755,10 +1749,8 @@ xfs_init_zones(void)
 	kmem_zone_destroy(xfs_bmap_free_item_zone);
  out_destroy_log_ticket_zone:
 	kmem_zone_destroy(xfs_log_ticket_zone);
- out_destroy_ioend_pool:
-	mempool_destroy(xfs_ioend_pool);
- out_destroy_ioend_zone:
-	kmem_zone_destroy(xfs_ioend_zone);
+ out_free_ioend_bioset:
+ 	bioset_free(xfs_ioend_bioset);
  out:
 	return -ENOMEM;
 }
@@ -1784,9 +1776,7 @@ xfs_destroy_zones(void)
 	kmem_zone_destroy(xfs_btree_cur_zone);
 	kmem_zone_destroy(xfs_bmap_free_item_zone);
 	kmem_zone_destroy(xfs_log_ticket_zone);
-	mempool_destroy(xfs_ioend_pool);
-	kmem_zone_destroy(xfs_ioend_zone);
-
+ 	bioset_free(xfs_ioend_bioset);
 }
 
 STATIC int __init
-- 
2.1.4

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: further writeback updates
  2016-02-24  8:20 further writeback updates Christoph Hellwig
                   ` (2 preceding siblings ...)
  2016-02-24  8:20 ` [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path Christoph Hellwig
@ 2016-03-01 13:01 ` Christoph Hellwig
  2016-03-01 21:42   ` Dave Chinner
  3 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2016-03-01 13:01 UTC (permalink / raw)
  To: xfs

Any comments?

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: further writeback updates
  2016-03-01 13:01 ` further writeback updates Christoph Hellwig
@ 2016-03-01 21:42   ` Dave Chinner
  0 siblings, 0 replies; 16+ messages in thread
From: Dave Chinner @ 2016-03-01 21:42 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs

On Tue, Mar 01, 2016 at 05:01:13AM -0800, Christoph Hellwig wrote:
> Any comments?

I pulled them into my testing stack yesterday afternoon.  There's a
good chance I'll end up delaying committing these to the next merge
window, simply because I don't have the bandwidth at this point to
process any more significant changes to the for-next tree.  I think
it's much more important right now to get the rmap/reflink into the
4.6 merge window than these writeback changes.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend
  2016-02-24  8:20 ` [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend Christoph Hellwig
@ 2016-03-03 15:17   ` Brian Foster
  0 siblings, 0 replies; 16+ messages in thread
From: Brian Foster @ 2016-03-03 15:17 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Dave Chinner, xfs

On Wed, Feb 24, 2016 at 09:20:09AM +0100, Christoph Hellwig wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> Currently adding a buffer to the ioend and then building a bio from
> the buffer list are two separate operations. We don't build the bios
> and submit them until the ioend is submitted, and this places a
> fixed dependency on bufferhead chaining in the ioend.
> 
> The first step to removing the bufferhead chaining in the ioend is
> on the IO submission side. We can build the bio directly as we add
> the buffers to the ioend chain, thereby removing the need for a
> later "buffer-to-bio" submission loop. This allows us to submit
> bios on large ioends as soon as we cannot add more data to the bio.
> 
> These bios then get captured by the active plug, and hence will be
> dispatched as soon as either the plug overflows or we schedule away
> from the writeback context. This will reduce submission latency for
> large IOs, but will also allow more timely request queue based
> writeback blocking when the device becomes congested.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---

Reviewed-by: Brian Foster <bfoster@redhat.com>

>  fs/xfs/xfs_aops.c | 117 ++++++++++++++++++++++++++----------------------------
>  fs/xfs/xfs_aops.h |   1 +
>  2 files changed, 57 insertions(+), 61 deletions(-)
> 
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 7a467b3..90e6e3a 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -274,6 +274,7 @@ xfs_alloc_ioend(
>  	xfs_ioend_t		*ioend;
>  
>  	ioend = mempool_alloc(xfs_ioend_pool, GFP_NOFS);
> +	memset(ioend, 0, sizeof(*ioend));
>  
>  	/*
>  	 * Set the count to 1 initially, which will prevent an I/O
> @@ -281,16 +282,9 @@ xfs_alloc_ioend(
>  	 * all the I/O from calling the completion routine too early.
>  	 */
>  	atomic_set(&ioend->io_remaining, 1);
> -	ioend->io_error = 0;
>  	INIT_LIST_HEAD(&ioend->io_list);
>  	ioend->io_type = type;
>  	ioend->io_inode = inode;
> -	ioend->io_buffer_head = NULL;
> -	ioend->io_buffer_tail = NULL;
> -	ioend->io_offset = 0;
> -	ioend->io_size = 0;
> -	ioend->io_append_trans = NULL;
> -
>  	INIT_WORK(&ioend->io_work, xfs_end_io);
>  	return ioend;
>  }
> @@ -452,13 +446,18 @@ static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
>  }
>  
>  /*
> - * Submit all of the bios for an ioend. We are only passed a single ioend at a
> - * time; the caller is responsible for chaining prior to submission.
> + * Submit the bio for an ioend. We are passed an ioend with a bio attached to
> + * it, and we submit that bio. The ioend may be used for multiple bio
> + * submissions, so we only want to allocate an append transaction for the ioend
> + * once. In the case of multiple bio submission, each bio will take an IO
> + * reference to the ioend to ensure that the ioend completion is only done once
> + * all bios have been submitted and the ioend is really done.
>   *
>   * If @fail is non-zero, it means that we have a situation where some part of
>   * the submission process has failed after we have marked pages for writeback
> - * and unlocked them. In this situation, we need to fail the ioend chain rather
> - * than submit it to IO. This typically only happens on a filesystem shutdown.
> + * and unlocked them. In this situation, we need to fail the bio and ioend
> + * rather than submit it to IO. This typically only happens on a filesystem
> + * shutdown.
>   */
>  STATIC int
>  xfs_submit_ioend(
> @@ -466,48 +465,34 @@ xfs_submit_ioend(
>  	xfs_ioend_t		*ioend,
>  	int			status)
>  {
> -	struct buffer_head	*bh;
> -	struct bio		*bio;
> -	sector_t		lastblock = 0;
> +	if (!ioend->io_bio || status)
> +		goto error_finish;
>  
>  	/* Reserve log space if we might write beyond the on-disk inode size. */
> -	if (!status &&
> -	     ioend->io_type != XFS_IO_UNWRITTEN && xfs_ioend_is_append(ioend))
> +	if (ioend->io_type != XFS_IO_UNWRITTEN && xfs_ioend_is_append(ioend) &&
> +	    !ioend->io_append_trans) {
>  		status = xfs_setfilesize_trans_alloc(ioend);
> -	/*
> -	 * If we are failing the IO now, just mark the ioend with an
> -	 * error and finish it. This will run IO completion immediately
> -	 * as there is only one reference to the ioend at this point in
> -	 * time.
> -	 */
> -	if (status) {
> -		ioend->io_error = status;
> -		xfs_finish_ioend(ioend);
> -		return status;
> +		if (status)
> +			goto error_finish;
>  	}
>  
> -	bio = NULL;
> -	for (bh = ioend->io_buffer_head; bh; bh = bh->b_private) {
> -
> -		if (!bio) {
> -retry:
> -			bio = xfs_alloc_ioend_bio(bh);
> -		} else if (bh->b_blocknr != lastblock + 1) {
> -			xfs_submit_ioend_bio(wbc, ioend, bio);
> -			goto retry;
> -		}
> -
> -		if (xfs_bio_add_buffer(bio, bh) != bh->b_size) {
> -			xfs_submit_ioend_bio(wbc, ioend, bio);
> -			goto retry;
> -		}
> -
> -		lastblock = bh->b_blocknr;
> -	}
> -	if (bio)
> -		xfs_submit_ioend_bio(wbc, ioend, bio);
> +	xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
> +	ioend->io_bio = NULL;
>  	xfs_finish_ioend(ioend);
>  	return 0;
> +
> +	/*
> +	 * If we are failing the IO now, just mark the ioend with an error and
> +	 * finish it, releasing the active bio if there is one.  This will run
> +	 * IO completion immediately as there is only one reference to the ioend
> +	 * at this point in time.
> +	 */
> +error_finish:
> +	if (ioend->io_bio)
> +		bio_put(ioend->io_bio);
> +	ioend->io_error = status;
> +	xfs_finish_ioend(ioend);
> +	return status;
>  }
>  
>  /*
> @@ -523,27 +508,37 @@ xfs_add_to_ioend(
>  	struct buffer_head	*bh,
>  	xfs_off_t		offset,
>  	struct xfs_writepage_ctx *wpc,
> +	struct writeback_control *wbc,
>  	struct list_head	*iolist)
>  {
> -	if (!wpc->ioend || wpc->io_type != wpc->ioend->io_type ||
> -	    bh->b_blocknr != wpc->last_block + 1) {
> -		struct xfs_ioend	*new;
> +	struct xfs_ioend	*ioend = wpc->ioend;
>  
> -		if (wpc->ioend)
> -			list_add(&wpc->ioend->io_list, iolist);
> +	if (!ioend || wpc->io_type != ioend->io_type ||
> +	    bh->b_blocknr != wpc->last_block + 1) {
> +		if (ioend)
> +			list_add(&ioend->io_list, iolist);
>  
> -		new = xfs_alloc_ioend(inode, wpc->io_type);
> -		new->io_offset = offset;
> -		new->io_buffer_head = bh;
> -		new->io_buffer_tail = bh;
> -		wpc->ioend = new;
> +		ioend = wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type);
> +		ioend->io_offset = offset;
> +		ioend->io_buffer_head = bh;
> +		ioend->io_buffer_tail = bh;
>  	} else {
> -		wpc->ioend->io_buffer_tail->b_private = bh;
> -		wpc->ioend->io_buffer_tail = bh;
> +		ioend->io_buffer_tail->b_private = bh;
> +		ioend->io_buffer_tail = bh;
>  	}
> -
>  	bh->b_private = NULL;
> -	wpc->ioend->io_size += bh->b_size;
> +
> +retry:
> +	if (!ioend->io_bio)
> +		ioend->io_bio = xfs_alloc_ioend_bio(bh);
> +
> +	if (xfs_bio_add_buffer(ioend->io_bio, bh) != bh->b_size) {
> +		xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
> +		ioend->io_bio = NULL;
> +		goto retry;
> +	}
> +
> +	ioend->io_size += bh->b_size;
>  	wpc->last_block = bh->b_blocknr;
>  	xfs_start_buffer_writeback(bh);
>  }
> @@ -802,7 +797,7 @@ xfs_writepage_map(
>  			lock_buffer(bh);
>  			if (wpc->io_type != XFS_IO_OVERWRITE)
>  				xfs_map_at_offset(inode, bh, &wpc->imap, offset);
> -			xfs_add_to_ioend(inode, bh, offset, wpc, &submit_list);
> +			xfs_add_to_ioend(inode, bh, offset, wpc, wbc, &submit_list);
>  			count++;
>  		}
>  
> diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
> index 4e01bd5..c89c3bd 100644
> --- a/fs/xfs/xfs_aops.h
> +++ b/fs/xfs/xfs_aops.h
> @@ -52,6 +52,7 @@ typedef struct xfs_ioend {
>  	xfs_off_t		io_offset;	/* offset in the file */
>  	struct work_struct	io_work;	/* xfsdatad work queue */
>  	struct xfs_trans	*io_append_trans;/* xact. for size update */
> +	struct bio		*io_bio;	/* bio being built */
>  } xfs_ioend_t;
>  
>  extern const struct address_space_operations xfs_address_space_operations;
> -- 
> 2.1.4
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: [PATCH 2/3] xfs: don't release bios on completion immediately
  2016-02-24  8:20 ` [PATCH 2/3] xfs: don't release bios on completion immediately Christoph Hellwig
@ 2016-03-03 15:17   ` Brian Foster
  2016-03-11 14:47     ` Christoph Hellwig
  0 siblings, 1 reply; 16+ messages in thread
From: Brian Foster @ 2016-03-03 15:17 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Dave Chinner, xfs

On Wed, Feb 24, 2016 at 09:20:10AM +0100, Christoph Hellwig wrote:
> Completion of an ioend requires us to walk the bufferhead list to
> end writeback on all the bufferheads. This, in turn, is needed so
> that we can end writeback on all the pages we just did IO on.
> 
> To remove our dependency on bufferheads in writeback, we need to
> turn this around the other way - we need to walk the pages we've
> just completed IO on, and then walk the buffers attached to the
> pages and complete their IO. In doing this, we remove the
> requirement for the ioend to track bufferheads directly.
> 
> To enable IO completion to walk all the pages we've submitted IO on,
> we need to keep the bios that we used for IO around until the ioend
> has been completed. We can do this simply by chaining the bios to
> the ioend at completion time, and then walking their pages directly
> just before destroying the ioend.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_aops.c | 90 +++++++++++++++++++++++++++++++++++++++++--------------
>  fs/xfs/xfs_aops.h |  5 ++--
>  2 files changed, 71 insertions(+), 24 deletions(-)
> 
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 90e6e3a..fc4fed6 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -84,25 +84,71 @@ xfs_find_bdev_for_inode(
>  }
>  
>  /*
> - * We're now finished for good with this ioend structure.
> - * Update the page state via the associated buffer_heads,
> - * release holds on the inode and bio, and finally free
> - * up memory.  Do not use the ioend after this.
> + * We're now finished for good with this page.  Update the page state via the
> + * associated buffer_heads, paying attention to the start and end offsets that
> + * we need to process on the page.
> + */
> +static void
> +xfs_finish_page_writeback(
> +	struct page	*page,
> +	unsigned int	start,
> +	unsigned int	end,
> +	int		error)
> +{
> +	struct buffer_head	*head, *bh;
> +	unsigned int		off = 0;
> +
> +	bh = head = page_buffers(page);
> +
> +	do {
> +		if (start > off)
> +			goto next_bh;

Probably not an issue for current usage, which appears to be on buffer
size granularity, but shouldn't this check whether start is beyond the
end of the current buffer (e.g., start >= off + bh->b_size)?
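
Something along these lines (completely untested) is roughly what I had in
mind:

	do {
		if (start >= off + bh->b_size)
			goto next_bh;	/* buffer ends before the range */
		if (off > end)
			break;
		bh->b_end_io(bh, !error);
next_bh:
		off += bh->b_size;
	} while ((bh = bh->b_this_page) != head);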

> +		if (off > end)
> +			break;
> +		bh->b_end_io(bh, !error);
> +next_bh:
> +		off += bh->b_size;
> +	} while ((bh = bh->b_this_page) != head);
> +}
> +
> +/*
> + * We're now finished for good with this ioend structure.  Update the page
> + * state, release holds on bios, and finally free up memory.  Do not use the
> + * ioend after this.
>   */
>  STATIC void
>  xfs_destroy_ioend(
> -	xfs_ioend_t		*ioend)
> +	struct xfs_ioend	*ioend)
>  {
> -	struct buffer_head	*bh, *next;
> +	struct bio		*bio, *next;
> +
> +	for (bio = ioend->io_bio_done; bio; bio = next) {
> +		struct bio_vec	*bvec;
> +		int		i;
>  
> -	for (bh = ioend->io_buffer_head; bh; bh = next) {
> -		next = bh->b_private;
> -		bh->b_end_io(bh, !ioend->io_error);
> +		next = bio->bi_private;
> +		bio->bi_private = NULL;
> +
> +		/* walk each page on bio, ending page IO on them */
> +		bio_for_each_segment_all(bvec, bio, i) {
> +			struct page	*page = bvec->bv_page;
> +			unsigned int	off = bvec->bv_offset;
> +			unsigned int	end_off = off + bvec->bv_len - 1;
> +
> +			ASSERT(off < PAGE_SIZE);
> +			ASSERT(end_off <= PAGE_SIZE);
> +
> +			xfs_finish_page_writeback(page, off, end_off,
> +						  ioend->io_error);
> +
> +		}
> +		bio_put(bio);
>  	}
>  
>  	mempool_free(ioend, xfs_ioend_pool);
>  }
>  
> +

Unnecessary whitespace here.

Brian

>  /*
>   * Fast and loose check if this write could update the on-disk inode size.
>   */
> @@ -286,6 +332,7 @@ xfs_alloc_ioend(
>  	ioend->io_type = type;
>  	ioend->io_inode = inode;
>  	INIT_WORK(&ioend->io_work, xfs_end_io);
> +	spin_lock_init(&ioend->io_lock);
>  	return ioend;
>  }
>  
> @@ -365,15 +412,21 @@ STATIC void
>  xfs_end_bio(
>  	struct bio		*bio)
>  {
> -	xfs_ioend_t		*ioend = bio->bi_private;
> +	struct xfs_ioend	*ioend = bio->bi_private;
> +	unsigned long		flags;
>  
> -	if (!ioend->io_error)
> -		ioend->io_error = bio->bi_error;
> -
> -	/* Toss bio and pass work off to an xfsdatad thread */
>  	bio->bi_private = NULL;
>  	bio->bi_end_io = NULL;
> -	bio_put(bio);
> +
> +	spin_lock_irqsave(&ioend->io_lock, flags);
> +	if (!ioend->io_error)
> +	       ioend->io_error = bio->bi_error;
> +	if (!ioend->io_bio_done)
> +		ioend->io_bio_done = bio;
> +	else
> +		ioend->io_bio_done_tail->bi_private = bio;
> +	ioend->io_bio_done_tail = bio;
> +	spin_unlock_irqrestore(&ioend->io_lock, flags);
>  
>  	xfs_finish_ioend(ioend);
>  }
> @@ -517,16 +570,9 @@ xfs_add_to_ioend(
>  	    bh->b_blocknr != wpc->last_block + 1) {
>  		if (ioend)
>  			list_add(&ioend->io_list, iolist);
> -
>  		ioend = wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type);
>  		ioend->io_offset = offset;
> -		ioend->io_buffer_head = bh;
> -		ioend->io_buffer_tail = bh;
> -	} else {
> -		ioend->io_buffer_tail->b_private = bh;
> -		ioend->io_buffer_tail = bh;
>  	}
> -	bh->b_private = NULL;
>  
>  retry:
>  	if (!ioend->io_bio)
> diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
> index c89c3bd..1c7b041 100644
> --- a/fs/xfs/xfs_aops.h
> +++ b/fs/xfs/xfs_aops.h
> @@ -46,13 +46,14 @@ typedef struct xfs_ioend {
>  	int			io_error;	/* I/O error code */
>  	atomic_t		io_remaining;	/* hold count */
>  	struct inode		*io_inode;	/* file being written to */
> -	struct buffer_head	*io_buffer_head;/* buffer linked list head */
> -	struct buffer_head	*io_buffer_tail;/* buffer linked list tail */
>  	size_t			io_size;	/* size of the extent */
>  	xfs_off_t		io_offset;	/* offset in the file */
>  	struct work_struct	io_work;	/* xfsdatad work queue */
>  	struct xfs_trans	*io_append_trans;/* xact. for size update */
>  	struct bio		*io_bio;	/* bio being built */
> +	struct bio		*io_bio_done;	/* bios completed */
> +	struct bio		*io_bio_done_tail; /* bios completed */
> +	spinlock_t		io_lock;	/* for bio completion list */
>  } xfs_ioend_t;
>  
>  extern const struct address_space_operations xfs_address_space_operations;
> -- 
> 2.1.4
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path
  2016-02-24  8:20 ` [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path Christoph Hellwig
@ 2016-03-03 15:17   ` Brian Foster
  2016-03-04 13:38     ` Brian Foster
  2016-03-11 14:55     ` Christoph Hellwig
  0 siblings, 2 replies; 16+ messages in thread
From: Brian Foster @ 2016-03-03 15:17 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs

On Wed, Feb 24, 2016 at 09:20:11AM +0100, Christoph Hellwig wrote:
> This patch implements two closely related changes:  First it embeds a
> bio in the ioend structure so that we don't have to allocate one separately.
> Second it uses the block layer bio chaining mechanism to chain additional
> bios off this first one if needed instead of manually accounting for
> multiple bio completions in the ioend structure.  Together this removes a
> memory allocation per ioend and greatly simplifies the ioend setup and
> I/O completion path.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/xfs/xfs_aops.c  | 217 ++++++++++++++++++++---------------------------------
>  fs/xfs/xfs_aops.h  |  15 ++--
>  fs/xfs/xfs_super.c |  26 ++-----
>  3 files changed, 93 insertions(+), 165 deletions(-)
> 
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index fc4fed6..1ea4167 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -118,17 +118,15 @@ next_bh:
>   */
>  STATIC void
>  xfs_destroy_ioend(
> -	struct xfs_ioend	*ioend)
> +	struct xfs_ioend	*ioend,
> +	int			error)
>  {
> -	struct bio		*bio, *next;
> +	struct bio		*bio, *next, *last = ioend->io_bio;
>  
> -	for (bio = ioend->io_bio_done; bio; bio = next) {
> +	for (bio = &ioend->io_inline_bio; bio; bio = next) {
>  		struct bio_vec	*bvec;
>  		int		i;
>  
> -		next = bio->bi_private;
> -		bio->bi_private = NULL;
> -
>  		/* walk each page on bio, ending page IO on them */
>  		bio_for_each_segment_all(bvec, bio, i) {
>  			struct page	*page = bvec->bv_page;
> @@ -138,17 +136,21 @@ xfs_destroy_ioend(
>  			ASSERT(off < PAGE_SIZE);
>  			ASSERT(end_off <= PAGE_SIZE);
>  
> -			xfs_finish_page_writeback(page, off, end_off,
> -						  ioend->io_error);
> -
> +			xfs_finish_page_writeback(page, off, end_off, error);
>  		}
> +
> +		/*
> +		 * For the last bio, bi_private points to the ioend, so we
> +		 * need to explicitly end the iteration here.
> +		 */

Do you mean the last bio is pointed to by the ioend?

> +		if (bio == last)
> +			next = NULL;
> +		else
> +			next = bio->bi_private;
>  		bio_put(bio);
>  	}
> -
> -	mempool_free(ioend, xfs_ioend_pool);
>  }
>  
> -
>  /*
>   * Fast and loose check if this write could update the on-disk inode size.
>   */
...
> @@ -515,10 +430,10 @@ static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
>  STATIC int
>  xfs_submit_ioend(
>  	struct writeback_control *wbc,
> -	xfs_ioend_t		*ioend,
> +	struct xfs_ioend	*ioend,
>  	int			status)
>  {
> -	if (!ioend->io_bio || status)
> +	if (status)
>  		goto error_finish;
>  
>  	/* Reserve log space if we might write beyond the on-disk inode size. */
> @@ -529,9 +444,10 @@ xfs_submit_ioend(
>  			goto error_finish;
>  	}
>  
> -	xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
> -	ioend->io_bio = NULL;
> -	xfs_finish_ioend(ioend);
> +	ioend->io_bio->bi_private = ioend;
> +	ioend->io_bio->bi_end_io = xfs_end_bio;
> +	submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE,
> +			ioend->io_bio);
>  	return 0;
>  
>  	/*
> @@ -541,10 +457,8 @@ xfs_submit_ioend(
>  	 * at this point in time.
>  	 */
>  error_finish:
> -	if (ioend->io_bio)
> -		bio_put(ioend->io_bio);
> -	ioend->io_error = status;
> -	xfs_finish_ioend(ioend);
> +	ioend->io_bio->bi_error = status;
> +	bio_endio(ioend->io_bio);
>  	return status;

bi_end_io is not set here, so what happens to the buffers added to the
ioend in this case?

>  }
>  
> @@ -565,22 +479,51 @@ xfs_add_to_ioend(
>  	struct list_head	*iolist)
>  {
>  	struct xfs_ioend	*ioend = wpc->ioend;
> +	struct bio		*new;
>  
>  	if (!ioend || wpc->io_type != ioend->io_type ||
>  	    bh->b_blocknr != wpc->last_block + 1) {
> +		new = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES,
> +					xfs_ioend_bioset);
> +		new->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
> +		new->bi_bdev = bh->b_bdev;
> +	

Trailing whitespace on the above line. And FWIW, I'm not a huge fan of
open coding both the bio and ioend allocations; keeping them as helpers
makes it easier to distinguish the higher level algorithm from all of
the structure initialization noise. It looks to me like alloc_ioend()
could remain mostly as is, using the new bioset allocation, and
alloc_ioend_bio() could be inlined and renamed to something like
init_bio_from_bh() or some such.
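
Something like this, perhaps (sketch only, names as suggested above, not
compile tested):

	STATIC void
	xfs_init_bio_from_bh(
		struct bio		*bio,
		struct buffer_head	*bh)
	{
		bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
		bio->bi_bdev = bh->b_bdev;
	}

	STATIC struct xfs_ioend *
	xfs_alloc_ioend(
		struct inode		*inode,
		unsigned int		type,
		xfs_off_t		offset,
		struct buffer_head	*bh)
	{
		struct bio		*bio;
		struct xfs_ioend	*ioend;

		bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, xfs_ioend_bioset);
		xfs_init_bio_from_bh(bio, bh);

		ioend = container_of(bio, struct xfs_ioend, io_inline_bio);
		INIT_LIST_HEAD(&ioend->io_list);
		ioend->io_type = type;
		ioend->io_inode = inode;
		ioend->io_size = 0;
		ioend->io_offset = offset;
		INIT_WORK(&ioend->io_work, xfs_end_io);
		ioend->io_append_trans = NULL;
		ioend->io_bio = bio;
		return ioend;
	}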

>  		if (ioend)
>  			list_add(&ioend->io_list, iolist);
> -		ioend = wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type);
> +
> +		ioend = container_of(new, struct xfs_ioend, io_inline_bio);
> +		INIT_LIST_HEAD(&ioend->io_list);
> +		ioend->io_type = wpc->io_type;
> +		ioend->io_inode = inode;
> +		ioend->io_size = 0;
>  		ioend->io_offset = offset;
> +		INIT_WORK(&ioend->io_work, xfs_end_io);
> +		ioend->io_append_trans = NULL;
> +		ioend->io_bio = new;
> +
> +		wpc->ioend = ioend;
>  	}
>  
>  retry:
> -	if (!ioend->io_bio)
> -		ioend->io_bio = xfs_alloc_ioend_bio(bh);
> -
>  	if (xfs_bio_add_buffer(ioend->io_bio, bh) != bh->b_size) {
> -		xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
> -		ioend->io_bio = NULL;
> +		/*
> +		 * No space left in the bio.
> +		 *
> +		 * Allocate a new one, and chain the old bio to the new one.
> +		 * Note that we have to perform the chaining in this
> +		 * unintuitive order so that the bi_private linkage is set up
> +		 * in the right direction for the traversal in
> +		 * xfs_destroy_ioend().
> +		 */
> +		new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
> +		new->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
> +		new->bi_bdev = bh->b_bdev;
> +
> +		bio_chain(ioend->io_bio, new);
> +		bio_get(ioend->io_bio);	/* for xfs_destroy_ioend */
> +		submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE,
> +				ioend->io_bio);
> +		ioend->io_bio = new;

I'm trying to make sure I grok how this works without knowing much about
the block layer. So we chain the current bio to the new one, the latter
becoming the parent, and submit the old one. It looks to me that this
could result in bio_endio() on the parent, which seems fishy... what am
I missing? IOW, is it safe to submit bios like this before the entire
chain is created?

>  		goto retry;
>  	}
>  
...
> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index 59c9b7b..33aa638 100644
> --- a/fs/xfs/xfs_super.c
> +++ b/fs/xfs/xfs_super.c
...
> @@ -1755,10 +1749,8 @@ xfs_init_zones(void)
>  	kmem_zone_destroy(xfs_bmap_free_item_zone);
>   out_destroy_log_ticket_zone:
>  	kmem_zone_destroy(xfs_log_ticket_zone);
> - out_destroy_ioend_pool:
> -	mempool_destroy(xfs_ioend_pool);
> - out_destroy_ioend_zone:
> -	kmem_zone_destroy(xfs_ioend_zone);
> + out_free_ioend_bioset:
> + 	bioset_free(xfs_ioend_bioset);

Space before tab ^.

>   out:
>  	return -ENOMEM;
>  }
> @@ -1784,9 +1776,7 @@ xfs_destroy_zones(void)
>  	kmem_zone_destroy(xfs_btree_cur_zone);
>  	kmem_zone_destroy(xfs_bmap_free_item_zone);
>  	kmem_zone_destroy(xfs_log_ticket_zone);
> -	mempool_destroy(xfs_ioend_pool);
> -	kmem_zone_destroy(xfs_ioend_zone);
> -
> + 	bioset_free(xfs_ioend_bioset);

Space before tab ^.

Brian

>  }
>  
>  STATIC int __init
> -- 
> 2.1.4
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path
  2016-03-03 15:17   ` Brian Foster
@ 2016-03-04 13:38     ` Brian Foster
  2016-03-11 15:06       ` Christoph Hellwig
  2016-03-11 14:55     ` Christoph Hellwig
  1 sibling, 1 reply; 16+ messages in thread
From: Brian Foster @ 2016-03-04 13:38 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs

On Thu, Mar 03, 2016 at 10:17:30AM -0500, Brian Foster wrote:
> On Wed, Feb 24, 2016 at 09:20:11AM +0100, Christoph Hellwig wrote:
> > This patch implements two closely related changes:  First it embeds a
> > bio in the ioend structure so that we don't have to allocate one separately.
> > Second it uses the block layer bio chaining mechanism to chain additional
> > bios off this first one if needed instead of manually accounting for
> > multiple bio completions in the ioend structure.  Together this removes a
> > memory allocation per ioend and greatly simplifies the ioend setup and
> > I/O completion path.
> > 
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > ---
> >  fs/xfs/xfs_aops.c  | 217 ++++++++++++++++++++---------------------------------
> >  fs/xfs/xfs_aops.h  |  15 ++--
> >  fs/xfs/xfs_super.c |  26 ++-----
> >  3 files changed, 93 insertions(+), 165 deletions(-)
> > 
> > diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> > index fc4fed6..1ea4167 100644
> > --- a/fs/xfs/xfs_aops.c
> > +++ b/fs/xfs/xfs_aops.c
...
> >  	if (xfs_bio_add_buffer(ioend->io_bio, bh) != bh->b_size) {
> > -		xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
> > -		ioend->io_bio = NULL;
> > +		/*
> > +		 * No space left in the bio.
> > +		 *
> > +		 * Allocate a new one, and chain the old bio to the new one.
> > +		 * Note that we have to perform the chaining in this
> > +		 * unintuitive order so that the bi_private linkage is set up
> > +		 * in the right direction for the traversal in
> > +		 * xfs_destroy_ioend().
> > +		 */
> > +		new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
> > +		new->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
> > +		new->bi_bdev = bh->b_bdev;
> > +
> > +		bio_chain(ioend->io_bio, new);
> > +		bio_get(ioend->io_bio);	/* for xfs_destroy_ioend */
> > +		submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE,
> > +				ioend->io_bio);
> > +		ioend->io_bio = new;
> 
> I'm trying to make sure I grok how this works without knowing much about
> the block layer. So we chain the current bio to the new one, the latter
> becoming the parent, and submit the old one. It looks to me that this
> could result in bio_endio() on the parent, which seems fishy... what am
> I missing? IOW, is it safe to submit bios like this before the entire
> chain is created?
> 

Responding to my own question...

Dave pointed out on IRC yesterday that bio chaining does reference
counting, similar to the xfs_ioend, to ensure that bi_end_io() of the
parent is never called until the chain is complete. I took a closer
look at the code and managed to convince myself that this should work
fine. When the old bio is chained to the new one, the new bio becomes
the parent and its remaining I/O count is elevated. If the child bio
happens to complete while the parent is still being built, bio_endio()
on the child propagates any error to the parent and calls
bio_remaining_done() on the parent, which drops the extra reference it
held. The parent can still subsequently be chained to another bio
itself or have the XFS bi_end_io() callback set.
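
For reference, a simplified sketch of the chaining mechanics as I
understand them (paraphrased from memory, not verbatim block/bio.c):

static void bio_chain_endio(struct bio *bio)
{
        struct bio *parent = bio->bi_private;

        /* propagate the child's error and complete into the parent */
        parent->bi_error = bio->bi_error;
        bio_endio(parent);
        bio_put(bio);
}

void bio_chain(struct bio *bio, struct bio *parent)
{
        BUG_ON(bio->bi_private || bio->bi_end_io);

        /* the child completes into the parent ... */
        bio->bi_private = parent;
        bio->bi_end_io = bio_chain_endio;

        /* ... which holds an extra remaining count until the child is done */
        bio_inc_remaining(parent);
}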

One thing I'm still a bit suspicious about is whether the error
propagation is racy. For example, consider we've created two chained
bios A and B, such that A is the parent and thus the remaining count
for each is A(2) and B(1). Suppose bio A happens to complete first with
an error. A->bi_error is set and bio_endio(A) is called, which IIUC
basically just drops A's remaining count from 2 to 1. If bio B then
completes successfully, B->bi_error presumably remains 0 and
bio_endio(B) is called. The latter sees that B->bi_end_io ==
bio_chain_endio, propagates B->bi_error to A->bi_error unconditionally,
and then walks up to the parent bio to drop its reference and finally
call A->bi_end_io().

Doesn't this mean that we can potentially lose errors in the chain? I
could easily still be missing something here...

Brian

> >  		goto retry;
> >  	}
> >  
> ...
> > diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> > index 59c9b7b..33aa638 100644
> > --- a/fs/xfs/xfs_super.c
> > +++ b/fs/xfs/xfs_super.c
> ...
> > @@ -1755,10 +1749,8 @@ xfs_init_zones(void)
> >  	kmem_zone_destroy(xfs_bmap_free_item_zone);
> >   out_destroy_log_ticket_zone:
> >  	kmem_zone_destroy(xfs_log_ticket_zone);
> > - out_destroy_ioend_pool:
> > -	mempool_destroy(xfs_ioend_pool);
> > - out_destroy_ioend_zone:
> > -	kmem_zone_destroy(xfs_ioend_zone);
> > + out_free_ioend_bioset:
> > + 	bioset_free(xfs_ioend_bioset);
> 
> Space before tab ^.
> 
> >   out:
> >  	return -ENOMEM;
> >  }
> > @@ -1784,9 +1776,7 @@ xfs_destroy_zones(void)
> >  	kmem_zone_destroy(xfs_btree_cur_zone);
> >  	kmem_zone_destroy(xfs_bmap_free_item_zone);
> >  	kmem_zone_destroy(xfs_log_ticket_zone);
> > -	mempool_destroy(xfs_ioend_pool);
> > -	kmem_zone_destroy(xfs_ioend_zone);
> > -
> > + 	bioset_free(xfs_ioend_bioset);
> 
> Space before tab ^.
> 
> Brian
> 
> >  }
> >  
> >  STATIC int __init
> > -- 
> > 2.1.4
> > 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] xfs: don't release bios on completion immediately
  2016-03-03 15:17   ` Brian Foster
@ 2016-03-11 14:47     ` Christoph Hellwig
  2016-03-11 17:52       ` Brian Foster
  0 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2016-03-11 14:47 UTC (permalink / raw)
  To: Brian Foster; +Cc: Dave Chinner, Christoph Hellwig, xfs

On Thu, Mar 03, 2016 at 10:17:22AM -0500, Brian Foster wrote:
> > +xfs_finish_page_writeback(
> > +	struct page	*page,
> > +	unsigned int	start,
> > +	unsigned int	end,
> > +	int		error)
> > +{
> > +	struct buffer_head	*head, *bh;
> > +	unsigned int		off = 0;
> > +
> > +	bh = head = page_buffers(page);
> > +
> > +	do {
> > +		if (start > off)
> > +			goto next_bh;
> 
> Probably not an issue for current usage, which appears to be on buffer
> size granularity, but shouldn't this check whether start is beyond the
> end of the current buffer (e.g., start >= off + bh->b_size)?

I don't understand that question.  We get called for a given page,
and a start and end offset inside that page.  These offsets by design
need to be aligned to the filesystem blocksize.  So what we basically
want is to skip a few buffers at the beginning and/or end of the page,
and the code seems to handle that fine.
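
With block aligned offsets the whole loop boils down to something like
this (condensed sketch, not the literal patch; assumes end is the last
byte covered by the I/O):

        struct buffer_head      *head, *bh, *next;
        unsigned int            off = 0;

        bh = head = page_buffers(page);
        do {
                next = bh->b_this_page;
                if (start > off)
                        goto next_bh;   /* buffer entirely before the range */
                if (off > end)
                        break;          /* buffer entirely after the range */
                bh->b_end_io(bh, !error);       /* end writeback on this buffer */
next_bh:
                off += bh->b_size;
        } while ((bh = next) != head);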

> >  	mempool_free(ioend, xfs_ioend_pool);
> >  }
> >  
> > +
> 
> Unnecessary whitespace here.

Fixed.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path
  2016-03-03 15:17   ` Brian Foster
  2016-03-04 13:38     ` Brian Foster
@ 2016-03-11 14:55     ` Christoph Hellwig
  1 sibling, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2016-03-11 14:55 UTC (permalink / raw)
  To: Brian Foster; +Cc: Christoph Hellwig, xfs

On Thu, Mar 03, 2016 at 10:17:30AM -0500, Brian Foster wrote:
> > +
> > +		/*
> > +		 * For the last bio, bi_private points to the ioend, so we
> > +		 * need to explicitly end the iteration here.
> > +		 */
> 
> Do you mean the last bio is pointed to by the ioend?

No, bio->bi_private of the last bio points to the ioend.
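
The completion-side walk that relies on this looks roughly like the
following (condensed sketch; the name of the embedded first bio field
and the error variable are approximations of what's in the patch):

        struct bio              *last = ioend->io_bio;
        struct bio              *bio, *next;
        struct bio_vec          *bvec;
        int                     i;

        for (bio = &ioend->io_inline_bio; bio; bio = next) {
                /*
                 * For every bio but the last, bi_private points to the
                 * next bio in the chain; for the last one it points back
                 * to the ioend, so end the iteration explicitly here.
                 */
                if (bio == last)
                        next = NULL;
                else
                        next = bio->bi_private;

                /* end page writeback on all buffers covered by this bio */
                bio_for_each_segment_all(bvec, bio, i)
                        xfs_finish_page_writeback(bvec->bv_page,
                                        bvec->bv_offset,
                                        bvec->bv_offset + bvec->bv_len - 1,
                                        error);

                bio_put(bio);
        }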

> > @@ -541,10 +457,8 @@ xfs_submit_ioend(
> >  	 * at this point in time.
> >  	 */
> >  error_finish:
> > -	if (ioend->io_bio)
> > -		bio_put(ioend->io_bio);
> > -	ioend->io_error = status;
> > -	xfs_finish_ioend(ioend);
> > +	ioend->io_bio->bi_error = status;
> > +	bio_endio(ioend->io_bio);
> >  	return status;
> 
> bi_end_io is not set here, so what happens to the buffers added to the
> ioend in this case?

We're not calling the end_io function we should be, good catch and fixed.
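
I.e. the error path needs to hook the ioend completion up to the bio
before ending it; something along these lines (sketch only, using the
existing xfs_end_bio handler):

error_finish:
        /*
         * Make sure ending the bio runs the ioend I/O completion so the
         * buffers attached to the ioend get unwound properly.
         */
        ioend->io_bio->bi_end_io = xfs_end_bio;
        ioend->io_bio->bi_error = status;
        bio_endio(ioend->io_bio);
        return status;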

> Trailing whitespace on the above line.

Ok, fixed.

> And FWIW, I'm not a huge fan of
> open coding both the bio and ioend allocations. Keeping them in helpers
> makes it easier to distinguish the higher level algorithm from all of the
> structure initialization noise. It looks to me like alloc_ioend() could remain
> mostly as is, using the new bioset allocation, and alloc_ioend_bio()
> could be inlined and renamed to something like init_bio_from_bh() or
> some such.

Hmm, not a huge fan of single-use functions like this in general, but
I'll see if I can do something sensible.
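
Maybe something like this for the bio side (untested sketch of the
helper Brian suggests; the name is made up):

static void
xfs_init_bio_from_bh(
        struct bio              *bio,
        struct buffer_head      *bh)
{
        bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
        bio->bi_bdev = bh->b_bdev;
}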

> I'm trying to make sure I grok how this works without knowing much about
> the block layer. So we chain the current bio to the new one, the latter
> becoming the parent, and submit the old one. It looks to me that this
> could result in bio_endio() on the parent, which seems fishy... what am
> I missing? IOW, is it safe to submit bios like this before the entire
> chain is created?

Ignoring this for now and jumping to the next reply..

> > + out_free_ioend_bioset:
> > + 	bioset_free(xfs_ioend_bioset);
> 
> Space before tab ^.

Fixed.

> > + 	bioset_free(xfs_ioend_bioset);
> 
> Space before tab ^.

Fixed.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path
  2016-03-04 13:38     ` Brian Foster
@ 2016-03-11 15:06       ` Christoph Hellwig
  0 siblings, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2016-03-11 15:06 UTC (permalink / raw)
  To: Brian Foster; +Cc: Christoph Hellwig, xfs

On Fri, Mar 04, 2016 at 08:38:55AM -0500, Brian Foster wrote:
> One thing I'm still a bit suspicious about is whether the error
> propagation is racy. For example, consider we've created two chained
> bios A and B, such that A is the parent and thus the remaining count
> for each is A(2) and B(1). Suppose bio A happens to complete first with
> an error. A->bi_error is set and bio_endio(A) is called, which IIUC
> basically just drops A's remaining count from 2 to 1. If bio B then
> completes successfully, B->bi_error presumably remains 0 and
> bio_endio(B) is called. The latter sees that B->bi_end_io ==
> bio_chain_endio, propagates B->bi_error to A->bi_error unconditionally,
> and then walks up to the parent bio to drop its reference and finally
> call A->bi_end_io().
> 
> Doesn't this mean that we can potentially lose errors in the chain? I
> could easily still be missing something here...

Yes, it looks like bio_chain_endio and bio_endio should be fixed
to only set parent->bi_error if it's not already set.  I'll send a 
patch.
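
For the chaining case that would be something like this (sketch only;
the actual patch will need to handle the equivalent path in bio_endio()
as well):

static void bio_chain_endio(struct bio *bio)
{
        struct bio *parent = bio->bi_private;

        /* don't let a successful sibling clobber an error already recorded */
        if (!parent->bi_error)
                parent->bi_error = bio->bi_error;
        bio_endio(parent);
        bio_put(bio);
}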

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] xfs: don't release bios on completion immediately
  2016-03-11 14:47     ` Christoph Hellwig
@ 2016-03-11 17:52       ` Brian Foster
  0 siblings, 0 replies; 16+ messages in thread
From: Brian Foster @ 2016-03-11 17:52 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs, Dave Chinner

On Fri, Mar 11, 2016 at 03:47:58PM +0100, Christoph Hellwig wrote:
> On Thu, Mar 03, 2016 at 10:17:22AM -0500, Brian Foster wrote:
> > > +xfs_finish_page_writeback(
> > > +	struct page	*page,
> > > +	unsigned int	start,
> > > +	unsigned int	end,
> > > +	int		error)
> > > +{
> > > +	struct buffer_head	*head, *bh;
> > > +	unsigned int		off = 0;
> > > +
> > > +	bh = head = page_buffers(page);
> > > +
> > > +	do {
> > > +		if (start > off)
> > > +			goto next_bh;
> > 
> > Probably not an issue for current usage, which appears to be on buffer
> > size granularity, but shouldn't this check whether start is beyond the
> > end of the current buffer (e.g., start >= off + bh->b_size)?
> 
> I don't understand that question.  We get called for a given page,
> and a start and end offset inside that page.  These offsets by design
> need to be aligned to the filesystem blocksize.  So what we basically
> want is to skip a few buffers at the beginning and/or end of the page,
> and the code seems to handle that fine.
> 

Yes, that's what I meant by it not being an issue for current usage.

My point was that, looking at the new helper, nothing explicitly
indicates the offsets must be block aligned. This page buffer_head
walking loop is reimplemented all over fs/ in a manner that is correct
for arbitrary offsets, so this instance looked wrong at first glance.
Then I realized it probably works fine once I looked at the callers.

I'm fine with it as is if we add an assert for the alignment of the
offsets.
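
E.g. something along these lines right after grabbing the buffers
(sketch; whether the end offset can be checked the same way depends on
whether it is inclusive):

        bh = head = page_buffers(page);

        /* callers must pass in offsets aligned to the block (buffer) size */
        ASSERT(!(start % bh->b_size));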

Brian

> > >  	mempool_free(ioend, xfs_ioend_pool);
> > >  }
> > >  
> > > +
> > 
> > Unnecessary whitespace here.
> 
> Fixed.
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend
  2016-03-16 11:44 ` [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend Christoph Hellwig
@ 2016-03-17 13:05   ` Brian Foster
  0 siblings, 0 replies; 16+ messages in thread
From: Brian Foster @ 2016-03-17 13:05 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Dave Chinner, xfs

On Wed, Mar 16, 2016 at 12:44:39PM +0100, Christoph Hellwig wrote:
> Currently adding a buffer to the ioend and then building a bio from
> the buffer list are two separate operations. We don't build the bios
> and submit them until the ioend is submitted, and this places a
> fixed dependency on bufferhead chaining in the ioend.
> 
> The first step to removing the bufferhead chaining in the ioend is
> on the IO submission side. We can build the bio directly as we add
> the buffers to the ioend chain, thereby removing the need for a
> later "buffer-to-bio" submission loop. This allows us to submit
> bios on large ioends as soon as we cannot add more data to the bio.
> 
> These bios then get captured by the active plug, and hence will be
> dispatched as soon as either the plug overflows or we schedule away
> from the writeback context. This will reduce submission latency for
> large IOs, but will also allow more timely request queue based
> writeback blocking when the device becomes congested.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> [hch: various small updates]
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---

Reviewed-by: Brian Foster <bfoster@redhat.com>

>  fs/xfs/xfs_aops.c | 69 +++++++++++++++++++++++++------------------------------
>  fs/xfs/xfs_aops.h |  1 +
>  2 files changed, 32 insertions(+), 38 deletions(-)
> 
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 75a39a8..17cc6cc 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -274,6 +274,7 @@ xfs_alloc_ioend(
>  	xfs_ioend_t		*ioend;
>  
>  	ioend = mempool_alloc(xfs_ioend_pool, GFP_NOFS);
> +	memset(ioend, 0, sizeof(*ioend));
>  
>  	/*
>  	 * Set the count to 1 initially, which will prevent an I/O
> @@ -281,16 +282,9 @@ xfs_alloc_ioend(
>  	 * all the I/O from calling the completion routine too early.
>  	 */
>  	atomic_set(&ioend->io_remaining, 1);
> -	ioend->io_error = 0;
>  	INIT_LIST_HEAD(&ioend->io_list);
>  	ioend->io_type = type;
>  	ioend->io_inode = inode;
> -	ioend->io_buffer_head = NULL;
> -	ioend->io_buffer_tail = NULL;
> -	ioend->io_offset = 0;
> -	ioend->io_size = 0;
> -	ioend->io_append_trans = NULL;
> -
>  	INIT_WORK(&ioend->io_work, xfs_end_io);
>  	return ioend;
>  }
> @@ -452,13 +446,18 @@ static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
>  }
>  
>  /*
> - * Submit all of the bios for an ioend. We are only passed a single ioend at a
> - * time; the caller is responsible for chaining prior to submission.
> + * Submit the bio for an ioend. We are passed an ioend with a bio attached to
> + * it, and we submit that bio. The ioend may be used for multiple bio
> + * submissions, so we only want to allocate an append transaction for the ioend
> + * once. In the case of multiple bio submission, each bio will take an IO
> + * reference to the ioend to ensure that the ioend completion is only done once
> + * all bios have been submitted and the ioend is really done.
>   *
>   * If @fail is non-zero, it means that we have a situation where some part of
>   * the submission process has failed after we have marked paged for writeback
> - * and unlocked them. In this situation, we need to fail the ioend chain rather
> - * than submit it to IO. This typically only happens on a filesystem shutdown.
> + * and unlocked them. In this situation, we need to fail the bio and ioend
> + * rather than submit it to IO. This typically only happens on a filesystem
> + * shutdown.
>   */
>  STATIC int
>  xfs_submit_ioend(
> @@ -466,14 +465,13 @@ xfs_submit_ioend(
>  	xfs_ioend_t		*ioend,
>  	int			status)
>  {
> -	struct buffer_head	*bh;
> -	struct bio		*bio;
> -	sector_t		lastblock = 0;
> -
>  	/* Reserve log space if we might write beyond the on-disk inode size. */
>  	if (!status &&
> -	     ioend->io_type != XFS_IO_UNWRITTEN && xfs_ioend_is_append(ioend))
> +	    ioend->io_bio && ioend->io_type != XFS_IO_UNWRITTEN &&
> +	    xfs_ioend_is_append(ioend) &&
> +	    !ioend->io_append_trans)
>  		status = xfs_setfilesize_trans_alloc(ioend);
> +
>  	/*
>  	 * If we are failing the IO now, just mark the ioend with an
>  	 * error and finish it. This will run IO completion immediately
> @@ -481,31 +479,15 @@ xfs_submit_ioend(
>  	 * time.
>  	 */
>  	if (status) {
> +		if (ioend->io_bio)
> +			bio_put(ioend->io_bio);
>  		ioend->io_error = status;
>  		xfs_finish_ioend(ioend);
>  		return status;
>  	}
>  
> -	bio = NULL;
> -	for (bh = ioend->io_buffer_head; bh; bh = bh->b_private) {
> -
> -		if (!bio) {
> -retry:
> -			bio = xfs_alloc_ioend_bio(bh);
> -		} else if (bh->b_blocknr != lastblock + 1) {
> -			xfs_submit_ioend_bio(wbc, ioend, bio);
> -			goto retry;
> -		}
> -
> -		if (xfs_bio_add_buffer(bio, bh) != bh->b_size) {
> -			xfs_submit_ioend_bio(wbc, ioend, bio);
> -			goto retry;
> -		}
> -
> -		lastblock = bh->b_blocknr;
> -	}
> -	if (bio)
> -		xfs_submit_ioend_bio(wbc, ioend, bio);
> +	xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
> +	ioend->io_bio = NULL;
>  	xfs_finish_ioend(ioend);
>  	return 0;
>  }
> @@ -523,6 +505,7 @@ xfs_add_to_ioend(
>  	struct buffer_head	*bh,
>  	xfs_off_t		offset,
>  	struct xfs_writepage_ctx *wpc,
> +	struct writeback_control *wbc,
>  	struct list_head	*iolist)
>  {
>  	if (!wpc->ioend || wpc->io_type != wpc->ioend->io_type ||
> @@ -542,8 +525,18 @@ xfs_add_to_ioend(
>  		wpc->ioend->io_buffer_tail->b_private = bh;
>  		wpc->ioend->io_buffer_tail = bh;
>  	}
> -
>  	bh->b_private = NULL;
> +
> +retry:
> +	if (!wpc->ioend->io_bio)
> +		wpc->ioend->io_bio = xfs_alloc_ioend_bio(bh);
> +
> +	if (xfs_bio_add_buffer(wpc->ioend->io_bio, bh) != bh->b_size) {
> +		xfs_submit_ioend_bio(wbc, wpc->ioend, wpc->ioend->io_bio);
> +		wpc->ioend->io_bio = NULL;
> +		goto retry;
> +	}
> +
>  	wpc->ioend->io_size += bh->b_size;
>  	wpc->last_block = bh->b_blocknr;
>  	xfs_start_buffer_writeback(bh);
> @@ -803,7 +796,7 @@ xfs_writepage_map(
>  			lock_buffer(bh);
>  			if (wpc->io_type != XFS_IO_OVERWRITE)
>  				xfs_map_at_offset(inode, bh, &wpc->imap, offset);
> -			xfs_add_to_ioend(inode, bh, offset, wpc, &submit_list);
> +			xfs_add_to_ioend(inode, bh, offset, wpc, wbc, &submit_list);
>  			count++;
>  		}
>  
> diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
> index 4e01bd5..c89c3bd 100644
> --- a/fs/xfs/xfs_aops.h
> +++ b/fs/xfs/xfs_aops.h
> @@ -52,6 +52,7 @@ typedef struct xfs_ioend {
>  	xfs_off_t		io_offset;	/* offset in the file */
>  	struct work_struct	io_work;	/* xfsdatad work queue */
>  	struct xfs_trans	*io_append_trans;/* xact. for size update */
> +	struct bio		*io_bio;	/* bio being built */
>  } xfs_ioend_t;
>  
>  extern const struct address_space_operations xfs_address_space_operations;
> -- 
> 2.1.4
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend
  2016-03-16 11:44 further writeback updates V2 Christoph Hellwig
@ 2016-03-16 11:44 ` Christoph Hellwig
  2016-03-17 13:05   ` Brian Foster
  0 siblings, 1 reply; 16+ messages in thread
From: Christoph Hellwig @ 2016-03-16 11:44 UTC (permalink / raw)
  To: xfs; +Cc: bfoster, Dave Chinner

Currently adding a buffer to the ioend and then building a bio from
the buffer list are two separate operations. We don't build the bios
and submit them until the ioend is submitted, and this places a
fixed dependency on bufferhead chaining in the ioend.

The first step to removing the bufferhead chaining in the ioend is
on the IO submission side. We can build the bio directly as we add
the buffers to the ioend chain, thereby removing the need for a
later "buffer-to-bio" submission loop. This allows us to submit
bios on large ioends as soon as we cannot add more data to the bio.

These bios then get captured by the active plug, and hence will be
dispatched as soon as either the plug overflows or we schedule away
from the writeback context. This will reduce submission latency for
large IOs, but will also allow more timely request queue based
writeback blocking when the device becomes congested.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
[hch: various small updates]
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/xfs/xfs_aops.c | 69 +++++++++++++++++++++++++------------------------------
 fs/xfs/xfs_aops.h |  1 +
 2 files changed, 32 insertions(+), 38 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 75a39a8..17cc6cc 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -274,6 +274,7 @@ xfs_alloc_ioend(
 	xfs_ioend_t		*ioend;
 
 	ioend = mempool_alloc(xfs_ioend_pool, GFP_NOFS);
+	memset(ioend, 0, sizeof(*ioend));
 
 	/*
 	 * Set the count to 1 initially, which will prevent an I/O
@@ -281,16 +282,9 @@ xfs_alloc_ioend(
 	 * all the I/O from calling the completion routine too early.
 	 */
 	atomic_set(&ioend->io_remaining, 1);
-	ioend->io_error = 0;
 	INIT_LIST_HEAD(&ioend->io_list);
 	ioend->io_type = type;
 	ioend->io_inode = inode;
-	ioend->io_buffer_head = NULL;
-	ioend->io_buffer_tail = NULL;
-	ioend->io_offset = 0;
-	ioend->io_size = 0;
-	ioend->io_append_trans = NULL;
-
 	INIT_WORK(&ioend->io_work, xfs_end_io);
 	return ioend;
 }
@@ -452,13 +446,18 @@ static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
 }
 
 /*
- * Submit all of the bios for an ioend. We are only passed a single ioend at a
- * time; the caller is responsible for chaining prior to submission.
+ * Submit the bio for an ioend. We are passed an ioend with a bio attached to
+ * it, and we submit that bio. The ioend may be used for multiple bio
+ * submissions, so we only want to allocate an append transaction for the ioend
+ * once. In the case of multiple bio submission, each bio will take an IO
+ * reference to the ioend to ensure that the ioend completion is only done once
+ * all bios have been submitted and the ioend is really done.
  *
  * If @fail is non-zero, it means that we have a situation where some part of
  * the submission process has failed after we have marked paged for writeback
- * and unlocked them. In this situation, we need to fail the ioend chain rather
- * than submit it to IO. This typically only happens on a filesystem shutdown.
+ * and unlocked them. In this situation, we need to fail the bio and ioend
+ * rather than submit it to IO. This typically only happens on a filesystem
+ * shutdown.
  */
 STATIC int
 xfs_submit_ioend(
@@ -466,14 +465,13 @@ xfs_submit_ioend(
 	xfs_ioend_t		*ioend,
 	int			status)
 {
-	struct buffer_head	*bh;
-	struct bio		*bio;
-	sector_t		lastblock = 0;
-
 	/* Reserve log space if we might write beyond the on-disk inode size. */
 	if (!status &&
-	     ioend->io_type != XFS_IO_UNWRITTEN && xfs_ioend_is_append(ioend))
+	    ioend->io_bio && ioend->io_type != XFS_IO_UNWRITTEN &&
+	    xfs_ioend_is_append(ioend) &&
+	    !ioend->io_append_trans)
 		status = xfs_setfilesize_trans_alloc(ioend);
+
 	/*
 	 * If we are failing the IO now, just mark the ioend with an
 	 * error and finish it. This will run IO completion immediately
@@ -481,31 +479,15 @@ xfs_submit_ioend(
 	 * time.
 	 */
 	if (status) {
+		if (ioend->io_bio)
+			bio_put(ioend->io_bio);
 		ioend->io_error = status;
 		xfs_finish_ioend(ioend);
 		return status;
 	}
 
-	bio = NULL;
-	for (bh = ioend->io_buffer_head; bh; bh = bh->b_private) {
-
-		if (!bio) {
-retry:
-			bio = xfs_alloc_ioend_bio(bh);
-		} else if (bh->b_blocknr != lastblock + 1) {
-			xfs_submit_ioend_bio(wbc, ioend, bio);
-			goto retry;
-		}
-
-		if (xfs_bio_add_buffer(bio, bh) != bh->b_size) {
-			xfs_submit_ioend_bio(wbc, ioend, bio);
-			goto retry;
-		}
-
-		lastblock = bh->b_blocknr;
-	}
-	if (bio)
-		xfs_submit_ioend_bio(wbc, ioend, bio);
+	xfs_submit_ioend_bio(wbc, ioend, ioend->io_bio);
+	ioend->io_bio = NULL;
 	xfs_finish_ioend(ioend);
 	return 0;
 }
@@ -523,6 +505,7 @@ xfs_add_to_ioend(
 	struct buffer_head	*bh,
 	xfs_off_t		offset,
 	struct xfs_writepage_ctx *wpc,
+	struct writeback_control *wbc,
 	struct list_head	*iolist)
 {
 	if (!wpc->ioend || wpc->io_type != wpc->ioend->io_type ||
@@ -542,8 +525,18 @@ xfs_add_to_ioend(
 		wpc->ioend->io_buffer_tail->b_private = bh;
 		wpc->ioend->io_buffer_tail = bh;
 	}
-
 	bh->b_private = NULL;
+
+retry:
+	if (!wpc->ioend->io_bio)
+		wpc->ioend->io_bio = xfs_alloc_ioend_bio(bh);
+
+	if (xfs_bio_add_buffer(wpc->ioend->io_bio, bh) != bh->b_size) {
+		xfs_submit_ioend_bio(wbc, wpc->ioend, wpc->ioend->io_bio);
+		wpc->ioend->io_bio = NULL;
+		goto retry;
+	}
+
 	wpc->ioend->io_size += bh->b_size;
 	wpc->last_block = bh->b_blocknr;
 	xfs_start_buffer_writeback(bh);
@@ -803,7 +796,7 @@ xfs_writepage_map(
 			lock_buffer(bh);
 			if (wpc->io_type != XFS_IO_OVERWRITE)
 				xfs_map_at_offset(inode, bh, &wpc->imap, offset);
-			xfs_add_to_ioend(inode, bh, offset, wpc, &submit_list);
+			xfs_add_to_ioend(inode, bh, offset, wpc, wbc, &submit_list);
 			count++;
 		}
 
diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
index 4e01bd5..c89c3bd 100644
--- a/fs/xfs/xfs_aops.h
+++ b/fs/xfs/xfs_aops.h
@@ -52,6 +52,7 @@ typedef struct xfs_ioend {
 	xfs_off_t		io_offset;	/* offset in the file */
 	struct work_struct	io_work;	/* xfsdatad work queue */
 	struct xfs_trans	*io_append_trans;/* xact. for size update */
+	struct bio		*io_bio;	/* bio being built */
 } xfs_ioend_t;
 
 extern const struct address_space_operations xfs_address_space_operations;
-- 
2.1.4

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply related	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2016-03-17 13:05 UTC | newest]

Thread overview: 16+ messages
-- links below jump to the message on this page --
2016-02-24  8:20 futher writeback updates Christoph Hellwig
2016-02-24  8:20 ` [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend Christoph Hellwig
2016-03-03 15:17   ` Brian Foster
2016-02-24  8:20 ` [PATCH 2/3] xfs: don't release bios on completion immediately Christoph Hellwig
2016-03-03 15:17   ` Brian Foster
2016-03-11 14:47     ` Christoph Hellwig
2016-03-11 17:52       ` Brian Foster
2016-02-24  8:20 ` [PATCH 3/3] xfs: optimize bio handling in the buffer writeback path Christoph Hellwig
2016-03-03 15:17   ` Brian Foster
2016-03-04 13:38     ` Brian Foster
2016-03-11 15:06       ` Christoph Hellwig
2016-03-11 14:55     ` Christoph Hellwig
2016-03-01 13:01 ` futher writeback updates Christoph Hellwig
2016-03-01 21:42   ` Dave Chinner
2016-03-16 11:44 further writeback updates V2 Christoph Hellwig
2016-03-16 11:44 ` [PATCH 1/3] xfs: build bios directly in xfs_add_to_ioend Christoph Hellwig
2016-03-17 13:05   ` Brian Foster
