* [PATCH 0/7] Per-bdi writeback flusher threads v20
@ 2009-09-11  7:34 Jens Axboe
  2009-09-11  7:34 ` [PATCH 1/7] writeback: get rid of generic_sync_sb_inodes() export Jens Axboe
                   ` (7 more replies)
  0 siblings, 8 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11  7:34 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel; +Cc: chris.mason, hch, tytso, akpm, jack

Hi,

(sorry if you receive this twice; the original posting had a mangled
 From address).

This is the 20th release of the writeback patchset. Changes since
v19 include:

- Drop the max writeback pages patch from Ted. I think we should do
  something to that effect, but there's really no reason to entangle
  it with this patchset.
- Fix two checkpatch warnings on missing KERN_*.
- Rebase to 2.6.31; the aoe patch conflicted with v19.

So essentially only a few cosmetic changes, and the dropping of a patch.
Please review and ack; I'd like to send this in very shortly. Thanks!

 b/block/blk-core.c                 |    1 
 b/drivers/block/aoe/aoeblk.c       |    1 
 b/drivers/char/mem.c               |    1 
 b/drivers/staging/pohmelfs/inode.c |    9 
 b/fs/btrfs/disk-io.c               |    1 
 b/fs/buffer.c                      |    2 
 b/fs/char_dev.c                    |    1 
 b/fs/configfs/inode.c              |    1 
 b/fs/fs-writeback.c                | 1065 +++++++++++++++++++++--------
 b/fs/fuse/inode.c                  |    1 
 b/fs/hugetlbfs/inode.c             |    1 
 b/fs/nfs/client.c                  |    1 
 b/fs/ocfs2/dlm/dlmfs.c             |    1 
 b/fs/ramfs/inode.c                 |    1 
 b/fs/super.c                       |    5 
 b/fs/sync.c                        |   20 
 b/fs/sysfs/inode.c                 |    1 
 b/fs/ubifs/budget.c                |   16 
 b/fs/ubifs/super.c                 |    9 
 b/include/linux/backing-dev.h      |   55 +
 b/include/linux/fs.h               |    9 
 b/include/linux/writeback.h        |   23 
 b/kernel/cgroup.c                  |    1 
 b/mm/Makefile                      |    2 
 b/mm/backing-dev.c                 |  381 ++++++++++
 b/mm/page-writeback.c              |  182 ----
 b/mm/swap_state.c                  |    1 
 b/mm/vmscan.c                      |    2 
 mm/pdflush.c                       |  269 -------
 29 files changed, 1285 insertions(+), 778 deletions(-)

-- 
Jens Axboe



* [PATCH 1/7] writeback: get rid of generic_sync_sb_inodes() export
  2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
@ 2009-09-11  7:34 ` Jens Axboe
  2009-09-11  7:34 ` [PATCH 2/7] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11  7:34 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, hch, tytso, akpm, jack, Jens Axboe

This adds two new exported functions:

- writeback_inodes_sb(), which only attempts to write back dirty inodes on
  this super_block, for WB_SYNC_NONE writeout.
- sync_inodes_sb(), which writes out all dirty inodes on this super_block
  and also waits for the IO to complete.
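
As a minimal usage sketch (not part of the patch; the function name here is
invented for illustration), this is roughly how a sync path uses the two
helpers, mirroring the fs/sync.c hunk further down:

static int example_sync_filesystem(struct super_block *sb, int wait)
{
	/* illustration only -- mirrors __sync_filesystem() in fs/sync.c below */
	if (!wait)
		writeback_inodes_sb(sb);	/* WB_SYNC_NONE: start writeback, don't wait */
	else
		sync_inodes_sb(sb);		/* WB_SYNC_ALL: write out and wait for IO */

	if (sb->s_op->sync_fs)
		sb->s_op->sync_fs(sb, wait);

	return __sync_blockdev(sb->s_bdev, wait);
}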

Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 drivers/staging/pohmelfs/inode.c |    9 +----
 fs/fs-writeback.c                |   70 ++++++++++++++++++++++---------------
 fs/sync.c                        |   18 +++++----
 fs/ubifs/budget.c                |   16 +-------
 fs/ubifs/super.c                 |    8 +----
 include/linux/fs.h               |    2 -
 include/linux/writeback.h        |    3 +-
 7 files changed, 58 insertions(+), 68 deletions(-)

diff --git a/drivers/staging/pohmelfs/inode.c b/drivers/staging/pohmelfs/inode.c
index 7b60579..e63c9be 100644
--- a/drivers/staging/pohmelfs/inode.c
+++ b/drivers/staging/pohmelfs/inode.c
@@ -1950,14 +1950,7 @@ static int pohmelfs_get_sb(struct file_system_type *fs_type,
  */
 static void pohmelfs_kill_super(struct super_block *sb)
 {
-	struct writeback_control wbc = {
-		.sync_mode	= WB_SYNC_ALL,
-		.range_start	= 0,
-		.range_end	= LLONG_MAX,
-		.nr_to_write	= LONG_MAX,
-	};
-	generic_sync_sb_inodes(sb, &wbc);
-
+	sync_inodes_sb(sb);
 	kill_anon_super(sb);
 }
 
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index c54226b..271e5f4 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -458,8 +458,8 @@ writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
  * on the writer throttling path, and we get decent balancing between many
  * throttled threads: we don't want them all piling up on inode_sync_wait.
  */
-void generic_sync_sb_inodes(struct super_block *sb,
-				struct writeback_control *wbc)
+static void generic_sync_sb_inodes(struct super_block *sb,
+				   struct writeback_control *wbc)
 {
 	const unsigned long start = jiffies;	/* livelock avoidance */
 	int sync = wbc->sync_mode == WB_SYNC_ALL;
@@ -593,13 +593,6 @@ void generic_sync_sb_inodes(struct super_block *sb,
 
 	return;		/* Leave any unwritten inodes on s_io */
 }
-EXPORT_SYMBOL_GPL(generic_sync_sb_inodes);
-
-static void sync_sb_inodes(struct super_block *sb,
-				struct writeback_control *wbc)
-{
-	generic_sync_sb_inodes(sb, wbc);
-}
 
 /*
  * Start writeback of dirty pagecache data against all unlocked inodes.
@@ -640,7 +633,7 @@ restart:
 			 */
 			if (down_read_trylock(&sb->s_umount)) {
 				if (sb->s_root)
-					sync_sb_inodes(sb, wbc);
+					generic_sync_sb_inodes(sb, wbc);
 				up_read(&sb->s_umount);
 			}
 			spin_lock(&sb_lock);
@@ -653,35 +646,56 @@ restart:
 	spin_unlock(&sb_lock);
 }
 
-/*
- * writeback and wait upon the filesystem's dirty inodes.  The caller will
- * do this in two passes - one to write, and one to wait.
- *
- * A finite limit is set on the number of pages which will be written.
- * To prevent infinite livelock of sys_sync().
+/**
+ * writeback_inodes_sb	-	writeback dirty inodes from given super_block
+ * @sb: the superblock
  *
- * We add in the number of potentially dirty inodes, because each inode write
- * can dirty pagecache in the underlying blockdev.
+ * Start writeback on some inodes on this super_block. No guarantees are made
+ * on how many (if any) will be written, and this function does not wait
+ * for IO completion of submitted IO. The number of pages submitted is
+ * returned.
  */
-void sync_inodes_sb(struct super_block *sb, int wait)
+long writeback_inodes_sb(struct super_block *sb)
 {
 	struct writeback_control wbc = {
-		.sync_mode	= wait ? WB_SYNC_ALL : WB_SYNC_NONE,
+		.sync_mode	= WB_SYNC_NONE,
 		.range_start	= 0,
 		.range_end	= LLONG_MAX,
 	};
+	unsigned long nr_dirty = global_page_state(NR_FILE_DIRTY);
+	unsigned long nr_unstable = global_page_state(NR_UNSTABLE_NFS);
+	long nr_to_write;
 
-	if (!wait) {
-		unsigned long nr_dirty = global_page_state(NR_FILE_DIRTY);
-		unsigned long nr_unstable = global_page_state(NR_UNSTABLE_NFS);
-
-		wbc.nr_to_write = nr_dirty + nr_unstable +
+	nr_to_write = nr_dirty + nr_unstable +
 			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
-	} else
-		wbc.nr_to_write = LONG_MAX; /* doesn't actually matter */
 
-	sync_sb_inodes(sb, &wbc);
+	wbc.nr_to_write = nr_to_write;
+	generic_sync_sb_inodes(sb, &wbc);
+	return nr_to_write - wbc.nr_to_write;
+}
+EXPORT_SYMBOL(writeback_inodes_sb);
+
+/**
+ * sync_inodes_sb	-	sync sb inode pages
+ * @sb: the superblock
+ *
+ * This function writes and waits on any dirty inode belonging to this
+ * super_block. The number of pages synced is returned.
+ */
+long sync_inodes_sb(struct super_block *sb)
+{
+	struct writeback_control wbc = {
+		.sync_mode	= WB_SYNC_ALL,
+		.range_start	= 0,
+		.range_end	= LLONG_MAX,
+	};
+	long nr_to_write = LONG_MAX; /* doesn't actually matter */
+
+	wbc.nr_to_write = nr_to_write;
+	generic_sync_sb_inodes(sb, &wbc);
+	return nr_to_write - wbc.nr_to_write;
 }
+EXPORT_SYMBOL(sync_inodes_sb);
 
 /**
  * write_inode_now	-	write an inode to disk
diff --git a/fs/sync.c b/fs/sync.c
index 3422ba6..66f2104 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -19,20 +19,22 @@
 			SYNC_FILE_RANGE_WAIT_AFTER)
 
 /*
- * Do the filesystem syncing work. For simple filesystems sync_inodes_sb(sb, 0)
- * just dirties buffers with inodes so we have to submit IO for these buffers
- * via __sync_blockdev(). This also speeds up the wait == 1 case since in that
- * case write_inode() functions do sync_dirty_buffer() and thus effectively
- * write one block at a time.
+ * Do the filesystem syncing work. For simple filesystems
+ * writeback_inodes_sb(sb) just dirties buffers with inodes so we have to
+ * submit IO for these buffers via __sync_blockdev(). This also speeds up the
+ * wait == 1 case since in that case write_inode() functions do
+ * sync_dirty_buffer() and thus effectively write one block at a time.
  */
 static int __sync_filesystem(struct super_block *sb, int wait)
 {
 	/* Avoid doing twice syncing and cache pruning for quota sync */
-	if (!wait)
+	if (!wait) {
 		writeout_quota_sb(sb, -1);
-	else
+		writeback_inodes_sb(sb);
+	} else {
 		sync_quota_sb(sb, -1);
-	sync_inodes_sb(sb, wait);
+		sync_inodes_sb(sb);
+	}
 	if (sb->s_op->sync_fs)
 		sb->s_op->sync_fs(sb, wait);
 	return __sync_blockdev(sb->s_bdev, wait);
diff --git a/fs/ubifs/budget.c b/fs/ubifs/budget.c
index eaf6d89..1c8991b 100644
--- a/fs/ubifs/budget.c
+++ b/fs/ubifs/budget.c
@@ -65,26 +65,14 @@
 static int shrink_liability(struct ubifs_info *c, int nr_to_write)
 {
 	int nr_written;
-	struct writeback_control wbc = {
-		.sync_mode   = WB_SYNC_NONE,
-		.range_end   = LLONG_MAX,
-		.nr_to_write = nr_to_write,
-	};
-
-	generic_sync_sb_inodes(c->vfs_sb, &wbc);
-	nr_written = nr_to_write - wbc.nr_to_write;
 
+	nr_written = writeback_inodes_sb(c->vfs_sb);
 	if (!nr_written) {
 		/*
 		 * Re-try again but wait on pages/inodes which are being
 		 * written-back concurrently (e.g., by pdflush).
 		 */
-		memset(&wbc, 0, sizeof(struct writeback_control));
-		wbc.sync_mode   = WB_SYNC_ALL;
-		wbc.range_end   = LLONG_MAX;
-		wbc.nr_to_write = nr_to_write;
-		generic_sync_sb_inodes(c->vfs_sb, &wbc);
-		nr_written = nr_to_write - wbc.nr_to_write;
+		nr_written = sync_inodes_sb(c->vfs_sb);
 	}
 
 	dbg_budg("%d pages were written back", nr_written);
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index 26d2e0d..8d6050a 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -438,12 +438,6 @@ static int ubifs_sync_fs(struct super_block *sb, int wait)
 {
 	int i, err;
 	struct ubifs_info *c = sb->s_fs_info;
-	struct writeback_control wbc = {
-		.sync_mode   = WB_SYNC_ALL,
-		.range_start = 0,
-		.range_end   = LLONG_MAX,
-		.nr_to_write = LONG_MAX,
-	};
 
 	/*
 	 * Zero @wait is just an advisory thing to help the file system shove
@@ -462,7 +456,7 @@ static int ubifs_sync_fs(struct super_block *sb, int wait)
 	 * the user be able to get more accurate results of 'statfs()' after
 	 * they synchronize the file system.
 	 */
-	generic_sync_sb_inodes(sb, &wbc);
+	sync_inodes_sb(sb);
 
 	/*
 	 * Synchronize write buffers, because 'ubifs_run_commit()' does not
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c1f9935..46ff7dd 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2071,8 +2071,6 @@ static inline void invalidate_remote_inode(struct inode *inode)
 extern int invalidate_inode_pages2(struct address_space *mapping);
 extern int invalidate_inode_pages2_range(struct address_space *mapping,
 					 pgoff_t start, pgoff_t end);
-extern void generic_sync_sb_inodes(struct super_block *sb,
-				struct writeback_control *wbc);
 extern int write_inode_now(struct inode *, int);
 extern int filemap_fdatawrite(struct address_space *);
 extern int filemap_flush(struct address_space *);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 3224820..0703929 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -78,7 +78,8 @@ struct writeback_control {
  */	
 void writeback_inodes(struct writeback_control *wbc);
 int inode_wait(void *);
-void sync_inodes_sb(struct super_block *, int wait);
+long writeback_inodes_sb(struct super_block *);
+long sync_inodes_sb(struct super_block *);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)
-- 
1.6.4.1.207.g68ea



* [PATCH 2/7] writeback: move dirty inodes from super_block to backing_dev_info
  2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
  2009-09-11  7:34 ` [PATCH 1/7] writeback: get rid of generic_sync_sb_inodes() export Jens Axboe
@ 2009-09-11  7:34 ` Jens Axboe
  2009-09-11  7:34 ` [PATCH 3/7] writeback: switch to per-bdi threads for flushing data Jens Axboe
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11  7:34 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, hch, tytso, akpm, jack, Jens Axboe

This is a first step toward introducing per-bdi flusher threads. There
should be no change in behaviour, although sb_has_dirty_inodes() is now
ridiculously expensive, since dirty inodes are no longer tracked per
super_block and there's no easy way to answer that question. Not a huge
problem, since it'll be deleted in subsequent patches.
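
To make the new layout concrete, here is a hedged sketch that simply
mirrors the include/linux/backing-dev.h and fs/fs-writeback.c hunks below:
the dirty lists move into the backing_dev_info, and an inode reaches them
through its mapping:

/* illustration only -- see the hunks below for the real change */
struct backing_dev_info {
	/* ... existing fields ... */
	struct list_head	b_dirty;	/* was super_block->s_dirty */
	struct list_head	b_io;		/* was super_block->s_io */
	struct list_head	b_more_io;	/* was super_block->s_more_io */
};

/* an inode now finds its writeback lists via its mapping's bdi */
#define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)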

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |  197 ++++++++++++++++++++++++++++---------------
 fs/super.c                  |    3 -
 include/linux/backing-dev.h |    9 ++
 include/linux/fs.h          |    5 +-
 mm/backing-dev.c            |   24 +++++
 mm/page-writeback.c         |   11 +--
 6 files changed, 165 insertions(+), 84 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 271e5f4..45ad4bb 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -25,6 +25,7 @@
 #include <linux/buffer_head.h>
 #include "internal.h"
 
+#define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
 /**
  * writeback_acquire - attempt to get exclusive writeback access to a device
@@ -165,12 +166,13 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 			goto out;
 
 		/*
-		 * If the inode was already on s_dirty/s_io/s_more_io, don't
-		 * reposition it (that would break s_dirty time-ordering).
+		 * If the inode was already on b_dirty/b_io/b_more_io, don't
+		 * reposition it (that would break b_dirty time-ordering).
 		 */
 		if (!was_dirty) {
 			inode->dirtied_when = jiffies;
-			list_move(&inode->i_list, &sb->s_dirty);
+			list_move(&inode->i_list,
+					&inode_to_bdi(inode)->b_dirty);
 		}
 	}
 out:
@@ -191,31 +193,30 @@ static int write_inode(struct inode *inode, int sync)
  * furthest end of its superblock's dirty-inode list.
  *
  * Before stamping the inode's ->dirtied_when, we check to see whether it is
- * already the most-recently-dirtied inode on the s_dirty list.  If that is
+ * already the most-recently-dirtied inode on the b_dirty list.  If that is
  * the case then the inode must have been redirtied while it was being written
  * out and we don't reset its dirtied_when.
  */
 static void redirty_tail(struct inode *inode)
 {
-	struct super_block *sb = inode->i_sb;
+	struct backing_dev_info *bdi = inode_to_bdi(inode);
 
-	if (!list_empty(&sb->s_dirty)) {
-		struct inode *tail_inode;
+	if (!list_empty(&bdi->b_dirty)) {
+		struct inode *tail;
 
-		tail_inode = list_entry(sb->s_dirty.next, struct inode, i_list);
-		if (time_before(inode->dirtied_when,
-				tail_inode->dirtied_when))
+		tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+		if (time_before(inode->dirtied_when, tail->dirtied_when))
 			inode->dirtied_when = jiffies;
 	}
-	list_move(&inode->i_list, &sb->s_dirty);
+	list_move(&inode->i_list, &bdi->b_dirty);
 }
 
 /*
- * requeue inode for re-scanning after sb->s_io list is exhausted.
+ * requeue inode for re-scanning after bdi->b_io list is exhausted.
  */
 static void requeue_io(struct inode *inode)
 {
-	list_move(&inode->i_list, &inode->i_sb->s_more_io);
+	list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
 }
 
 static void inode_sync_complete(struct inode *inode)
@@ -262,18 +263,50 @@ static void move_expired_inodes(struct list_head *delaying_queue,
 /*
  * Queue all expired dirty inodes for io, eldest first.
  */
-static void queue_io(struct super_block *sb,
-				unsigned long *older_than_this)
+static void queue_io(struct backing_dev_info *bdi,
+		     unsigned long *older_than_this)
+{
+	list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
+	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+}
+
+static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
 {
-	list_splice_init(&sb->s_more_io, sb->s_io.prev);
-	move_expired_inodes(&sb->s_dirty, &sb->s_io, older_than_this);
+	struct inode *inode;
+	int ret = 0;
+
+	spin_lock(&inode_lock);
+	list_for_each_entry(inode, list, i_list) {
+		if (inode->i_sb == sb) {
+			ret = 1;
+			break;
+		}
+	}
+	spin_unlock(&inode_lock);
+	return ret;
 }
 
 int sb_has_dirty_inodes(struct super_block *sb)
 {
-	return !list_empty(&sb->s_dirty) ||
-	       !list_empty(&sb->s_io) ||
-	       !list_empty(&sb->s_more_io);
+	struct backing_dev_info *bdi;
+	int ret = 0;
+
+	/*
+	 * This is REALLY expensive right now, but it'll go away
+	 * when the bdi writeback is introduced
+	 */
+	mutex_lock(&bdi_lock);
+	list_for_each_entry(bdi, &bdi_list, bdi_list) {
+		if (sb_on_inode_list(sb, &bdi->b_dirty) ||
+		    sb_on_inode_list(sb, &bdi->b_io) ||
+		    sb_on_inode_list(sb, &bdi->b_more_io)) {
+			ret = 1;
+			break;
+		}
+	}
+	mutex_unlock(&bdi_lock);
+
+	return ret;
 }
 EXPORT_SYMBOL(sb_has_dirty_inodes);
 
@@ -322,11 +355,11 @@ writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	if (inode->i_state & I_SYNC) {
 		/*
 		 * If this inode is locked for writeback and we are not doing
-		 * writeback-for-data-integrity, move it to s_more_io so that
+		 * writeback-for-data-integrity, move it to b_more_io so that
 		 * writeback can proceed with the other inodes on s_io.
 		 *
 		 * We'll have another go at writing back this inode when we
-		 * completed a full scan of s_io.
+		 * completed a full scan of b_io.
 		 */
 		if (!wait) {
 			requeue_io(inode);
@@ -371,11 +404,11 @@ writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 			/*
 			 * We didn't write back all the pages.  nfs_writepages()
 			 * sometimes bales out without doing anything. Redirty
-			 * the inode; Move it from s_io onto s_more_io/s_dirty.
+			 * the inode; Move it from b_io onto b_more_io/b_dirty.
 			 */
 			/*
 			 * akpm: if the caller was the kupdate function we put
-			 * this inode at the head of s_dirty so it gets first
+			 * this inode at the head of b_dirty so it gets first
 			 * consideration.  Otherwise, move it to the tail, for
 			 * the reasons described there.  I'm not really sure
 			 * how much sense this makes.  Presumably I had a good
@@ -385,7 +418,7 @@ writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 			if (wbc->for_kupdate) {
 				/*
 				 * For the kupdate function we move the inode
-				 * to s_more_io so it will get more writeout as
+				 * to b_more_io so it will get more writeout as
 				 * soon as the queue becomes uncongested.
 				 */
 				inode->i_state |= I_DIRTY_PAGES;
@@ -433,51 +466,34 @@ writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return ret;
 }
 
-/*
- * Write out a superblock's list of dirty inodes.  A wait will be performed
- * upon no inodes, all inodes or the final one, depending upon sync_mode.
- *
- * If older_than_this is non-NULL, then only write out inodes which
- * had their first dirtying at a time earlier than *older_than_this.
- *
- * If we're a pdflush thread, then implement pdflush collision avoidance
- * against the entire list.
- *
- * If `bdi' is non-zero then we're being asked to writeback a specific queue.
- * This function assumes that the blockdev superblock's inodes are backed by
- * a variety of queues, so all inodes are searched.  For other superblocks,
- * assume that all inodes are backed by the same queue.
- *
- * FIXME: this linear search could get expensive with many fileystems.  But
- * how to fix?  We need to go from an address_space to all inodes which share
- * a queue with that address_space.  (Easy: have a global "dirty superblocks"
- * list).
- *
- * The inodes to be written are parked on sb->s_io.  They are moved back onto
- * sb->s_dirty as they are selected for writing.  This way, none can be missed
- * on the writer throttling path, and we get decent balancing between many
- * throttled threads: we don't want them all piling up on inode_sync_wait.
- */
-static void generic_sync_sb_inodes(struct super_block *sb,
-				   struct writeback_control *wbc)
+static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
+				    struct writeback_control *wbc,
+				    struct super_block *sb)
 {
+	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
 	const unsigned long start = jiffies;	/* livelock avoidance */
-	int sync = wbc->sync_mode == WB_SYNC_ALL;
 
 	spin_lock(&inode_lock);
-	if (!wbc->for_kupdate || list_empty(&sb->s_io))
-		queue_io(sb, wbc->older_than_this);
 
-	while (!list_empty(&sb->s_io)) {
-		struct inode *inode = list_entry(sb->s_io.prev,
+	if (!wbc->for_kupdate || list_empty(&bdi->b_io))
+		queue_io(bdi, wbc->older_than_this);
+
+	while (!list_empty(&bdi->b_io)) {
+		struct inode *inode = list_entry(bdi->b_io.prev,
 						struct inode, i_list);
-		struct address_space *mapping = inode->i_mapping;
-		struct backing_dev_info *bdi = mapping->backing_dev_info;
 		long pages_skipped;
 
+		/*
+		 * super block given and doesn't match, skip this inode
+		 */
+		if (sb && sb != inode->i_sb) {
+			redirty_tail(inode);
+			continue;
+		}
+
 		if (!bdi_cap_writeback_dirty(bdi)) {
 			redirty_tail(inode);
-			if (sb_is_blkdev_sb(sb)) {
+			if (is_blkdev_sb) {
 				/*
 				 * Dirty memory-backed blockdev: the ramdisk
 				 * driver does this.  Skip just this inode
@@ -499,14 +515,14 @@ static void generic_sync_sb_inodes(struct super_block *sb,
 
 		if (wbc->nonblocking && bdi_write_congested(bdi)) {
 			wbc->encountered_congestion = 1;
-			if (!sb_is_blkdev_sb(sb))
+			if (!is_blkdev_sb)
 				break;		/* Skip a congested fs */
 			requeue_io(inode);
 			continue;		/* Skip a congested blockdev */
 		}
 
 		if (wbc->bdi && bdi != wbc->bdi) {
-			if (!sb_is_blkdev_sb(sb))
+			if (!is_blkdev_sb)
 				break;		/* fs has the wrong queue */
 			requeue_io(inode);
 			continue;		/* blockdev has wrong queue */
@@ -544,13 +560,57 @@ static void generic_sync_sb_inodes(struct super_block *sb,
 			wbc->more_io = 1;
 			break;
 		}
-		if (!list_empty(&sb->s_more_io))
+		if (!list_empty(&bdi->b_more_io))
 			wbc->more_io = 1;
 	}
 
-	if (sync) {
+	spin_unlock(&inode_lock);
+	/* Leave any unwritten inodes on b_io */
+}
+
+/*
+ * Write out a superblock's list of dirty inodes.  A wait will be performed
+ * upon no inodes, all inodes or the final one, depending upon sync_mode.
+ *
+ * If older_than_this is non-NULL, then only write out inodes which
+ * had their first dirtying at a time earlier than *older_than_this.
+ *
+ * If we're a pdlfush thread, then implement pdflush collision avoidance
+ * against the entire list.
+ *
+ * If `bdi' is non-zero then we're being asked to writeback a specific queue.
+ * This function assumes that the blockdev superblock's inodes are backed by
+ * a variety of queues, so all inodes are searched.  For other superblocks,
+ * assume that all inodes are backed by the same queue.
+ *
+ * FIXME: this linear search could get expensive with many fileystems.  But
+ * how to fix?  We need to go from an address_space to all inodes which share
+ * a queue with that address_space.  (Easy: have a global "dirty superblocks"
+ * list).
+ *
+ * The inodes to be written are parked on bdi->b_io.  They are moved back onto
+ * bdi->b_dirty as they are selected for writing.  This way, none can be missed
+ * on the writer throttling path, and we get decent balancing between many
+ * throttled threads: we don't want them all piling up on inode_sync_wait.
+ */
+static void generic_sync_sb_inodes(struct super_block *sb,
+				   struct writeback_control *wbc)
+{
+	struct backing_dev_info *bdi;
+
+	if (!wbc->bdi) {
+		mutex_lock(&bdi_lock);
+		list_for_each_entry(bdi, &bdi_list, bdi_list)
+			generic_sync_bdi_inodes(bdi, wbc, sb);
+		mutex_unlock(&bdi_lock);
+	} else
+		generic_sync_bdi_inodes(wbc->bdi, wbc, sb);
+
+	if (wbc->sync_mode == WB_SYNC_ALL) {
 		struct inode *inode, *old_inode = NULL;
 
+		spin_lock(&inode_lock);
+
 		/*
 		 * Data integrity sync. Must wait for all pages under writeback,
 		 * because there may have been pages dirtied before our sync
@@ -588,10 +648,7 @@ static void generic_sync_sb_inodes(struct super_block *sb,
 		}
 		spin_unlock(&inode_lock);
 		iput(old_inode);
-	} else
-		spin_unlock(&inode_lock);
-
-	return;		/* Leave any unwritten inodes on s_io */
+	}
 }
 
 /*
@@ -599,8 +656,8 @@ static void generic_sync_sb_inodes(struct super_block *sb,
  *
  * Note:
  * We don't need to grab a reference to superblock here. If it has non-empty
- * ->s_dirty it's hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->s_dirty/s_io/s_more_io lists are all
+ * ->b_dirty it's hadn't been killed yet and kill_super() won't proceed
+ * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
  * empty. Since __sync_single_inode() regains inode_lock before it finally moves
  * inode from superblock lists we are OK.
  *
diff --git a/fs/super.c b/fs/super.c
index 2761d3e..0d22ce3 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -62,9 +62,6 @@ static struct super_block *alloc_super(struct file_system_type *type)
 			s = NULL;
 			goto out;
 		}
-		INIT_LIST_HEAD(&s->s_dirty);
-		INIT_LIST_HEAD(&s->s_io);
-		INIT_LIST_HEAD(&s->s_more_io);
 		INIT_LIST_HEAD(&s->s_files);
 		INIT_LIST_HEAD(&s->s_instances);
 		INIT_HLIST_HEAD(&s->s_anon);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 1d52425..928cd54 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -40,6 +40,8 @@ enum bdi_stat_item {
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
 struct backing_dev_info {
+	struct list_head bdi_list;
+
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
 	unsigned int capabilities; /* Device capabilities */
@@ -58,6 +60,10 @@ struct backing_dev_info {
 
 	struct device *dev;
 
+	struct list_head	b_dirty;	/* dirty inodes */
+	struct list_head	b_io;		/* parked for writeback */
+	struct list_head	b_more_io;	/* parked for more writeback */
+
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debug_dir;
 	struct dentry *debug_stats;
@@ -72,6 +78,9 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
 
+extern struct mutex bdi_lock;
+extern struct list_head bdi_list;
+
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
 		enum bdi_stat_item item, s64 amount)
 {
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 46ff7dd..56371be 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -715,7 +715,7 @@ struct posix_acl;
 
 struct inode {
 	struct hlist_node	i_hash;
-	struct list_head	i_list;
+	struct list_head	i_list;		/* backing dev IO list */
 	struct list_head	i_sb_list;
 	struct list_head	i_dentry;
 	unsigned long		i_ino;
@@ -1336,9 +1336,6 @@ struct super_block {
 	struct xattr_handler	**s_xattr;
 
 	struct list_head	s_inodes;	/* all inodes */
-	struct list_head	s_dirty;	/* dirty inodes */
-	struct list_head	s_io;		/* parked for writeback */
-	struct list_head	s_more_io;	/* parked for more writeback */
 	struct hlist_head	s_anon;		/* anonymous dentries for (nfs) exporting */
 	struct list_head	s_files;
 	/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index c86edd2..6f163e0 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -22,6 +22,8 @@ struct backing_dev_info default_backing_dev_info = {
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
 
 static struct class *bdi_class;
+DEFINE_MUTEX(bdi_lock);
+LIST_HEAD(bdi_list);
 
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
@@ -211,6 +213,10 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		goto exit;
 	}
 
+	mutex_lock(&bdi_lock);
+	list_add_tail(&bdi->bdi_list, &bdi_list);
+	mutex_unlock(&bdi_lock);
+
 	bdi->dev = dev;
 	bdi_debug_register(bdi, dev_name(dev));
 
@@ -225,9 +231,17 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
+static void bdi_remove_from_list(struct backing_dev_info *bdi)
+{
+	mutex_lock(&bdi_lock);
+	list_del(&bdi->bdi_list);
+	mutex_unlock(&bdi_lock);
+}
+
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
+		bdi_remove_from_list(bdi);
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -245,6 +259,10 @@ int bdi_init(struct backing_dev_info *bdi)
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	INIT_LIST_HEAD(&bdi->bdi_list);
+	INIT_LIST_HEAD(&bdi->b_io);
+	INIT_LIST_HEAD(&bdi->b_dirty);
+	INIT_LIST_HEAD(&bdi->b_more_io);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
 		err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -259,6 +277,8 @@ int bdi_init(struct backing_dev_info *bdi)
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
+
+		bdi_remove_from_list(bdi);
 	}
 
 	return err;
@@ -269,6 +289,10 @@ void bdi_destroy(struct backing_dev_info *bdi)
 {
 	int i;
 
+	WARN_ON(!list_empty(&bdi->b_dirty));
+	WARN_ON(!list_empty(&bdi->b_io));
+	WARN_ON(!list_empty(&bdi->b_more_io));
+
 	bdi_unregister(bdi);
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 81627eb..f8341b6 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -320,15 +320,13 @@ static void task_dirty_limit(struct task_struct *tsk, unsigned long *pdirty)
 /*
  *
  */
-static DEFINE_SPINLOCK(bdi_lock);
 static unsigned int bdi_min_ratio;
 
 int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 {
 	int ret = 0;
-	unsigned long flags;
 
-	spin_lock_irqsave(&bdi_lock, flags);
+	mutex_lock(&bdi_lock);
 	if (min_ratio > bdi->max_ratio) {
 		ret = -EINVAL;
 	} else {
@@ -340,27 +338,26 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 			ret = -EINVAL;
 		}
 	}
-	spin_unlock_irqrestore(&bdi_lock, flags);
+	mutex_unlock(&bdi_lock);
 
 	return ret;
 }
 
 int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
 {
-	unsigned long flags;
 	int ret = 0;
 
 	if (max_ratio > 100)
 		return -EINVAL;
 
-	spin_lock_irqsave(&bdi_lock, flags);
+	mutex_lock(&bdi_lock);
 	if (bdi->min_ratio > max_ratio) {
 		ret = -EINVAL;
 	} else {
 		bdi->max_ratio = max_ratio;
 		bdi->max_prop_frac = (PROP_FRAC_BASE * max_ratio) / 100;
 	}
-	spin_unlock_irqrestore(&bdi_lock, flags);
+	mutex_unlock(&bdi_lock);
 
 	return ret;
 }
-- 
1.6.4.1.207.g68ea



* [PATCH 3/7] writeback: switch to per-bdi threads for flushing data
  2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
  2009-09-11  7:34 ` [PATCH 1/7] writeback: get rid of generic_sync_sb_inodes() export Jens Axboe
  2009-09-11  7:34 ` [PATCH 2/7] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
@ 2009-09-11  7:34 ` Jens Axboe
  2009-09-11  7:34 ` [PATCH 4/7] writeback: get rid of pdflush completely Jens Axboe
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11  7:34 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, hch, tytso, akpm, jack, Jens Axboe

This gets rid of pdflush for bdi writeout and kupdated-style cleaning.
pdflush writeout suffers from a lack of locality and also requires more
threads to handle the same workload, since it has to work in a
non-blocking fashion against each queue. That also introduces lumpy
behaviour and potential request starvation, since pdflush can be starved
for queue access if others are accessing it. A sample ffsb workload that
does random writes to files is about 8% faster here on a simple SATA drive
during the benchmark phase. File layout also looks a LOT smoother in
vmstat:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  1      0 608848   2652 375372    0    0     0 71024  604    24  1 10 48 42
 0  1      0 549644   2712 433736    0    0     0 60692  505    27  1  8 48 44
 1  0      0 476928   2784 505192    0    0     4 29540  553    24  0  9 53 37
 0  1      0 457972   2808 524008    0    0     0 54876  331    16  0  4 38 58
 0  1      0 366128   2928 614284    0    0     4 92168  710    58  0 13 53 34
 0  1      0 295092   3000 684140    0    0     0 62924  572    23  0  9 53 37
 0  1      0 236592   3064 741704    0    0     4 58256  523    17  0  8 48 44
 0  1      0 165608   3132 811464    0    0     0 57460  560    21  0  8 54 38
 0  1      0 102952   3200 873164    0    0     4 74748  540    29  1 10 48 41
 0  1      0  48604   3252 926472    0    0     0 53248  469    29  0  7 47 45

where vanilla tends to fluctuate a lot in the creation phase:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  1      0 678716   5792 303380    0    0     0 74064  565    50  1 11 52 36
 1  0      0 662488   5864 319396    0    0     4   352  302   329  0  2 47 51
 0  1      0 599312   5924 381468    0    0     0 78164  516    55  0  9 51 40
 0  1      0 519952   6008 459516    0    0     4 78156  622    56  1 11 52 37
 1  1      0 436640   6092 541632    0    0     0 82244  622    54  0 11 48 41
 0  1      0 436640   6092 541660    0    0     0     8  152    39  0  0 51 49
 0  1      0 332224   6200 644252    0    0     4 102800  728    46  1 13 49 36
 1  0      0 274492   6260 701056    0    0     4 12328  459    49  0  7 50 43
 0  1      0 211220   6324 763356    0    0     0 106940  515    37  1 10 51 39
 1  0      0 160412   6376 813468    0    0     0  8224  415    43  0  6 49 45
 1  1      0  85980   6452 886556    0    0     4 113516  575    39  1 11 54 34
 0  2      0  85968   6452 886620    0    0     0  1640  158   211  0  0 46 54

A 10-disk test with btrfs performs 26% faster with per-bdi flushing. An
SSD-based writeback test on XFS performs over 20% better as well, with
the throughput being very stable around 1GB/sec, where pdflush only
manages 750MB/sec and fluctuates wildly while doing so. Random buffered
writes to many files behave a lot better as well, as do random mmap'ed
writes.

A separate thread is added to sync the super blocks. In the long term,
adding sync_supers_bdi() functionality could get rid of this thread again.
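
As a rough, hedged sketch only (the function name is invented; the real
code lives in the fs/fs-writeback.c and mm/backing-dev.c changes below),
each registered bdi ends up with a flusher thread whose main loop looks
roughly like this:

/* hedged sketch, not the actual patch code */
static int example_bdi_flusher_thread(void *data)
{
	struct bdi_writeback *wb = data;

	while (!kthread_should_stop()) {
		/* handle queued bdi_work items and periodic kupdate-style flushing */
		wb_do_writeback(wb, 0);

		/* sleep until the next periodic flush or an explicit wakeup */
		set_current_state(TASK_INTERRUPTIBLE);
		schedule_timeout(msecs_to_jiffies(dirty_writeback_interval * 10));
	}

	return 0;
}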

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/buffer.c                 |    2 +-
 fs/fs-writeback.c           |  999 ++++++++++++++++++++++++++++++-------------
 fs/super.c                  |    2 +-
 fs/sync.c                   |    2 +-
 include/linux/backing-dev.h |   55 ++-
 include/linux/fs.h          |    2 +-
 include/linux/writeback.h   |    8 +-
 mm/backing-dev.c            |  341 ++++++++++++++-
 mm/page-writeback.c         |  179 ++-------
 mm/vmscan.c                 |    2 +-
 10 files changed, 1120 insertions(+), 472 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 28f320f..90a9886 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -281,7 +281,7 @@ static void free_more_memory(void)
 	struct zone *zone;
 	int nid;
 
-	wakeup_pdflush(1024);
+	wakeup_flusher_threads(1024);
 	yield();
 
 	for_each_online_node(nid) {
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 45ad4bb..7f6dae8 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -19,6 +19,8 @@
 #include <linux/sched.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/writeback.h>
 #include <linux/blkdev.h>
 #include <linux/backing-dev.h>
@@ -27,165 +29,208 @@
 
 #define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
-/**
- * writeback_acquire - attempt to get exclusive writeback access to a device
- * @bdi: the device's backing_dev_info structure
- *
- * It is a waste of resources to have more than one pdflush thread blocked on
- * a single request queue.  Exclusion at the request_queue level is obtained
- * via a flag in the request_queue's backing_dev_info.state.
- *
- * Non-request_queue-backed address_spaces will share default_backing_dev_info,
- * unless they implement their own.  Which is somewhat inefficient, as this
- * may prevent concurrent writeback against multiple devices.
+/*
+ * Work items for the bdi_writeback threads
  */
-static int writeback_acquire(struct backing_dev_info *bdi)
+struct bdi_work {
+	struct list_head list;
+	struct list_head wait_list;
+	struct rcu_head rcu_head;
+
+	unsigned long seen;
+	atomic_t pending;
+
+	struct super_block *sb;
+	unsigned long nr_pages;
+	enum writeback_sync_modes sync_mode;
+
+	unsigned long state;
+};
+
+enum {
+	WS_USED_B = 0,
+	WS_ONSTACK_B,
+};
+
+#define WS_USED (1 << WS_USED_B)
+#define WS_ONSTACK (1 << WS_ONSTACK_B)
+
+static inline bool bdi_work_on_stack(struct bdi_work *work)
+{
+	return test_bit(WS_ONSTACK_B, &work->state);
+}
+
+static inline void bdi_work_init(struct bdi_work *work,
+				 struct writeback_control *wbc)
+{
+	INIT_RCU_HEAD(&work->rcu_head);
+	work->sb = wbc->sb;
+	work->nr_pages = wbc->nr_to_write;
+	work->sync_mode = wbc->sync_mode;
+	work->state = WS_USED;
+}
+
+static inline void bdi_work_init_on_stack(struct bdi_work *work,
+					  struct writeback_control *wbc)
 {
-	return !test_and_set_bit(BDI_pdflush, &bdi->state);
+	bdi_work_init(work, wbc);
+	work->state |= WS_ONSTACK;
 }
 
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @bdi: the device's backing_dev_info structure.
  *
- * Determine whether there is writeback in progress against a backing device.
+ * Determine whether there is writeback waiting to be handled against a
+ * backing device.
  */
 int writeback_in_progress(struct backing_dev_info *bdi)
 {
-	return test_bit(BDI_pdflush, &bdi->state);
+	return !list_empty(&bdi->work_list);
 }
 
-/**
- * writeback_release - relinquish exclusive writeback access against a device.
- * @bdi: the device's backing_dev_info structure
- */
-static void writeback_release(struct backing_dev_info *bdi)
+static void bdi_work_clear(struct bdi_work *work)
 {
-	BUG_ON(!writeback_in_progress(bdi));
-	clear_bit(BDI_pdflush, &bdi->state);
+	clear_bit(WS_USED_B, &work->state);
+	smp_mb__after_clear_bit();
+	wake_up_bit(&work->state, WS_USED_B);
 }
 
-static noinline void block_dump___mark_inode_dirty(struct inode *inode)
+static void bdi_work_free(struct rcu_head *head)
 {
-	if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) {
-		struct dentry *dentry;
-		const char *name = "?";
+	struct bdi_work *work = container_of(head, struct bdi_work, rcu_head);
 
-		dentry = d_find_alias(inode);
-		if (dentry) {
-			spin_lock(&dentry->d_lock);
-			name = (const char *) dentry->d_name.name;
-		}
-		printk(KERN_DEBUG
-		       "%s(%d): dirtied inode %lu (%s) on %s\n",
-		       current->comm, task_pid_nr(current), inode->i_ino,
-		       name, inode->i_sb->s_id);
-		if (dentry) {
-			spin_unlock(&dentry->d_lock);
-			dput(dentry);
-		}
-	}
+	if (!bdi_work_on_stack(work))
+		kfree(work);
+	else
+		bdi_work_clear(work);
 }
 
-/**
- *	__mark_inode_dirty -	internal function
- *	@inode: inode to mark
- *	@flags: what kind of dirty (i.e. I_DIRTY_SYNC)
- *	Mark an inode as dirty. Callers should use mark_inode_dirty or
- *  	mark_inode_dirty_sync.
- *
- * Put the inode on the super block's dirty list.
- *
- * CAREFUL! We mark it dirty unconditionally, but move it onto the
- * dirty list only if it is hashed or if it refers to a blockdev.
- * If it was not hashed, it will never be added to the dirty list
- * even if it is later hashed, as it will have been marked dirty already.
- *
- * In short, make sure you hash any inodes _before_ you start marking
- * them dirty.
- *
- * This function *must* be atomic for the I_DIRTY_PAGES case -
- * set_page_dirty() is called under spinlock in several places.
- *
- * Note that for blockdevs, inode->dirtied_when represents the dirtying time of
- * the block-special inode (/dev/hda1) itself.  And the ->dirtied_when field of
- * the kernel-internal blockdev inode represents the dirtying time of the
- * blockdev's pages.  This is why for I_DIRTY_PAGES we always use
- * page->mapping->host, so the page-dirtying time is recorded in the internal
- * blockdev inode.
- */
-void __mark_inode_dirty(struct inode *inode, int flags)
+static void wb_work_complete(struct bdi_work *work)
 {
-	struct super_block *sb = inode->i_sb;
+	const enum writeback_sync_modes sync_mode = work->sync_mode;
 
 	/*
-	 * Don't do this for I_DIRTY_PAGES - that doesn't actually
-	 * dirty the inode itself
+	 * For allocated work, we can clear the done/seen bit right here.
+	 * For on-stack work, we need to postpone both the clear and free
+	 * to after the RCU grace period, since the stack could be invalidated
+	 * as soon as bdi_work_clear() has done the wakeup.
 	 */
-	if (flags & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) {
-		if (sb->s_op->dirty_inode)
-			sb->s_op->dirty_inode(inode);
-	}
+	if (!bdi_work_on_stack(work))
+		bdi_work_clear(work);
+	if (sync_mode == WB_SYNC_NONE || bdi_work_on_stack(work))
+		call_rcu(&work->rcu_head, bdi_work_free);
+}
 
+static void wb_clear_pending(struct bdi_writeback *wb, struct bdi_work *work)
+{
 	/*
-	 * make sure that changes are seen by all cpus before we test i_state
-	 * -- mikulas
+	 * The caller has retrieved the work arguments from this work,
+	 * drop our reference. If this is the last ref, delete and free it
 	 */
-	smp_mb();
+	if (atomic_dec_and_test(&work->pending)) {
+		struct backing_dev_info *bdi = wb->bdi;
 
-	/* avoid the locking if we can */
-	if ((inode->i_state & flags) == flags)
-		return;
-
-	if (unlikely(block_dump))
-		block_dump___mark_inode_dirty(inode);
+		spin_lock(&bdi->wb_lock);
+		list_del_rcu(&work->list);
+		spin_unlock(&bdi->wb_lock);
 
-	spin_lock(&inode_lock);
-	if ((inode->i_state & flags) != flags) {
-		const int was_dirty = inode->i_state & I_DIRTY;
+		wb_work_complete(work);
+	}
+}
 
-		inode->i_state |= flags;
+static void bdi_queue_work(struct backing_dev_info *bdi, struct bdi_work *work)
+{
+	if (work) {
+		work->seen = bdi->wb_mask;
+		BUG_ON(!work->seen);
+		atomic_set(&work->pending, bdi->wb_cnt);
+		BUG_ON(!bdi->wb_cnt);
 
 		/*
-		 * If the inode is being synced, just update its dirty state.
-		 * The unlocker will place the inode on the appropriate
-		 * superblock list, based upon its state.
+		 * Make sure stores are seen before it appears on the list
 		 */
-		if (inode->i_state & I_SYNC)
-			goto out;
+		smp_mb();
 
-		/*
-		 * Only add valid (hashed) inodes to the superblock's
-		 * dirty list.  Add blockdev inodes as well.
-		 */
-		if (!S_ISBLK(inode->i_mode)) {
-			if (hlist_unhashed(&inode->i_hash))
-				goto out;
-		}
-		if (inode->i_state & (I_FREEING|I_CLEAR))
-			goto out;
+		spin_lock(&bdi->wb_lock);
+		list_add_tail_rcu(&work->list, &bdi->work_list);
+		spin_unlock(&bdi->wb_lock);
+	}
+
+	/*
+	 * If the default thread isn't there, make sure we add it. When
+	 * it gets created and wakes up, we'll run this work.
+	 */
+	if (unlikely(list_empty_careful(&bdi->wb_list)))
+		wake_up_process(default_backing_dev_info.wb.task);
+	else {
+		struct bdi_writeback *wb = &bdi->wb;
 
 		/*
-		 * If the inode was already on b_dirty/b_io/b_more_io, don't
-		 * reposition it (that would break b_dirty time-ordering).
+		 * If we failed allocating the bdi work item, wake up the wb
+		 * thread always. As a safety precaution, it'll flush out
+		 * everything
 		 */
-		if (!was_dirty) {
-			inode->dirtied_when = jiffies;
-			list_move(&inode->i_list,
-					&inode_to_bdi(inode)->b_dirty);
-		}
+		if (!wb_has_dirty_io(wb)) {
+			if (work)
+				wb_clear_pending(wb, work);
+		} else if (wb->task)
+			wake_up_process(wb->task);
 	}
-out:
-	spin_unlock(&inode_lock);
 }
 
-EXPORT_SYMBOL(__mark_inode_dirty);
+/*
+ * Used for on-stack allocated work items. The caller needs to wait until
+ * the wb threads have acked the work before it's safe to continue.
+ */
+static void bdi_wait_on_work_clear(struct bdi_work *work)
+{
+	wait_on_bit(&work->state, WS_USED_B, bdi_sched_wait,
+		    TASK_UNINTERRUPTIBLE);
+}
 
-static int write_inode(struct inode *inode, int sync)
+static struct bdi_work *bdi_alloc_work(struct writeback_control *wbc)
 {
-	if (inode->i_sb->s_op->write_inode && !is_bad_inode(inode))
-		return inode->i_sb->s_op->write_inode(inode, sync);
-	return 0;
+	struct bdi_work *work;
+
+	work = kmalloc(sizeof(*work), GFP_ATOMIC);
+	if (work)
+		bdi_work_init(work, wbc);
+
+	return work;
+}
+
+void bdi_start_writeback(struct writeback_control *wbc)
+{
+	const bool must_wait = wbc->sync_mode == WB_SYNC_ALL;
+	struct bdi_work work_stack, *work = NULL;
+
+	if (!must_wait)
+		work = bdi_alloc_work(wbc);
+
+	if (!work) {
+		work = &work_stack;
+		bdi_work_init_on_stack(work, wbc);
+	}
+
+	bdi_queue_work(wbc->bdi, work);
+
+	/*
+	 * If the sync mode is WB_SYNC_ALL, block waiting for the work to
+	 * complete. If not, we only need to wait for the work to be started,
+	 * if we allocated it on-stack. We use the same mechanism, if the
+	 * wait bit is set in the bdi_work struct, then threads will not
+	 * clear pending until after they are done.
+	 *
+	 * Note that work == &work_stack if must_wait is true, so we don't
+	 * need to do call_rcu() here ever, since the completion path will
+	 * have done that for us.
+	 */
+	if (must_wait || work == &work_stack) {
+		bdi_wait_on_work_clear(work);
+		if (work != &work_stack)
+			call_rcu(&work->rcu_head, bdi_work_free);
+	}
 }
 
 /*
@@ -199,16 +244,16 @@ static int write_inode(struct inode *inode, int sync)
  */
 static void redirty_tail(struct inode *inode)
 {
-	struct backing_dev_info *bdi = inode_to_bdi(inode);
+	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
 
-	if (!list_empty(&bdi->b_dirty)) {
+	if (!list_empty(&wb->b_dirty)) {
 		struct inode *tail;
 
-		tail = list_entry(bdi->b_dirty.next, struct inode, i_list);
+		tail = list_entry(wb->b_dirty.next, struct inode, i_list);
 		if (time_before(inode->dirtied_when, tail->dirtied_when))
 			inode->dirtied_when = jiffies;
 	}
-	list_move(&inode->i_list, &bdi->b_dirty);
+	list_move(&inode->i_list, &wb->b_dirty);
 }
 
 /*
@@ -216,7 +261,9 @@ static void redirty_tail(struct inode *inode)
  */
 static void requeue_io(struct inode *inode)
 {
-	list_move(&inode->i_list, &inode_to_bdi(inode)->b_more_io);
+	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
+
+	list_move(&inode->i_list, &wb->b_more_io);
 }
 
 static void inode_sync_complete(struct inode *inode)
@@ -263,52 +310,18 @@ static void move_expired_inodes(struct list_head *delaying_queue,
 /*
  * Queue all expired dirty inodes for io, eldest first.
  */
-static void queue_io(struct backing_dev_info *bdi,
-		     unsigned long *older_than_this)
+static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
 {
-	list_splice_init(&bdi->b_more_io, bdi->b_io.prev);
-	move_expired_inodes(&bdi->b_dirty, &bdi->b_io, older_than_this);
+	list_splice_init(&wb->b_more_io, wb->b_io.prev);
+	move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
 }
 
-static int sb_on_inode_list(struct super_block *sb, struct list_head *list)
-{
-	struct inode *inode;
-	int ret = 0;
-
-	spin_lock(&inode_lock);
-	list_for_each_entry(inode, list, i_list) {
-		if (inode->i_sb == sb) {
-			ret = 1;
-			break;
-		}
-	}
-	spin_unlock(&inode_lock);
-	return ret;
-}
-
-int sb_has_dirty_inodes(struct super_block *sb)
+static int write_inode(struct inode *inode, int sync)
 {
-	struct backing_dev_info *bdi;
-	int ret = 0;
-
-	/*
-	 * This is REALLY expensive right now, but it'll go away
-	 * when the bdi writeback is introduced
-	 */
-	mutex_lock(&bdi_lock);
-	list_for_each_entry(bdi, &bdi_list, bdi_list) {
-		if (sb_on_inode_list(sb, &bdi->b_dirty) ||
-		    sb_on_inode_list(sb, &bdi->b_io) ||
-		    sb_on_inode_list(sb, &bdi->b_more_io)) {
-			ret = 1;
-			break;
-		}
-	}
-	mutex_unlock(&bdi_lock);
-
-	return ret;
+	if (inode->i_sb->s_op->write_inode && !is_bad_inode(inode))
+		return inode->i_sb->s_op->write_inode(inode, sync);
+	return 0;
 }
-EXPORT_SYMBOL(sb_has_dirty_inodes);
 
 /*
  * Wait for writeback on an inode to complete.
@@ -466,20 +479,71 @@ writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
 	return ret;
 }
 
-static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
-				    struct writeback_control *wbc,
-				    struct super_block *sb)
+/*
+ * For WB_SYNC_NONE writeback, the caller does not have the sb pinned
+ * before calling writeback. So make sure that we do pin it, so it doesn't
+ * go away while we are writing inodes from it.
+ *
+ * Returns 0 if the super was successfully pinned (or pinning wasn't needed),
+ * 1 if we failed.
+ */
+static int pin_sb_for_writeback(struct writeback_control *wbc,
+				   struct inode *inode)
+{
+	struct super_block *sb = inode->i_sb;
+
+	/*
+	 * Caller must already hold the ref for this
+	 */
+	if (wbc->sync_mode == WB_SYNC_ALL) {
+		WARN_ON(!rwsem_is_locked(&sb->s_umount));
+		return 0;
+	}
+
+	spin_lock(&sb_lock);
+	sb->s_count++;
+	if (down_read_trylock(&sb->s_umount)) {
+		if (sb->s_root) {
+			spin_unlock(&sb_lock);
+			return 0;
+		}
+		/*
+		 * umounted, drop rwsem again and fall through to failure
+		 */
+		up_read(&sb->s_umount);
+	}
+
+	sb->s_count--;
+	spin_unlock(&sb_lock);
+	return 1;
+}
+
+static void unpin_sb_for_writeback(struct writeback_control *wbc,
+				   struct inode *inode)
+{
+	struct super_block *sb = inode->i_sb;
+
+	if (wbc->sync_mode == WB_SYNC_ALL)
+		return;
+
+	up_read(&sb->s_umount);
+	put_super(sb);
+}
+
+static void writeback_inodes_wb(struct bdi_writeback *wb,
+				struct writeback_control *wbc)
 {
+	struct super_block *sb = wbc->sb;
 	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
 	const unsigned long start = jiffies;	/* livelock avoidance */
 
 	spin_lock(&inode_lock);
 
-	if (!wbc->for_kupdate || list_empty(&bdi->b_io))
-		queue_io(bdi, wbc->older_than_this);
+	if (!wbc->for_kupdate || list_empty(&wb->b_io))
+		queue_io(wb, wbc->older_than_this);
 
-	while (!list_empty(&bdi->b_io)) {
-		struct inode *inode = list_entry(bdi->b_io.prev,
+	while (!list_empty(&wb->b_io)) {
+		struct inode *inode = list_entry(wb->b_io.prev,
 						struct inode, i_list);
 		long pages_skipped;
 
@@ -491,7 +555,7 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 			continue;
 		}
 
-		if (!bdi_cap_writeback_dirty(bdi)) {
+		if (!bdi_cap_writeback_dirty(wb->bdi)) {
 			redirty_tail(inode);
 			if (is_blkdev_sb) {
 				/*
@@ -513,7 +577,7 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 			continue;
 		}
 
-		if (wbc->nonblocking && bdi_write_congested(bdi)) {
+		if (wbc->nonblocking && bdi_write_congested(wb->bdi)) {
 			wbc->encountered_congestion = 1;
 			if (!is_blkdev_sb)
 				break;		/* Skip a congested fs */
@@ -521,13 +585,6 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 			continue;		/* Skip a congested blockdev */
 		}
 
-		if (wbc->bdi && bdi != wbc->bdi) {
-			if (!is_blkdev_sb)
-				break;		/* fs has the wrong queue */
-			requeue_io(inode);
-			continue;		/* blockdev has wrong queue */
-		}
-
 		/*
 		 * Was this inode dirtied after sync_sb_inodes was called?
 		 * This keeps sync from extra jobs and livelock.
@@ -535,16 +592,16 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 		if (inode_dirtied_after(inode, start))
 			break;
 
-		/* Is another pdflush already flushing this queue? */
-		if (current_is_pdflush() && !writeback_acquire(bdi))
-			break;
+		if (pin_sb_for_writeback(wbc, inode)) {
+			requeue_io(inode);
+			continue;
+		}
 
 		BUG_ON(inode->i_state & (I_FREEING | I_CLEAR));
 		__iget(inode);
 		pages_skipped = wbc->pages_skipped;
 		writeback_single_inode(inode, wbc);
-		if (current_is_pdflush())
-			writeback_release(bdi);
+		unpin_sb_for_writeback(wbc, inode);
 		if (wbc->pages_skipped != pages_skipped) {
 			/*
 			 * writeback is not making progress due to locked
@@ -560,7 +617,7 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 			wbc->more_io = 1;
 			break;
 		}
-		if (!list_empty(&bdi->b_more_io))
+		if (!list_empty(&wb->b_more_io))
 			wbc->more_io = 1;
 	}
 
@@ -568,139 +625,500 @@ static void generic_sync_bdi_inodes(struct backing_dev_info *bdi,
 	/* Leave any unwritten inodes on b_io */
 }
 
+void writeback_inodes_wbc(struct writeback_control *wbc)
+{
+	struct backing_dev_info *bdi = wbc->bdi;
+
+	writeback_inodes_wb(&bdi->wb, wbc);
+}
+
 /*
- * Write out a superblock's list of dirty inodes.  A wait will be performed
- * upon no inodes, all inodes or the final one, depending upon sync_mode.
- *
- * If older_than_this is non-NULL, then only write out inodes which
- * had their first dirtying at a time earlier than *older_than_this.
- *
- * If we're a pdlfush thread, then implement pdflush collision avoidance
- * against the entire list.
+ * The maximum number of pages to writeout in a single bdi flush/kupdate
+ * operation.  We do this so we don't hold I_SYNC against an inode for
+ * enormous amounts of time, which would block a userspace task which has
+ * been forced to throttle against that inode.  Also, the code reevaluates
+ * the dirty each time it has written this many pages.
+ */
+#define MAX_WRITEBACK_PAGES     1024
+
+static inline bool over_bground_thresh(void)
+{
+	unsigned long background_thresh, dirty_thresh;
+
+	get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
+
+	return (global_page_state(NR_FILE_DIRTY) +
+		global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
+}
+
+/*
+ * Explicit flushing or periodic writeback of "old" data.
  *
- * If `bdi' is non-zero then we're being asked to writeback a specific queue.
- * This function assumes that the blockdev superblock's inodes are backed by
- * a variety of queues, so all inodes are searched.  For other superblocks,
- * assume that all inodes are backed by the same queue.
+ * Define "old": the first time one of an inode's pages is dirtied, we mark the
+ * dirtying-time in the inode's address_space.  So this periodic writeback code
+ * just walks the superblock inode list, writing back any inodes which are
+ * older than a specific point in time.
  *
- * FIXME: this linear search could get expensive with many fileystems.  But
- * how to fix?  We need to go from an address_space to all inodes which share
- * a queue with that address_space.  (Easy: have a global "dirty superblocks"
- * list).
+ * Try to run once per dirty_writeback_interval.  But if a writeback event
+ * takes longer than a dirty_writeback_interval interval, then leave a
+ * one-second gap.
  *
- * The inodes to be written are parked on bdi->b_io.  They are moved back onto
- * bdi->b_dirty as they are selected for writing.  This way, none can be missed
- * on the writer throttling path, and we get decent balancing between many
- * throttled threads: we don't want them all piling up on inode_sync_wait.
+ * older_than_this takes precedence over nr_to_write.  So we'll only write back
+ * all dirty pages if they are all attached to "old" mappings.
  */
-static void generic_sync_sb_inodes(struct super_block *sb,
-				   struct writeback_control *wbc)
+static long wb_writeback(struct bdi_writeback *wb, long nr_pages,
+			 struct super_block *sb,
+			 enum writeback_sync_modes sync_mode, int for_kupdate)
 {
-	struct backing_dev_info *bdi;
-
-	if (!wbc->bdi) {
-		mutex_lock(&bdi_lock);
-		list_for_each_entry(bdi, &bdi_list, bdi_list)
-			generic_sync_bdi_inodes(bdi, wbc, sb);
-		mutex_unlock(&bdi_lock);
-	} else
-		generic_sync_bdi_inodes(wbc->bdi, wbc, sb);
+	struct writeback_control wbc = {
+		.bdi			= wb->bdi,
+		.sb			= sb,
+		.sync_mode		= sync_mode,
+		.older_than_this	= NULL,
+		.for_kupdate		= for_kupdate,
+		.range_cyclic		= 1,
+	};
+	unsigned long oldest_jif;
+	long wrote = 0;
 
-	if (wbc->sync_mode == WB_SYNC_ALL) {
-		struct inode *inode, *old_inode = NULL;
+	if (wbc.for_kupdate) {
+		wbc.older_than_this = &oldest_jif;
+		oldest_jif = jiffies -
+				msecs_to_jiffies(dirty_expire_interval * 10);
+	}
 
-		spin_lock(&inode_lock);
+	for (;;) {
+		/*
+		 * Don't flush anything for non-integrity writeback where
+		 * no nr_pages was given
+		 */
+		if (!for_kupdate && nr_pages <= 0 && sync_mode == WB_SYNC_NONE)
+			break;
 
 		/*
-		 * Data integrity sync. Must wait for all pages under writeback,
-		 * because there may have been pages dirtied before our sync
-		 * call, but which had writeout started before we write it out.
-		 * In which case, the inode may not be on the dirty list, but
-		 * we still have to wait for that writeout.
+		 * If no specific pages were given and this is just a
+		 * periodic background writeout and we are below the
+		 * background dirty threshold, don't do anything
 		 */
-		list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
-			struct address_space *mapping;
+		if (for_kupdate && nr_pages <= 0 && !over_bground_thresh())
+			break;
 
-			if (inode->i_state &
-					(I_FREEING|I_CLEAR|I_WILL_FREE|I_NEW))
-				continue;
-			mapping = inode->i_mapping;
-			if (mapping->nrpages == 0)
+		wbc.more_io = 0;
+		wbc.encountered_congestion = 0;
+		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
+		wbc.pages_skipped = 0;
+		writeback_inodes_wb(wb, &wbc);
+		nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+
+		/*
+		 * If we ran out of stuff to write, bail unless more_io got set
+		 */
+		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
+			if (wbc.more_io && !wbc.for_kupdate)
 				continue;
-			__iget(inode);
-			spin_unlock(&inode_lock);
+			break;
+		}
+	}
+
+	return wrote;
+}
+
+/*
+ * Return the next bdi_work struct that hasn't been processed by this
+ * wb thread yet
+ */
+static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
+					   struct bdi_writeback *wb)
+{
+	struct bdi_work *work, *ret = NULL;
+
+	rcu_read_lock();
+
+	list_for_each_entry_rcu(work, &bdi->work_list, list) {
+		if (!test_and_clear_bit(wb->nr, &work->seen))
+			continue;
+
+		ret = work;
+		break;
+	}
+
+	rcu_read_unlock();
+	return ret;
+}
+
+static long wb_check_old_data_flush(struct bdi_writeback *wb)
+{
+	unsigned long expired;
+	long nr_pages;
+
+	expired = wb->last_old_flush +
+			msecs_to_jiffies(dirty_writeback_interval * 10);
+	if (time_before(jiffies, expired))
+		return 0;
+
+	wb->last_old_flush = jiffies;
+	nr_pages = global_page_state(NR_FILE_DIRTY) +
+			global_page_state(NR_UNSTABLE_NFS) +
+			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
+
+	if (nr_pages)
+		return wb_writeback(wb, nr_pages, NULL, WB_SYNC_NONE, 1);
+
+	return 0;
+}
+
+/*
+ * Retrieve work items and do the writeback they describe
+ */
+long wb_do_writeback(struct bdi_writeback *wb, int force_wait)
+{
+	struct backing_dev_info *bdi = wb->bdi;
+	struct bdi_work *work;
+	long nr_pages, wrote = 0;
+
+	while ((work = get_next_work_item(bdi, wb)) != NULL) {
+		enum writeback_sync_modes sync_mode;
+
+		nr_pages = work->nr_pages;
+
+		/*
+		 * Override sync mode, in case we must wait for completion
+		 */
+		if (force_wait)
+			work->sync_mode = sync_mode = WB_SYNC_ALL;
+		else
+			sync_mode = work->sync_mode;
+
+		/*
+		 * If this isn't a data integrity operation, just notify
+		 * that we have seen this work and we are now starting it.
+		 */
+		if (sync_mode == WB_SYNC_NONE)
+			wb_clear_pending(wb, work);
+
+		wrote += wb_writeback(wb, nr_pages, work->sb, sync_mode, 0);
+
+		/*
+		 * This is a data integrity writeback, so only do the
+		 * notification when we have completed the work.
+		 */
+		if (sync_mode == WB_SYNC_ALL)
+			wb_clear_pending(wb, work);
+	}
+
+	/*
+	 * Check for periodic writeback, kupdated() style
+	 */
+	wrote += wb_check_old_data_flush(wb);
+
+	return wrote;
+}
+
+/*
+ * Handle writeback of dirty data for the device backed by this bdi. Also
+ * wakes up periodically and does kupdated style flushing.
+ */
+int bdi_writeback_task(struct bdi_writeback *wb)
+{
+	unsigned long last_active = jiffies;
+	unsigned long wait_jiffies = -1UL;
+	long pages_written;
+
+	while (!kthread_should_stop()) {
+		pages_written = wb_do_writeback(wb, 0);
+
+		if (pages_written)
+			last_active = jiffies;
+		else if (wait_jiffies != -1UL) {
+			unsigned long max_idle;
+
 			/*
-			 * We hold a reference to 'inode' so it couldn't have
-			 * been removed from s_inodes list while we dropped the
-			 * inode_lock.  We cannot iput the inode now as we can
-			 * be holding the last reference and we cannot iput it
-			 * under inode_lock. So we keep the reference and iput
-			 * it later.
+			 * Longest period of inactivity that we tolerate. If we
+			 * see dirty data again later, the task will get
+			 * recreated automatically.
 			 */
-			iput(old_inode);
-			old_inode = inode;
+			max_idle = max(5UL * 60 * HZ, wait_jiffies);
+			if (time_after(jiffies, max_idle + last_active))
+				break;
+		}
+
+		wait_jiffies = msecs_to_jiffies(dirty_writeback_interval * 10);
+		set_current_state(TASK_INTERRUPTIBLE);
+		schedule_timeout(wait_jiffies);
+		try_to_freeze();
+	}
+
+	return 0;
+}
+
+/*
+ * Schedule writeback for all backing devices. Expensive! If this is a data
+ * integrity operation, writeback will be complete when this returns. If
+ * we are simply called for WB_SYNC_NONE, then writeback will merely be
+ * scheduled to run.
+ */
+static void bdi_writeback_all(struct writeback_control *wbc)
+{
+	const bool must_wait = wbc->sync_mode == WB_SYNC_ALL;
+	struct backing_dev_info *bdi;
+	struct bdi_work *work;
+	LIST_HEAD(list);
+
+restart:
+	spin_lock(&bdi_lock);
+
+	list_for_each_entry(bdi, &bdi_list, bdi_list) {
+		struct bdi_work *work;
+
+		if (!bdi_has_dirty_io(bdi))
+			continue;
 
-			filemap_fdatawait(mapping);
+		/*
+		 * If work allocation fails, do the writes inline. We drop
+		 * the lock and restart the list writeout. This should be OK,
+		 * since this happens rarely and because the writeout should
+		 * eventually make more free memory available.
+		 */
+		work = bdi_alloc_work(wbc);
+		if (!work) {
+			struct writeback_control __wbc;
 
-			cond_resched();
+			/*
+			 * Not a data integrity writeout, just continue
+			 */
+			if (!must_wait)
+				continue;
 
-			spin_lock(&inode_lock);
+			spin_unlock(&bdi_lock);
+			__wbc = *wbc;
+			__wbc.bdi = bdi;
+			writeback_inodes_wbc(&__wbc);
+			goto restart;
 		}
-		spin_unlock(&inode_lock);
-		iput(old_inode);
+		if (must_wait)
+			list_add_tail(&work->wait_list, &list);
+
+		bdi_queue_work(bdi, work);
+	}
+
+	spin_unlock(&bdi_lock);
+
+	/*
+	 * If this is for WB_SYNC_ALL, wait for pending work to complete
+	 * before returning.
+	 */
+	while (!list_empty(&list)) {
+		work = list_entry(list.next, struct bdi_work, wait_list);
+		list_del(&work->wait_list);
+		bdi_wait_on_work_clear(work);
+		call_rcu(&work->rcu_head, bdi_work_free);
 	}
 }
 
 /*
- * Start writeback of dirty pagecache data against all unlocked inodes.
+ * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
+ * the whole world.
+ */
+void wakeup_flusher_threads(long nr_pages)
+{
+	struct writeback_control wbc = {
+		.sync_mode	= WB_SYNC_NONE,
+		.older_than_this = NULL,
+		.range_cyclic	= 1,
+	};
+
+	if (nr_pages == 0)
+		nr_pages = global_page_state(NR_FILE_DIRTY) +
+				global_page_state(NR_UNSTABLE_NFS);
+	wbc.nr_to_write = nr_pages;
+	bdi_writeback_all(&wbc);
+}
+
+static noinline void block_dump___mark_inode_dirty(struct inode *inode)
+{
+	if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) {
+		struct dentry *dentry;
+		const char *name = "?";
+
+		dentry = d_find_alias(inode);
+		if (dentry) {
+			spin_lock(&dentry->d_lock);
+			name = (const char *) dentry->d_name.name;
+		}
+		printk(KERN_DEBUG
+		       "%s(%d): dirtied inode %lu (%s) on %s\n",
+		       current->comm, task_pid_nr(current), inode->i_ino,
+		       name, inode->i_sb->s_id);
+		if (dentry) {
+			spin_unlock(&dentry->d_lock);
+			dput(dentry);
+		}
+	}
+}
+
+/**
+ *	__mark_inode_dirty -	internal function
+ *	@inode: inode to mark
+ *	@flags: what kind of dirty (i.e. I_DIRTY_SYNC)
+ *	Mark an inode as dirty. Callers should use mark_inode_dirty or
+ *  	mark_inode_dirty_sync.
  *
- * Note:
- * We don't need to grab a reference to superblock here. If it has non-empty
- * ->b_dirty it's hadn't been killed yet and kill_super() won't proceed
- * past sync_inodes_sb() until the ->b_dirty/b_io/b_more_io lists are all
- * empty. Since __sync_single_inode() regains inode_lock before it finally moves
- * inode from superblock lists we are OK.
+ * Put the inode on the super block's dirty list.
+ *
+ * CAREFUL! We mark it dirty unconditionally, but move it onto the
+ * dirty list only if it is hashed or if it refers to a blockdev.
+ * If it was not hashed, it will never be added to the dirty list
+ * even if it is later hashed, as it will have been marked dirty already.
+ *
+ * In short, make sure you hash any inodes _before_ you start marking
+ * them dirty.
  *
- * If `older_than_this' is non-zero then only flush inodes which have a
- * flushtime older than *older_than_this.
+ * This function *must* be atomic for the I_DIRTY_PAGES case -
+ * set_page_dirty() is called under spinlock in several places.
  *
- * If `bdi' is non-zero then we will scan the first inode against each
- * superblock until we find the matching ones.  One group will be the dirty
- * inodes against a filesystem.  Then when we hit the dummy blockdev superblock,
- * sync_sb_inodes will seekout the blockdev which matches `bdi'.  Maybe not
- * super-efficient but we're about to do a ton of I/O...
+ * Note that for blockdevs, inode->dirtied_when represents the dirtying time of
+ * the block-special inode (/dev/hda1) itself.  And the ->dirtied_when field of
+ * the kernel-internal blockdev inode represents the dirtying time of the
+ * blockdev's pages.  This is why for I_DIRTY_PAGES we always use
+ * page->mapping->host, so the page-dirtying time is recorded in the internal
+ * blockdev inode.
  */
-void
-writeback_inodes(struct writeback_control *wbc)
+void __mark_inode_dirty(struct inode *inode, int flags)
 {
-	struct super_block *sb;
+	struct super_block *sb = inode->i_sb;
 
-	might_sleep();
-	spin_lock(&sb_lock);
-restart:
-	list_for_each_entry_reverse(sb, &super_blocks, s_list) {
-		if (sb_has_dirty_inodes(sb)) {
-			/* we're making our own get_super here */
-			sb->s_count++;
-			spin_unlock(&sb_lock);
-			/*
-			 * If we can't get the readlock, there's no sense in
-			 * waiting around, most of the time the FS is going to
-			 * be unmounted by the time it is released.
-			 */
-			if (down_read_trylock(&sb->s_umount)) {
-				if (sb->s_root)
-					generic_sync_sb_inodes(sb, wbc);
-				up_read(&sb->s_umount);
-			}
-			spin_lock(&sb_lock);
-			if (__put_super_and_need_restart(sb))
-				goto restart;
+	/*
+	 * Don't do this for I_DIRTY_PAGES - that doesn't actually
+	 * dirty the inode itself
+	 */
+	if (flags & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) {
+		if (sb->s_op->dirty_inode)
+			sb->s_op->dirty_inode(inode);
+	}
+
+	/*
+	 * make sure that changes are seen by all cpus before we test i_state
+	 * -- mikulas
+	 */
+	smp_mb();
+
+	/* avoid the locking if we can */
+	if ((inode->i_state & flags) == flags)
+		return;
+
+	if (unlikely(block_dump))
+		block_dump___mark_inode_dirty(inode);
+
+	spin_lock(&inode_lock);
+	if ((inode->i_state & flags) != flags) {
+		const int was_dirty = inode->i_state & I_DIRTY;
+
+		inode->i_state |= flags;
+
+		/*
+		 * If the inode is being synced, just update its dirty state.
+		 * The unlocker will place the inode on the appropriate
+		 * superblock list, based upon its state.
+		 */
+		if (inode->i_state & I_SYNC)
+			goto out;
+
+		/*
+		 * Only add valid (hashed) inodes to the superblock's
+		 * dirty list.  Add blockdev inodes as well.
+		 */
+		if (!S_ISBLK(inode->i_mode)) {
+			if (hlist_unhashed(&inode->i_hash))
+				goto out;
+		}
+		if (inode->i_state & (I_FREEING|I_CLEAR))
+			goto out;
+
+		/*
+		 * If the inode was already on b_dirty/b_io/b_more_io, don't
+		 * reposition it (that would break b_dirty time-ordering).
+		 */
+		if (!was_dirty) {
+			struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
+
+			inode->dirtied_when = jiffies;
+			list_move(&inode->i_list, &wb->b_dirty);
 		}
-		if (wbc->nr_to_write <= 0)
-			break;
 	}
-	spin_unlock(&sb_lock);
+out:
+	spin_unlock(&inode_lock);
+}
+EXPORT_SYMBOL(__mark_inode_dirty);
+
+/*
+ * Write out a superblock's list of dirty inodes.  A wait will be performed
+ * upon no inodes, all inodes or the final one, depending upon sync_mode.
+ *
+ * If older_than_this is non-NULL, then only write out inodes which
+ * had their first dirtying at a time earlier than *older_than_this.
+ *
+ * If we're a pdflush thread, then implement pdflush collision avoidance
+ * against the entire list.
+ *
+ * If `bdi' is non-zero then we're being asked to writeback a specific queue.
+ * This function assumes that the blockdev superblock's inodes are backed by
+ * a variety of queues, so all inodes are searched.  For other superblocks,
+ * assume that all inodes are backed by the same queue.
+ *
+ * The inodes to be written are parked on bdi->b_io.  They are moved back onto
+ * bdi->b_dirty as they are selected for writing.  This way, none can be missed
+ * on the writer throttling path, and we get decent balancing between many
+ * throttled threads: we don't want them all piling up on inode_sync_wait.
+ */
+static void wait_sb_inodes(struct writeback_control *wbc)
+{
+	struct inode *inode, *old_inode = NULL;
+
+	/*
+	 * We need to be protected against the filesystem going from
+	 * r/o to r/w or vice versa.
+	 */
+	WARN_ON(!rwsem_is_locked(&wbc->sb->s_umount));
+
+	spin_lock(&inode_lock);
+
+	/*
+	 * Data integrity sync. Must wait for all pages under writeback,
+	 * because there may have been pages dirtied before our sync
+	 * call, but which had writeout started before we write it out.
+	 * In which case, the inode may not be on the dirty list, but
+	 * we still have to wait for that writeout.
+	 */
+	list_for_each_entry(inode, &wbc->sb->s_inodes, i_sb_list) {
+		struct address_space *mapping;
+
+		if (inode->i_state & (I_FREEING|I_CLEAR|I_WILL_FREE|I_NEW))
+			continue;
+		mapping = inode->i_mapping;
+		if (mapping->nrpages == 0)
+			continue;
+		__iget(inode);
+		spin_unlock(&inode_lock);
+		/*
+		 * We hold a reference to 'inode' so it couldn't have
+		 * been removed from s_inodes list while we dropped the
+		 * inode_lock.  We cannot iput the inode now as we can
+		 * be holding the last reference and we cannot iput it
+		 * under inode_lock. So we keep the reference and iput
+		 * it later.
+		 */
+		iput(old_inode);
+		old_inode = inode;
+
+		filemap_fdatawait(mapping);
+
+		cond_resched();
+
+		spin_lock(&inode_lock);
+	}
+	spin_unlock(&inode_lock);
+	iput(old_inode);
 }
 
 /**
@@ -715,6 +1133,7 @@ restart:
 long writeback_inodes_sb(struct super_block *sb)
 {
 	struct writeback_control wbc = {
+		.sb		= sb,
 		.sync_mode	= WB_SYNC_NONE,
 		.range_start	= 0,
 		.range_end	= LLONG_MAX,
@@ -727,7 +1146,7 @@ long writeback_inodes_sb(struct super_block *sb)
 			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
 
 	wbc.nr_to_write = nr_to_write;
-	generic_sync_sb_inodes(sb, &wbc);
+	bdi_writeback_all(&wbc);
 	return nr_to_write - wbc.nr_to_write;
 }
 EXPORT_SYMBOL(writeback_inodes_sb);
@@ -742,6 +1161,7 @@ EXPORT_SYMBOL(writeback_inodes_sb);
 long sync_inodes_sb(struct super_block *sb)
 {
 	struct writeback_control wbc = {
+		.sb		= sb,
 		.sync_mode	= WB_SYNC_ALL,
 		.range_start	= 0,
 		.range_end	= LLONG_MAX,
@@ -749,7 +1169,8 @@ long sync_inodes_sb(struct super_block *sb)
 	long nr_to_write = LONG_MAX; /* doesn't actually matter */
 
 	wbc.nr_to_write = nr_to_write;
-	generic_sync_sb_inodes(sb, &wbc);
+	bdi_writeback_all(&wbc);
+	wait_sb_inodes(&wbc);
 	return nr_to_write - wbc.nr_to_write;
 }
 EXPORT_SYMBOL(sync_inodes_sb);
diff --git a/fs/super.c b/fs/super.c
index 0d22ce3..9cda337 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -168,7 +168,7 @@ int __put_super_and_need_restart(struct super_block *sb)
  *	Drops a temporary reference, frees superblock if there's no
  *	references left.
  */
-static void put_super(struct super_block *sb)
+void put_super(struct super_block *sb)
 {
 	spin_lock(&sb_lock);
 	__put_super(sb);
diff --git a/fs/sync.c b/fs/sync.c
index 66f2104..103cc7f 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -120,7 +120,7 @@ restart:
  */
 SYSCALL_DEFINE0(sync)
 {
-	wakeup_pdflush(0);
+	wakeup_flusher_threads(0);
 	sync_filesystems(0);
 	sync_filesystems(1);
 	if (unlikely(laptop_mode))
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 928cd54..d045f5f 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -13,6 +13,8 @@
 #include <linux/proportions.h>
 #include <linux/kernel.h>
 #include <linux/fs.h>
+#include <linux/sched.h>
+#include <linux/writeback.h>
 #include <asm/atomic.h>
 
 struct page;
@@ -23,7 +25,8 @@ struct dentry;
  * Bits in backing_dev_info.state
  */
 enum bdi_state {
-	BDI_pdflush,		/* A pdflush thread is working this device */
+	BDI_pending,		/* On its way to being activated */
+	BDI_wb_alloc,		/* Default embedded wb allocated */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
 	BDI_unused,		/* Available bits start here */
@@ -39,9 +42,22 @@ enum bdi_stat_item {
 
 #define BDI_STAT_BATCH (8*(1+ilog2(nr_cpu_ids)))
 
+struct bdi_writeback {
+	struct list_head list;			/* hangs off the bdi */
+
+	struct backing_dev_info *bdi;		/* our parent bdi */
+	unsigned int nr;
+
+	unsigned long last_old_flush;		/* last old data flush */
+
+	struct task_struct	*task;		/* writeback task */
+	struct list_head	b_dirty;	/* dirty inodes */
+	struct list_head	b_io;		/* parked for writeback */
+	struct list_head	b_more_io;	/* parked for more writeback */
+};
+
 struct backing_dev_info {
 	struct list_head bdi_list;
-
 	unsigned long ra_pages;	/* max readahead in PAGE_CACHE_SIZE units */
 	unsigned long state;	/* Always use atomic bitops on this */
 	unsigned int capabilities; /* Device capabilities */
@@ -58,11 +74,15 @@ struct backing_dev_info {
 	unsigned int min_ratio;
 	unsigned int max_ratio, max_prop_frac;
 
-	struct device *dev;
+	struct bdi_writeback wb;  /* default writeback info for this bdi */
+	spinlock_t wb_lock;	  /* protects update side of wb_list */
+	struct list_head wb_list; /* the flusher threads hanging off this bdi */
+	unsigned long wb_mask;	  /* bitmask of registered tasks */
+	unsigned int wb_cnt;	  /* number of registered tasks */
 
-	struct list_head	b_dirty;	/* dirty inodes */
-	struct list_head	b_io;		/* parked for writeback */
-	struct list_head	b_more_io;	/* parked for more writeback */
+	struct list_head work_list;
+
+	struct device *dev;
 
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debug_dir;
@@ -77,10 +97,20 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...);
 int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev);
 void bdi_unregister(struct backing_dev_info *bdi);
+void bdi_start_writeback(struct writeback_control *wbc);
+int bdi_writeback_task(struct bdi_writeback *wb);
+int bdi_has_dirty_io(struct backing_dev_info *bdi);
 
-extern struct mutex bdi_lock;
+extern spinlock_t bdi_lock;
 extern struct list_head bdi_list;
 
+static inline int wb_has_dirty_io(struct bdi_writeback *wb)
+{
+	return !list_empty(&wb->b_dirty) ||
+	       !list_empty(&wb->b_io) ||
+	       !list_empty(&wb->b_more_io);
+}
+
 static inline void __add_bdi_stat(struct backing_dev_info *bdi,
 		enum bdi_stat_item item, s64 amount)
 {
@@ -270,6 +300,11 @@ static inline bool bdi_cap_swap_backed(struct backing_dev_info *bdi)
 	return bdi->capabilities & BDI_CAP_SWAP_BACKED;
 }
 
+static inline bool bdi_cap_flush_forker(struct backing_dev_info *bdi)
+{
+	return bdi == &default_backing_dev_info;
+}
+
 static inline bool mapping_cap_writeback_dirty(struct address_space *mapping)
 {
 	return bdi_cap_writeback_dirty(mapping->backing_dev_info);
@@ -285,4 +320,10 @@ static inline bool mapping_cap_swap_backed(struct address_space *mapping)
 	return bdi_cap_swap_backed(mapping->backing_dev_info);
 }
 
+static inline int bdi_sched_wait(void *word)
+{
+	schedule();
+	return 0;
+}
+
 #endif		/* _LINUX_BACKING_DEV_H */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 56371be..26da98f 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1786,6 +1786,7 @@ extern int get_sb_pseudo(struct file_system_type *, char *,
 	struct vfsmount *mnt);
 extern void simple_set_mnt(struct vfsmount *mnt, struct super_block *sb);
 int __put_super_and_need_restart(struct super_block *sb);
+void put_super(struct super_block *sb);
 
 /* Alas, no aliases. Too much hassle with bringing module.h everywhere */
 #define fops_get(fops) \
@@ -2182,7 +2183,6 @@ extern int bdev_read_only(struct block_device *);
 extern int set_blocksize(struct block_device *, int);
 extern int sb_set_blocksize(struct super_block *, int);
 extern int sb_min_blocksize(struct super_block *, int);
-extern int sb_has_dirty_inodes(struct super_block *);
 
 extern int generic_file_mmap(struct file *, struct vm_area_struct *);
 extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 0703929..cef7552 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -40,6 +40,8 @@ enum writeback_sync_modes {
 struct writeback_control {
 	struct backing_dev_info *bdi;	/* If !NULL, only write back this
 					   queue */
+	struct super_block *sb;		/* if !NULL, only write inodes from
+					   this super_block */
 	enum writeback_sync_modes sync_mode;
 	unsigned long *older_than_this;	/* If !NULL, only write back inodes
 					   older than this */
@@ -76,10 +78,13 @@ struct writeback_control {
 /*
  * fs/fs-writeback.c
  */	
-void writeback_inodes(struct writeback_control *wbc);
+struct bdi_writeback;
 int inode_wait(void *);
 long writeback_inodes_sb(struct super_block *);
 long sync_inodes_sb(struct super_block *);
+void writeback_inodes_wbc(struct writeback_control *wbc);
+long wb_do_writeback(struct bdi_writeback *wb, int force_wait);
+void wakeup_flusher_threads(long nr_pages);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)
@@ -99,7 +104,6 @@ static inline void inode_sync_wait(struct inode *inode)
 /*
  * mm/page-writeback.c
  */
-int wakeup_pdflush(long nr_pages);
 void laptop_io_completion(void);
 void laptop_sync_completion(void);
 void throttle_vm_writeout(gfp_t gfp_mask);
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 6f163e0..7f3fa79 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -1,8 +1,11 @@
 
 #include <linux/wait.h>
 #include <linux/backing-dev.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
 #include <linux/fs.h>
 #include <linux/pagemap.h>
+#include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/module.h>
 #include <linux/writeback.h>
@@ -22,8 +25,18 @@ struct backing_dev_info default_backing_dev_info = {
 EXPORT_SYMBOL_GPL(default_backing_dev_info);
 
 static struct class *bdi_class;
-DEFINE_MUTEX(bdi_lock);
+DEFINE_SPINLOCK(bdi_lock);
 LIST_HEAD(bdi_list);
+LIST_HEAD(bdi_pending_list);
+
+static struct task_struct *sync_supers_tsk;
+static struct timer_list sync_supers_timer;
+
+static int bdi_sync_supers(void *);
+static void sync_supers_timer_fn(unsigned long);
+static void arm_supers_timer(void);
+
+static void bdi_add_default_flusher_task(struct backing_dev_info *bdi);
 
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
@@ -187,6 +200,13 @@ static int __init default_bdi_init(void)
 {
 	int err;
 
+	sync_supers_tsk = kthread_run(bdi_sync_supers, NULL, "sync_supers");
+	BUG_ON(IS_ERR(sync_supers_tsk));
+
+	init_timer(&sync_supers_timer);
+	setup_timer(&sync_supers_timer, sync_supers_timer_fn, 0);
+	arm_supers_timer();
+
 	err = bdi_init(&default_backing_dev_info);
 	if (!err)
 		bdi_register(&default_backing_dev_info, NULL, "default");
@@ -195,6 +215,242 @@ static int __init default_bdi_init(void)
 }
 subsys_initcall(default_bdi_init);
 
+static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
+{
+	memset(wb, 0, sizeof(*wb));
+
+	wb->bdi = bdi;
+	wb->last_old_flush = jiffies;
+	INIT_LIST_HEAD(&wb->b_dirty);
+	INIT_LIST_HEAD(&wb->b_io);
+	INIT_LIST_HEAD(&wb->b_more_io);
+}
+
+static void bdi_task_init(struct backing_dev_info *bdi,
+			  struct bdi_writeback *wb)
+{
+	struct task_struct *tsk = current;
+
+	spin_lock(&bdi->wb_lock);
+	list_add_tail_rcu(&wb->list, &bdi->wb_list);
+	spin_unlock(&bdi->wb_lock);
+
+	tsk->flags |= PF_FLUSHER | PF_SWAPWRITE;
+	set_freezable();
+
+	/*
+	 * Our parent may run at a different priority, just set us to normal
+	 */
+	set_user_nice(tsk, 0);
+}
+
+static int bdi_start_fn(void *ptr)
+{
+	struct bdi_writeback *wb = ptr;
+	struct backing_dev_info *bdi = wb->bdi;
+	int ret;
+
+	/*
+	 * Add us to the active bdi_list
+	 */
+	spin_lock(&bdi_lock);
+	list_add(&bdi->bdi_list, &bdi_list);
+	spin_unlock(&bdi_lock);
+
+	bdi_task_init(bdi, wb);
+
+	/*
+	 * Clear pending bit and wakeup anybody waiting to tear us down
+	 */
+	clear_bit(BDI_pending, &bdi->state);
+	smp_mb__after_clear_bit();
+	wake_up_bit(&bdi->state, BDI_pending);
+
+	ret = bdi_writeback_task(wb);
+
+	/*
+	 * Remove us from the list
+	 */
+	spin_lock(&bdi->wb_lock);
+	list_del_rcu(&wb->list);
+	spin_unlock(&bdi->wb_lock);
+
+	/*
+	 * Flush any work that raced with us exiting. No new work
+	 * will be added, since this bdi isn't discoverable anymore.
+	 */
+	if (!list_empty(&bdi->work_list))
+		wb_do_writeback(wb, 1);
+
+	wb->task = NULL;
+	return ret;
+}
+
+int bdi_has_dirty_io(struct backing_dev_info *bdi)
+{
+	return wb_has_dirty_io(&bdi->wb);
+}
+
+static void bdi_flush_io(struct backing_dev_info *bdi)
+{
+	struct writeback_control wbc = {
+		.bdi			= bdi,
+		.sync_mode		= WB_SYNC_NONE,
+		.older_than_this	= NULL,
+		.range_cyclic		= 1,
+		.nr_to_write		= 1024,
+	};
+
+	writeback_inodes_wbc(&wbc);
+}
+
+/*
+ * kupdated() used to do this. We cannot do it from the bdi_forker_task()
+ * or we risk deadlocking on ->s_umount. The longer term solution would be
+ * to implement sync_supers_bdi() or similar and simply do it from the
+ * bdi writeback tasks individually.
+ */
+static int bdi_sync_supers(void *unused)
+{
+	set_user_nice(current, 0);
+
+	while (!kthread_should_stop()) {
+		set_current_state(TASK_INTERRUPTIBLE);
+		schedule();
+
+		/*
+		 * Do this periodically, like kupdated() did before.
+		 */
+		sync_supers();
+	}
+
+	return 0;
+}
+
+static void arm_supers_timer(void)
+{
+	unsigned long next;
+
+	next = msecs_to_jiffies(dirty_writeback_interval * 10) + jiffies;
+	mod_timer(&sync_supers_timer, round_jiffies_up(next));
+}
+
+static void sync_supers_timer_fn(unsigned long unused)
+{
+	wake_up_process(sync_supers_tsk);
+	arm_supers_timer();
+}
+
+static int bdi_forker_task(void *ptr)
+{
+	struct bdi_writeback *me = ptr;
+
+	bdi_task_init(me->bdi, me);
+
+	for (;;) {
+		struct backing_dev_info *bdi, *tmp;
+		struct bdi_writeback *wb;
+
+		/*
+		 * Temporary measure, we want to make sure we don't see
+		 * dirty data on the default backing_dev_info
+		 */
+		if (wb_has_dirty_io(me) || !list_empty(&me->bdi->work_list))
+			wb_do_writeback(me, 0);
+
+		spin_lock(&bdi_lock);
+
+		/*
+		 * Check if any existing bdi's have dirty data without
+		 * a thread registered. If so, set that up.
+		 */
+		list_for_each_entry_safe(bdi, tmp, &bdi_list, bdi_list) {
+			if (bdi->wb.task)
+				continue;
+			if (list_empty(&bdi->work_list) &&
+			    !bdi_has_dirty_io(bdi))
+				continue;
+
+			bdi_add_default_flusher_task(bdi);
+		}
+
+		set_current_state(TASK_INTERRUPTIBLE);
+
+		if (list_empty(&bdi_pending_list)) {
+			unsigned long wait;
+
+			spin_unlock(&bdi_lock);
+			wait = msecs_to_jiffies(dirty_writeback_interval * 10);
+			schedule_timeout(wait);
+			try_to_freeze();
+			continue;
+		}
+
+		__set_current_state(TASK_RUNNING);
+
+		/*
+		 * This is our real job - check for pending entries in
+		 * bdi_pending_list, and create the tasks that got added
+		 */
+		bdi = list_entry(bdi_pending_list.next, struct backing_dev_info,
+				 bdi_list);
+		list_del_init(&bdi->bdi_list);
+		spin_unlock(&bdi_lock);
+
+		wb = &bdi->wb;
+		wb->task = kthread_run(bdi_start_fn, wb, "flush-%s",
+					dev_name(bdi->dev));
+		/*
+		 * If task creation fails, then readd the bdi to
+		 * the pending list and force writeout of the bdi
+		 * from this forker thread. That will free some memory
+		 * and we can try again.
+		 */
+		if (IS_ERR(wb->task)) {
+			wb->task = NULL;
+
+			/*
+			 * Add this 'bdi' to the back, so we get
+			 * a chance to flush other bdi's to free
+			 * memory.
+			 */
+			spin_lock(&bdi_lock);
+			list_add_tail(&bdi->bdi_list, &bdi_pending_list);
+			spin_unlock(&bdi_lock);
+
+			bdi_flush_io(bdi);
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Add the default flusher task that gets created for any bdi
+ * that has dirty data pending writeout
+ */
+static void bdi_add_default_flusher_task(struct backing_dev_info *bdi)
+{
+	if (!bdi_cap_writeback_dirty(bdi))
+		return;
+
+	/*
+	 * Check whether to proceed with adding a task. We only abort if two
+	 * or more simultaneous calls to bdi_add_default_flusher_task()
+	 * occurred; further additions will block waiting for previous
+	 * additions to finish.
+	 */
+	if (!test_and_set_bit(BDI_pending, &bdi->state)) {
+		list_move_tail(&bdi->bdi_list, &bdi_pending_list);
+
+		/*
+		 * We are now on the pending list, wake up bdi_forker_task()
+		 * to finish the job and add us back to the active bdi_list
+		 */
+		wake_up_process(default_backing_dev_info.wb.task);
+	}
+}
+
 int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		const char *fmt, ...)
 {
@@ -213,13 +469,34 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 		goto exit;
 	}
 
-	mutex_lock(&bdi_lock);
+	spin_lock(&bdi_lock);
 	list_add_tail(&bdi->bdi_list, &bdi_list);
-	mutex_unlock(&bdi_lock);
+	spin_unlock(&bdi_lock);
 
 	bdi->dev = dev;
-	bdi_debug_register(bdi, dev_name(dev));
 
+	/*
+	 * Just start the forker thread for our default backing_dev_info,
+	 * and add other bdi's to the list. They will get a thread created
+	 * on-demand when they need it.
+	 */
+	if (bdi_cap_flush_forker(bdi)) {
+		struct bdi_writeback *wb = &bdi->wb;
+
+		wb->task = kthread_run(bdi_forker_task, wb, "bdi-%s",
+						dev_name(dev));
+		if (IS_ERR(wb->task)) {
+			wb->task = NULL;
+			ret = -ENOMEM;
+
+			spin_lock(&bdi_lock);
+			list_del(&bdi->bdi_list);
+			spin_unlock(&bdi_lock);
+			goto exit;
+		}
+	}
+
+	bdi_debug_register(bdi, dev_name(dev));
 exit:
 	return ret;
 }
@@ -231,17 +508,42 @@ int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
 }
 EXPORT_SYMBOL(bdi_register_dev);
 
-static void bdi_remove_from_list(struct backing_dev_info *bdi)
+/*
+ * Remove bdi from the global list and shutdown any threads we have running
+ */
+static void bdi_wb_shutdown(struct backing_dev_info *bdi)
 {
-	mutex_lock(&bdi_lock);
+	struct bdi_writeback *wb;
+
+	if (!bdi_cap_writeback_dirty(bdi))
+		return;
+
+	/*
+	 * If setup is pending, wait for that to complete first
+	 */
+	wait_on_bit(&bdi->state, BDI_pending, bdi_sched_wait,
+			TASK_UNINTERRUPTIBLE);
+
+	/*
+	 * Make sure nobody finds us on the bdi_list anymore
+	 */
+	spin_lock(&bdi_lock);
 	list_del(&bdi->bdi_list);
-	mutex_unlock(&bdi_lock);
+	spin_unlock(&bdi_lock);
+
+	/*
+	 * Finally, kill the kernel threads. We don't need to be RCU
+	 * safe anymore, since the bdi is gone from visibility.
+	 */
+	list_for_each_entry(wb, &bdi->wb_list, list)
+		kthread_stop(wb->task);
 }
 
 void bdi_unregister(struct backing_dev_info *bdi)
 {
 	if (bdi->dev) {
-		bdi_remove_from_list(bdi);
+		if (!bdi_cap_flush_forker(bdi))
+			bdi_wb_shutdown(bdi);
 		bdi_debug_unregister(bdi);
 		device_unregister(bdi->dev);
 		bdi->dev = NULL;
@@ -251,18 +553,25 @@ EXPORT_SYMBOL(bdi_unregister);
 
 int bdi_init(struct backing_dev_info *bdi)
 {
-	int i;
-	int err;
+	int i, err;
 
 	bdi->dev = NULL;
 
 	bdi->min_ratio = 0;
 	bdi->max_ratio = 100;
 	bdi->max_prop_frac = PROP_FRAC_BASE;
+	spin_lock_init(&bdi->wb_lock);
 	INIT_LIST_HEAD(&bdi->bdi_list);
-	INIT_LIST_HEAD(&bdi->b_io);
-	INIT_LIST_HEAD(&bdi->b_dirty);
-	INIT_LIST_HEAD(&bdi->b_more_io);
+	INIT_LIST_HEAD(&bdi->wb_list);
+	INIT_LIST_HEAD(&bdi->work_list);
+
+	bdi_wb_init(&bdi->wb, bdi);
+
+	/*
+	 * Just one thread support for now, hard code mask and count
+	 */
+	bdi->wb_mask = 1;
+	bdi->wb_cnt = 1;
 
 	for (i = 0; i < NR_BDI_STAT_ITEMS; i++) {
 		err = percpu_counter_init(&bdi->bdi_stat[i], 0);
@@ -277,8 +586,6 @@ int bdi_init(struct backing_dev_info *bdi)
 err:
 		while (i--)
 			percpu_counter_destroy(&bdi->bdi_stat[i]);
-
-		bdi_remove_from_list(bdi);
 	}
 
 	return err;
@@ -289,9 +596,7 @@ void bdi_destroy(struct backing_dev_info *bdi)
 {
 	int i;
 
-	WARN_ON(!list_empty(&bdi->b_dirty));
-	WARN_ON(!list_empty(&bdi->b_io));
-	WARN_ON(!list_empty(&bdi->b_more_io));
+	WARN_ON(bdi_has_dirty_io(bdi));
 
 	bdi_unregister(bdi);
 
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index f8341b6..25e7770 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -36,15 +36,6 @@
 #include <linux/pagevec.h>
 
 /*
- * The maximum number of pages to writeout in a single bdflush/kupdate
- * operation.  We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode.  Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES	1024
-
-/*
  * After a CPU has dirtied this many pages, balance_dirty_pages_ratelimited
  * will look to see if it needs to force writeback or throttling.
  */
@@ -117,8 +108,6 @@ EXPORT_SYMBOL(laptop_mode);
 /* End of sysctl-exported parameters */
 
 
-static void background_writeout(unsigned long _min_pages);
-
 /*
  * Scale the writeback cache size proportional to the relative writeout speeds.
  *
@@ -326,7 +315,7 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 {
 	int ret = 0;
 
-	mutex_lock(&bdi_lock);
+	spin_lock(&bdi_lock);
 	if (min_ratio > bdi->max_ratio) {
 		ret = -EINVAL;
 	} else {
@@ -338,7 +327,7 @@ int bdi_set_min_ratio(struct backing_dev_info *bdi, unsigned int min_ratio)
 			ret = -EINVAL;
 		}
 	}
-	mutex_unlock(&bdi_lock);
+	spin_unlock(&bdi_lock);
 
 	return ret;
 }
@@ -350,14 +339,14 @@ int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned max_ratio)
 	if (max_ratio > 100)
 		return -EINVAL;
 
-	mutex_lock(&bdi_lock);
+	spin_lock(&bdi_lock);
 	if (bdi->min_ratio > max_ratio) {
 		ret = -EINVAL;
 	} else {
 		bdi->max_ratio = max_ratio;
 		bdi->max_prop_frac = (PROP_FRAC_BASE * max_ratio) / 100;
 	}
-	mutex_unlock(&bdi_lock);
+	spin_unlock(&bdi_lock);
 
 	return ret;
 }
@@ -543,7 +532,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 		 * up.
 		 */
 		if (bdi_nr_reclaimable > bdi_thresh) {
-			writeback_inodes(&wbc);
+			writeback_inodes_wbc(&wbc);
 			pages_written += write_chunk - wbc.nr_to_write;
 			get_dirty_limits(&background_thresh, &dirty_thresh,
 				       &bdi_thresh, bdi);
@@ -572,7 +561,7 @@ static void balance_dirty_pages(struct address_space *mapping)
 		if (pages_written >= write_chunk)
 			break;		/* We've done our duty */
 
-		congestion_wait(BLK_RW_ASYNC, HZ/10);
+		schedule_timeout(1);
 	}
 
 	if (bdi_nr_reclaimable + bdi_nr_writeback < bdi_thresh &&
@@ -591,10 +580,18 @@ static void balance_dirty_pages(struct address_space *mapping)
 	 * background_thresh, to keep the amount of dirty memory low.
 	 */
 	if ((laptop_mode && pages_written) ||
-			(!laptop_mode && (global_page_state(NR_FILE_DIRTY)
-					  + global_page_state(NR_UNSTABLE_NFS)
-					  > background_thresh)))
-		pdflush_operation(background_writeout, 0);
+	    (!laptop_mode && ((nr_writeback = global_page_state(NR_FILE_DIRTY)
+					  + global_page_state(NR_UNSTABLE_NFS))
+					  > background_thresh))) {
+		struct writeback_control wbc = {
+			.bdi		= bdi,
+			.sync_mode	= WB_SYNC_NONE,
+			.nr_to_write	= nr_writeback,
+		};
+
+
+		bdi_start_writeback(&wbc);
+	}
 }
 
 void set_page_dirty_balance(struct page *page, int page_mkwrite)
@@ -678,153 +675,35 @@ void throttle_vm_writeout(gfp_t gfp_mask)
         }
 }
 
-/*
- * writeback at least _min_pages, and keep writing until the amount of dirty
- * memory is less than the background threshold, or until we're all clean.
- */
-static void background_writeout(unsigned long _min_pages)
-{
-	long min_pages = _min_pages;
-	struct writeback_control wbc = {
-		.bdi		= NULL,
-		.sync_mode	= WB_SYNC_NONE,
-		.older_than_this = NULL,
-		.nr_to_write	= 0,
-		.nonblocking	= 1,
-		.range_cyclic	= 1,
-	};
-
-	for ( ; ; ) {
-		unsigned long background_thresh;
-		unsigned long dirty_thresh;
-
-		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
-		if (global_page_state(NR_FILE_DIRTY) +
-			global_page_state(NR_UNSTABLE_NFS) < background_thresh
-				&& min_pages <= 0)
-			break;
-		wbc.more_io = 0;
-		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		wbc.pages_skipped = 0;
-		writeback_inodes(&wbc);
-		min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
-			/* Wrote less than expected */
-			if (wbc.encountered_congestion || wbc.more_io)
-				congestion_wait(BLK_RW_ASYNC, HZ/10);
-			else
-				break;
-		}
-	}
-}
-
-/*
- * Start writeback of `nr_pages' pages.  If `nr_pages' is zero, write back
- * the whole world.  Returns 0 if a pdflush thread was dispatched.  Returns
- * -1 if all pdflush threads were busy.
- */
-int wakeup_pdflush(long nr_pages)
-{
-	if (nr_pages == 0)
-		nr_pages = global_page_state(NR_FILE_DIRTY) +
-				global_page_state(NR_UNSTABLE_NFS);
-	return pdflush_operation(background_writeout, nr_pages);
-}
-
-static void wb_timer_fn(unsigned long unused);
 static void laptop_timer_fn(unsigned long unused);
 
-static DEFINE_TIMER(wb_timer, wb_timer_fn, 0, 0);
 static DEFINE_TIMER(laptop_mode_wb_timer, laptop_timer_fn, 0, 0);
 
 /*
- * Periodic writeback of "old" data.
- *
- * Define "old": the first time one of an inode's pages is dirtied, we mark the
- * dirtying-time in the inode's address_space.  So this periodic writeback code
- * just walks the superblock inode list, writing back any inodes which are
- * older than a specific point in time.
- *
- * Try to run once per dirty_writeback_interval.  But if a writeback event
- * takes longer than a dirty_writeback_interval interval, then leave a
- * one-second gap.
- *
- * older_than_this takes precedence over nr_to_write.  So we'll only write back
- * all dirty pages if they are all attached to "old" mappings.
- */
-static void wb_kupdate(unsigned long arg)
-{
-	unsigned long oldest_jif;
-	unsigned long start_jif;
-	unsigned long next_jif;
-	long nr_to_write;
-	struct writeback_control wbc = {
-		.bdi		= NULL,
-		.sync_mode	= WB_SYNC_NONE,
-		.older_than_this = &oldest_jif,
-		.nr_to_write	= 0,
-		.nonblocking	= 1,
-		.for_kupdate	= 1,
-		.range_cyclic	= 1,
-	};
-
-	sync_supers();
-
-	oldest_jif = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
-	start_jif = jiffies;
-	next_jif = start_jif + msecs_to_jiffies(dirty_writeback_interval * 10);
-	nr_to_write = global_page_state(NR_FILE_DIRTY) +
-			global_page_state(NR_UNSTABLE_NFS) +
-			(inodes_stat.nr_inodes - inodes_stat.nr_unused);
-	while (nr_to_write > 0) {
-		wbc.more_io = 0;
-		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
-		writeback_inodes(&wbc);
-		if (wbc.nr_to_write > 0) {
-			if (wbc.encountered_congestion || wbc.more_io)
-				congestion_wait(BLK_RW_ASYNC, HZ/10);
-			else
-				break;	/* All the old data is written */
-		}
-		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-	}
-	if (time_before(next_jif, jiffies + HZ))
-		next_jif = jiffies + HZ;
-	if (dirty_writeback_interval)
-		mod_timer(&wb_timer, next_jif);
-}
-
-/*
  * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs
  */
 int dirty_writeback_centisecs_handler(ctl_table *table, int write,
 	struct file *file, void __user *buffer, size_t *length, loff_t *ppos)
 {
 	proc_dointvec(table, write, file, buffer, length, ppos);
-	if (dirty_writeback_interval)
-		mod_timer(&wb_timer, jiffies +
-			msecs_to_jiffies(dirty_writeback_interval * 10));
-	else
-		del_timer(&wb_timer);
 	return 0;
 }
 
-static void wb_timer_fn(unsigned long unused)
-{
-	if (pdflush_operation(wb_kupdate, 0) < 0)
-		mod_timer(&wb_timer, jiffies + HZ); /* delay 1 second */
-}
-
-static void laptop_flush(unsigned long unused)
+static void do_laptop_sync(struct work_struct *work)
 {
-	sys_sync();
+	wakeup_flusher_threads(0);
+	kfree(work);
 }
 
 static void laptop_timer_fn(unsigned long unused)
 {
-	pdflush_operation(laptop_flush, 0);
+	struct work_struct *work;
+
+	work = kmalloc(sizeof(*work), GFP_ATOMIC);
+	if (work) {
+		INIT_WORK(work, do_laptop_sync);
+		schedule_work(work);
+	}
 }
 
 /*
@@ -907,8 +786,6 @@ void __init page_writeback_init(void)
 {
 	int shift;
 
-	mod_timer(&wb_timer,
-		  jiffies + msecs_to_jiffies(dirty_writeback_interval * 10));
 	writeback_set_ratelimit();
 	register_cpu_notifier(&ratelimit_nb);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 94e86dd..ba8228e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1720,7 +1720,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		 */
 		if (total_scanned > sc->swap_cluster_max +
 					sc->swap_cluster_max / 2) {
-			wakeup_pdflush(laptop_mode ? 0 : total_scanned);
+			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned);
 			sc->may_writepage = 1;
 		}
 
-- 
1.6.4.1.207.g68ea


^ permalink raw reply related	[flat|nested] 52+ messages in thread
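
The core idea in the dispatch code above is that each queued bdi_work
carries a "seen" bitmask with one bit per registered writeback thread;
get_next_work_item() only hands a thread the items whose bit it can still
clear, so every registered thread processes each item exactly once. Below
is a minimal userspace sketch of that idea. The names are made up for
illustration, and the single-threaded claim_work() stands in for the
atomic test_and_clear_bit() plus RCU list walk that the kernel code uses.

#include <stdio.h>
#include <stdbool.h>

struct work_item {
	unsigned long seen;	/* one bit per registered writeback thread */
	long nr_pages;
};

/* non-atomic stand-in for test_and_clear_bit(wb->nr, &work->seen) */
static bool claim_work(struct work_item *work, unsigned int nr)
{
	unsigned long bit = 1UL << nr;

	if (!(work->seen & bit))
		return false;		/* this thread already saw the item */
	work->seen &= ~bit;
	return true;
}

int main(void)
{
	/* a bdi with one registered thread (wb_mask = 1), 1024 pages queued */
	struct work_item work = { .seen = 0x1, .nr_pages = 1024 };
	unsigned int nr = 0;

	if (claim_work(&work, nr))
		printf("thread %u: writing back %ld pages\n", nr, work.nr_pages);
	if (!claim_work(&work, nr))
		printf("thread %u: work already seen, nothing to do\n", nr);

	return 0;
}

Running it prints one "writing back" line followed by the "already seen"
line, which mirrors how a wb thread skips work items it has already handled.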

* [PATCH 4/7] writeback: get rid of pdflush completely
  2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
                   ` (2 preceding siblings ...)
  2009-09-11  7:34 ` [PATCH 3/7] writeback: switch to per-bdi threads for flushing data Jens Axboe
@ 2009-09-11  7:34 ` Jens Axboe
  2009-09-11  7:34 ` [PATCH 5/7] writeback: add some debug inode list counters to bdi stats Jens Axboe
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11  7:34 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, hch, tytso, akpm, jack, Jens Axboe

It is now unused, so kill it off.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c         |    5 +
 include/linux/writeback.h |   12 --
 mm/Makefile               |    2 +-
 mm/pdflush.c              |  269 ---------------------------------------------
 4 files changed, 6 insertions(+), 282 deletions(-)
 delete mode 100644 mm/pdflush.c

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 7f6dae8..2e601ce 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -30,6 +30,11 @@
 #define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
 /*
+ * We don't actually have pdflush, but this one is exported through /proc...
+ */
+int nr_pdflush_threads;
+
+/*
  * Work items for the bdi_writeback threads
  */
 struct bdi_work {
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index cef7552..78b1e46 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -14,17 +14,6 @@ extern struct list_head inode_in_use;
 extern struct list_head inode_unused;
 
 /*
- * Yes, writeback.h requires sched.h
- * No, sched.h is not included from here.
- */
-static inline int task_is_pdflush(struct task_struct *task)
-{
-	return task->flags & PF_FLUSHER;
-}
-
-#define current_is_pdflush()	task_is_pdflush(current)
-
-/*
  * fs/fs-writeback.c
  */
 enum writeback_sync_modes {
@@ -155,7 +144,6 @@ balance_dirty_pages_ratelimited(struct address_space *mapping)
 typedef int (*writepage_t)(struct page *page, struct writeback_control *wbc,
 				void *data);
 
-int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0);
 int generic_writepages(struct address_space *mapping,
 		       struct writeback_control *wbc);
 int write_cache_pages(struct address_space *mapping,
diff --git a/mm/Makefile b/mm/Makefile
index 5e0bd64..147a7a7 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -8,7 +8,7 @@ mmu-$(CONFIG_MMU)	:= fremap.o highmem.o madvise.o memory.o mincore.o \
 			   vmalloc.o
 
 obj-y			:= bootmem.o filemap.o mempool.o oom_kill.o fadvise.o \
-			   maccess.o page_alloc.o page-writeback.o pdflush.o \
+			   maccess.o page_alloc.o page-writeback.o \
 			   readahead.o swap.o truncate.o vmscan.o shmem.o \
 			   prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \
 			   page_isolation.o mm_init.o $(mmu-y)
diff --git a/mm/pdflush.c b/mm/pdflush.c
deleted file mode 100644
index 235ac44..0000000
--- a/mm/pdflush.c
+++ /dev/null
@@ -1,269 +0,0 @@
-/*
- * mm/pdflush.c - worker threads for writing back filesystem data
- *
- * Copyright (C) 2002, Linus Torvalds.
- *
- * 09Apr2002	Andrew Morton
- *		Initial version
- * 29Feb2004	kaos@sgi.com
- *		Move worker thread creation to kthread to avoid chewing
- *		up stack space with nested calls to kernel_thread.
- */
-
-#include <linux/sched.h>
-#include <linux/list.h>
-#include <linux/signal.h>
-#include <linux/spinlock.h>
-#include <linux/gfp.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/fs.h>		/* Needed by writeback.h	  */
-#include <linux/writeback.h>	/* Prototypes pdflush_operation() */
-#include <linux/kthread.h>
-#include <linux/cpuset.h>
-#include <linux/freezer.h>
-
-
-/*
- * Minimum and maximum number of pdflush instances
- */
-#define MIN_PDFLUSH_THREADS	2
-#define MAX_PDFLUSH_THREADS	8
-
-static void start_one_pdflush_thread(void);
-
-
-/*
- * The pdflush threads are worker threads for writing back dirty data.
- * Ideally, we'd like one thread per active disk spindle.  But the disk
- * topology is very hard to divine at this level.   Instead, we take
- * care in various places to prevent more than one pdflush thread from
- * performing writeback against a single filesystem.  pdflush threads
- * have the PF_FLUSHER flag set in current->flags to aid in this.
- */
-
-/*
- * All the pdflush threads.  Protected by pdflush_lock
- */
-static LIST_HEAD(pdflush_list);
-static DEFINE_SPINLOCK(pdflush_lock);
-
-/*
- * The count of currently-running pdflush threads.  Protected
- * by pdflush_lock.
- *
- * Readable by sysctl, but not writable.  Published to userspace at
- * /proc/sys/vm/nr_pdflush_threads.
- */
-int nr_pdflush_threads = 0;
-
-/*
- * The time at which the pdflush thread pool last went empty
- */
-static unsigned long last_empty_jifs;
-
-/*
- * The pdflush thread.
- *
- * Thread pool management algorithm:
- * 
- * - The minimum and maximum number of pdflush instances are bound
- *   by MIN_PDFLUSH_THREADS and MAX_PDFLUSH_THREADS.
- * 
- * - If there have been no idle pdflush instances for 1 second, create
- *   a new one.
- * 
- * - If the least-recently-went-to-sleep pdflush thread has been asleep
- *   for more than one second, terminate a thread.
- */
-
-/*
- * A structure for passing work to a pdflush thread.  Also for passing
- * state information between pdflush threads.  Protected by pdflush_lock.
- */
-struct pdflush_work {
-	struct task_struct *who;	/* The thread */
-	void (*fn)(unsigned long);	/* A callback function */
-	unsigned long arg0;		/* An argument to the callback */
-	struct list_head list;		/* On pdflush_list, when idle */
-	unsigned long when_i_went_to_sleep;
-};
-
-static int __pdflush(struct pdflush_work *my_work)
-{
-	current->flags |= PF_FLUSHER | PF_SWAPWRITE;
-	set_freezable();
-	my_work->fn = NULL;
-	my_work->who = current;
-	INIT_LIST_HEAD(&my_work->list);
-
-	spin_lock_irq(&pdflush_lock);
-	for ( ; ; ) {
-		struct pdflush_work *pdf;
-
-		set_current_state(TASK_INTERRUPTIBLE);
-		list_move(&my_work->list, &pdflush_list);
-		my_work->when_i_went_to_sleep = jiffies;
-		spin_unlock_irq(&pdflush_lock);
-		schedule();
-		try_to_freeze();
-		spin_lock_irq(&pdflush_lock);
-		if (!list_empty(&my_work->list)) {
-			/*
-			 * Someone woke us up, but without removing our control
-			 * structure from the global list.  swsusp will do this
-			 * in try_to_freeze()->refrigerator().  Handle it.
-			 */
-			my_work->fn = NULL;
-			continue;
-		}
-		if (my_work->fn == NULL) {
-			printk("pdflush: bogus wakeup\n");
-			continue;
-		}
-		spin_unlock_irq(&pdflush_lock);
-
-		(*my_work->fn)(my_work->arg0);
-
-		spin_lock_irq(&pdflush_lock);
-
-		/*
-		 * Thread creation: For how long have there been zero
-		 * available threads?
-		 *
-		 * To throttle creation, we reset last_empty_jifs.
-		 */
-		if (time_after(jiffies, last_empty_jifs + 1 * HZ)) {
-			if (list_empty(&pdflush_list)) {
-				if (nr_pdflush_threads < MAX_PDFLUSH_THREADS) {
-					last_empty_jifs = jiffies;
-					nr_pdflush_threads++;
-					spin_unlock_irq(&pdflush_lock);
-					start_one_pdflush_thread();
-					spin_lock_irq(&pdflush_lock);
-				}
-			}
-		}
-
-		my_work->fn = NULL;
-
-		/*
-		 * Thread destruction: For how long has the sleepiest
-		 * thread slept?
-		 */
-		if (list_empty(&pdflush_list))
-			continue;
-		if (nr_pdflush_threads <= MIN_PDFLUSH_THREADS)
-			continue;
-		pdf = list_entry(pdflush_list.prev, struct pdflush_work, list);
-		if (time_after(jiffies, pdf->when_i_went_to_sleep + 1 * HZ)) {
-			/* Limit exit rate */
-			pdf->when_i_went_to_sleep = jiffies;
-			break;					/* exeunt */
-		}
-	}
-	nr_pdflush_threads--;
-	spin_unlock_irq(&pdflush_lock);
-	return 0;
-}
-
-/*
- * Of course, my_work wants to be just a local in __pdflush().  It is
- * separated out in this manner to hopefully prevent the compiler from
- * performing unfortunate optimisations against the auto variables.  Because
- * these are visible to other tasks and CPUs.  (No problem has actually
- * been observed.  This is just paranoia).
- */
-static int pdflush(void *dummy)
-{
-	struct pdflush_work my_work;
-	cpumask_var_t cpus_allowed;
-
-	/*
-	 * Since the caller doesn't even check kthread_run() worked, let's not
-	 * freak out too much if this fails.
-	 */
-	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
-		printk(KERN_WARNING "pdflush failed to allocate cpumask\n");
-		return 0;
-	}
-
-	/*
-	 * pdflush can spend a lot of time doing encryption via dm-crypt.  We
-	 * don't want to do that at keventd's priority.
-	 */
-	set_user_nice(current, 0);
-
-	/*
-	 * Some configs put our parent kthread in a limited cpuset,
-	 * which kthread() overrides, forcing cpus_allowed == cpu_all_mask.
-	 * Our needs are more modest - cut back to our cpusets cpus_allowed.
-	 * This is needed as pdflush's are dynamically created and destroyed.
-	 * The boottime pdflush's are easily placed w/o these 2 lines.
-	 */
-	cpuset_cpus_allowed(current, cpus_allowed);
-	set_cpus_allowed_ptr(current, cpus_allowed);
-	free_cpumask_var(cpus_allowed);
-
-	return __pdflush(&my_work);
-}
-
-/*
- * Attempt to wake up a pdflush thread, and get it to do some work for you.
- * Returns zero if it indeed managed to find a worker thread, and passed your
- * payload to it.
- */
-int pdflush_operation(void (*fn)(unsigned long), unsigned long arg0)
-{
-	unsigned long flags;
-	int ret = 0;
-
-	BUG_ON(fn == NULL);	/* Hard to diagnose if it's deferred */
-
-	spin_lock_irqsave(&pdflush_lock, flags);
-	if (list_empty(&pdflush_list)) {
-		ret = -1;
-	} else {
-		struct pdflush_work *pdf;
-
-		pdf = list_entry(pdflush_list.next, struct pdflush_work, list);
-		list_del_init(&pdf->list);
-		if (list_empty(&pdflush_list))
-			last_empty_jifs = jiffies;
-		pdf->fn = fn;
-		pdf->arg0 = arg0;
-		wake_up_process(pdf->who);
-	}
-	spin_unlock_irqrestore(&pdflush_lock, flags);
-
-	return ret;
-}
-
-static void start_one_pdflush_thread(void)
-{
-	struct task_struct *k;
-
-	k = kthread_run(pdflush, NULL, "pdflush");
-	if (unlikely(IS_ERR(k))) {
-		spin_lock_irq(&pdflush_lock);
-		nr_pdflush_threads--;
-		spin_unlock_irq(&pdflush_lock);
-	}
-}
-
-static int __init pdflush_init(void)
-{
-	int i;
-
-	/*
-	 * Pre-set nr_pdflush_threads...  If we fail to create,
-	 * the count will be decremented.
-	 */
-	nr_pdflush_threads = MIN_PDFLUSH_THREADS;
-
-	for (i = 0; i < MIN_PDFLUSH_THREADS; i++)
-		start_one_pdflush_thread();
-	return 0;
-}
-
-module_init(pdflush_init);
-- 
1.6.4.1.207.g68ea


^ permalink raw reply related	[flat|nested] 52+ messages in thread
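
For context on what gets deleted here: pdflush_operation() popped an idle
worker off a global list, stashed the callback and its argument in that
worker's control structure, and woke the worker, returning -1 when every
worker was busy, which is exactly the failure mode the per-bdi threads no
longer have. A rough userspace sketch of that handoff follows; the type
and function names are invented for illustration, and the direct call
stands in for wake_up_process() plus the pdflush_lock spinlock.

#include <stdio.h>
#include <stddef.h>

struct flush_worker {
	void (*fn)(unsigned long);	/* callback handed to the worker */
	unsigned long arg0;
	struct flush_worker *next;	/* idle-list linkage */
};

static struct flush_worker *idle_list;

/* hand work to an idle worker, or fail with -1 if none is available */
static int dispatch(void (*fn)(unsigned long), unsigned long arg0)
{
	struct flush_worker *w = idle_list;

	if (!w)
		return -1;
	idle_list = w->next;
	w->fn = fn;
	w->arg0 = arg0;
	w->fn(w->arg0);		/* the kernel woke the worker instead */
	return 0;
}

static void background_writeout(unsigned long min_pages)
{
	printf("writing back at least %lu pages\n", min_pages);
}

int main(void)
{
	struct flush_worker w = { .next = NULL };

	idle_list = &w;
	if (dispatch(background_writeout, 1024) < 0)
		printf("all workers busy, caller gets -1\n");
	if (dispatch(background_writeout, 1024) < 0)
		printf("all workers busy, caller gets -1\n");

	return 0;
}

The second dispatch() fails because the only worker never returns to the
idle list in this toy, which is the situation wakeup_pdflush() callers had
to tolerate.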

* [PATCH 5/7] writeback: add some debug inode list counters to bdi stats
  2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
                   ` (3 preceding siblings ...)
  2009-09-11  7:34 ` [PATCH 4/7] writeback: get rid of pdflush completely Jens Axboe
@ 2009-09-11  7:34 ` Jens Axboe
  2009-09-11  7:34 ` [PATCH 6/7] writeback: add name to backing_dev_info Jens Axboe
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11  7:34 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, hch, tytso, akpm, jack, Jens Axboe

Add some debug entries to make it possible to inspect the internal
state of the per-bdi writeback.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 mm/backing-dev.c |   38 ++++++++++++++++++++++++++++++++++----
 1 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 7f3fa79..22c45e9 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -52,9 +52,29 @@ static void bdi_debug_init(void)
 static int bdi_debug_stats_show(struct seq_file *m, void *v)
 {
 	struct backing_dev_info *bdi = m->private;
+	struct bdi_writeback *wb;
 	unsigned long background_thresh;
 	unsigned long dirty_thresh;
 	unsigned long bdi_thresh;
+	unsigned long nr_dirty, nr_io, nr_more_io, nr_wb;
+	struct inode *inode;
+
+	/*
+	 * inode lock is enough here, the bdi->wb_list is protected by
+	 * RCU on the reader side
+	 */
+	nr_wb = nr_dirty = nr_io = nr_more_io = 0;
+	spin_lock(&inode_lock);
+	list_for_each_entry(wb, &bdi->wb_list, list) {
+		nr_wb++;
+		list_for_each_entry(inode, &wb->b_dirty, i_list)
+			nr_dirty++;
+		list_for_each_entry(inode, &wb->b_io, i_list)
+			nr_io++;
+		list_for_each_entry(inode, &wb->b_more_io, i_list)
+			nr_more_io++;
+	}
+	spin_unlock(&inode_lock);
 
 	get_dirty_limits(&background_thresh, &dirty_thresh, &bdi_thresh, bdi);
 
@@ -64,12 +84,22 @@ static int bdi_debug_stats_show(struct seq_file *m, void *v)
 		   "BdiReclaimable:   %8lu kB\n"
 		   "BdiDirtyThresh:   %8lu kB\n"
 		   "DirtyThresh:      %8lu kB\n"
-		   "BackgroundThresh: %8lu kB\n",
+		   "BackgroundThresh: %8lu kB\n"
+		   "WriteBack threads:%8lu\n"
+		   "b_dirty:          %8lu\n"
+		   "b_io:             %8lu\n"
+		   "b_more_io:        %8lu\n"
+		   "bdi_list:         %8u\n"
+		   "state:            %8lx\n"
+		   "wb_mask:          %8lx\n"
+		   "wb_list:          %8u\n"
+		   "wb_cnt:           %8u\n",
 		   (unsigned long) K(bdi_stat(bdi, BDI_WRITEBACK)),
 		   (unsigned long) K(bdi_stat(bdi, BDI_RECLAIMABLE)),
-		   K(bdi_thresh),
-		   K(dirty_thresh),
-		   K(background_thresh));
+		   K(bdi_thresh), K(dirty_thresh),
+		   K(background_thresh), nr_wb, nr_dirty, nr_io, nr_more_io,
+		   !list_empty(&bdi->bdi_list), bdi->state, bdi->wb_mask,
+		   !list_empty(&bdi->wb_list), bdi->wb_cnt);
 #undef K
 
 	return 0;
-- 
1.6.4.1.207.g68ea


^ permalink raw reply related	[flat|nested] 52+ messages in thread
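
With those counters in place, a bdi's debugfs stats file comes out looking
roughly like the listing below. Every number here is invented purely for
illustration; only the field names and layout follow the seq_printf()
format string added in the patch. bdi_list and wb_list are printed as 0/1
list-emptiness flags, and state and wb_mask are hex bitmasks.

BdiWriteback:           64 kB
BdiReclaimable:       2048 kB
BdiDirtyThresh:      51200 kB
DirtyThresh:        102400 kB
BackgroundThresh:    51200 kB
WriteBack threads:       1
b_dirty:                12
b_io:                    0
b_more_io:               3
bdi_list:                1
state:                   2
wb_mask:                 1
wb_list:                 1
wb_cnt:                  1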

* [PATCH 6/7] writeback: add name to backing_dev_info
  2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
                   ` (4 preceding siblings ...)
  2009-09-11  7:34 ` [PATCH 5/7] writeback: add some debug inode list counters to bdi stats Jens Axboe
@ 2009-09-11  7:34 ` Jens Axboe
  2009-09-11  7:34 ` [PATCH 7/7] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
  2009-09-11 13:42 ` [PATCH 0/7] Per-bdi writeback flusher threads v20 Theodore Tso
  7 siblings, 0 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11  7:34 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, hch, tytso, akpm, jack, Jens Axboe

This enables us to track who does what and print info. Its main use
is catching dirty inodes on the default_backing_dev_info, so we can
fix that up.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 block/blk-core.c            |    1 +
 drivers/block/aoe/aoeblk.c  |    1 +
 drivers/char/mem.c          |    1 +
 fs/btrfs/disk-io.c          |    1 +
 fs/char_dev.c               |    1 +
 fs/configfs/inode.c         |    1 +
 fs/fuse/inode.c             |    1 +
 fs/hugetlbfs/inode.c        |    1 +
 fs/nfs/client.c             |    1 +
 fs/ocfs2/dlm/dlmfs.c        |    1 +
 fs/ramfs/inode.c            |    1 +
 fs/sysfs/inode.c            |    1 +
 fs/ubifs/super.c            |    1 +
 include/linux/backing-dev.h |    2 ++
 kernel/cgroup.c             |    1 +
 mm/backing-dev.c            |    1 +
 mm/swap_state.c             |    1 +
 17 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index e3299a7..e695634 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -501,6 +501,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 			(VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
 	q->backing_dev_info.state = 0;
 	q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY;
+	q->backing_dev_info.name = "block";
 
 	err = bdi_init(&q->backing_dev_info);
 	if (err) {
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index 1e15889..95d3449 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -268,6 +268,7 @@ aoeblk_gdalloc(void *vp)
 	if (!d->blkq)
 		goto err_mempool;
 	blk_queue_make_request(d->blkq, aoeblk_make_request);
+	d->blkq->backing_dev_info.name = "aoe";
 	if (bdi_init(&d->blkq->backing_dev_info))
 		goto err_blkq;
 	spin_lock_irqsave(&d->lock, flags);
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index afa8813..645237b 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -822,6 +822,7 @@ static const struct file_operations zero_fops = {
  * - permits private mappings, "copies" are taken of the source of zeros
  */
 static struct backing_dev_info zero_bdi = {
+	.name		= "char/mem",
 	.capabilities	= BDI_CAP_MAP_COPY,
 };
 
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index e83be2e..15831d5 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1352,6 +1352,7 @@ static int setup_bdi(struct btrfs_fs_info *info, struct backing_dev_info *bdi)
 {
 	int err;
 
+	bdi->name = "btrfs";
 	bdi->capabilities = BDI_CAP_MAP_COPY;
 	err = bdi_init(bdi);
 	if (err)
diff --git a/fs/char_dev.c b/fs/char_dev.c
index a173551..7c27a8e 100644
--- a/fs/char_dev.c
+++ b/fs/char_dev.c
@@ -31,6 +31,7 @@
  * - no readahead or I/O queue unplugging required
  */
 struct backing_dev_info directly_mappable_cdev_bdi = {
+	.name = "char",
 	.capabilities	= (
 #ifdef CONFIG_MMU
 		/* permit private copies of the data to be taken */
diff --git a/fs/configfs/inode.c b/fs/configfs/inode.c
index 4921e74..a2f7460 100644
--- a/fs/configfs/inode.c
+++ b/fs/configfs/inode.c
@@ -51,6 +51,7 @@ static const struct address_space_operations configfs_aops = {
 };
 
 static struct backing_dev_info configfs_backing_dev_info = {
+	.name		= "configfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index f91ccc4..4567db6 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -801,6 +801,7 @@ static int fuse_bdi_init(struct fuse_conn *fc, struct super_block *sb)
 {
 	int err;
 
+	fc->bdi.name = "fuse";
 	fc->bdi.ra_pages = (VM_MAX_READAHEAD * 1024) / PAGE_CACHE_SIZE;
 	fc->bdi.unplug_io_fn = default_unplug_io_fn;
 	/* fuse does it's own writeback accounting */
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index cb88dac..a93b885 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -44,6 +44,7 @@ static const struct inode_operations hugetlbfs_dir_inode_operations;
 static const struct inode_operations hugetlbfs_inode_operations;
 
 static struct backing_dev_info hugetlbfs_backing_dev_info = {
+	.name		= "hugetlbfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 8d25ccb..c6be84a 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -879,6 +879,7 @@ static void nfs_server_set_fsinfo(struct nfs_server *server, struct nfs_fsinfo *
 		server->rsize = NFS_MAX_FILE_IO_SIZE;
 	server->rpages = (server->rsize + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 
+	server->backing_dev_info.name = "nfs";
 	server->backing_dev_info.ra_pages = server->rpages * NFS_MAX_READAHEAD;
 
 	if (server->wsize > max_rpc_payload)
diff --git a/fs/ocfs2/dlm/dlmfs.c b/fs/ocfs2/dlm/dlmfs.c
index 1c9efb4..02bf178 100644
--- a/fs/ocfs2/dlm/dlmfs.c
+++ b/fs/ocfs2/dlm/dlmfs.c
@@ -325,6 +325,7 @@ clear_fields:
 }
 
 static struct backing_dev_info dlmfs_backing_dev_info = {
+	.name		= "ocfs2-dlmfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
index 0ff7566..a7f0110 100644
--- a/fs/ramfs/inode.c
+++ b/fs/ramfs/inode.c
@@ -46,6 +46,7 @@ static const struct super_operations ramfs_ops;
 static const struct inode_operations ramfs_dir_inode_operations;
 
 static struct backing_dev_info ramfs_backing_dev_info = {
+	.name		= "ramfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK |
 			  BDI_CAP_MAP_DIRECT | BDI_CAP_MAP_COPY |
diff --git a/fs/sysfs/inode.c b/fs/sysfs/inode.c
index 555f0ff..e57f98e 100644
--- a/fs/sysfs/inode.c
+++ b/fs/sysfs/inode.c
@@ -29,6 +29,7 @@ static const struct address_space_operations sysfs_aops = {
 };
 
 static struct backing_dev_info sysfs_backing_dev_info = {
+	.name		= "sysfs",
 	.ra_pages	= 0,	/* No readahead */
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index 8d6050a..51763aa 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -1965,6 +1965,7 @@ static int ubifs_fill_super(struct super_block *sb, void *data, int silent)
 	 *
 	 * Read-ahead will be disabled because @c->bdi.ra_pages is 0.
 	 */
+	c->bdi.name = "ubifs";
 	c->bdi.capabilities = BDI_CAP_MAP_COPY;
 	c->bdi.unplug_io_fn = default_unplug_io_fn;
 	err  = bdi_init(&c->bdi);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index d045f5f..2f218b7 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -66,6 +66,8 @@ struct backing_dev_info {
 	void (*unplug_io_fn)(struct backing_dev_info *, struct page *);
 	void *unplug_io_data;
 
+	char *name;
+
 	struct percpu_counter bdi_stat[NR_BDI_STAT_ITEMS];
 
 	struct prop_local_percpu completions;
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index b6eadfe..c7ece8f 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -600,6 +600,7 @@ static struct inode_operations cgroup_dir_inode_operations;
 static struct file_operations proc_cgroupstats_operations;
 
 static struct backing_dev_info cgroup_backing_dev_info = {
+	.name		= "cgroup",
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK,
 };
 
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 22c45e9..5cb32c5 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -17,6 +17,7 @@ void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page)
 EXPORT_SYMBOL(default_unplug_io_fn);
 
 struct backing_dev_info default_backing_dev_info = {
+	.name		= "default",
 	.ra_pages	= VM_MAX_READAHEAD * 1024 / PAGE_CACHE_SIZE,
 	.state		= 0,
 	.capabilities	= BDI_CAP_MAP_COPY,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 42cd38e..5ae6b8b 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -34,6 +34,7 @@ static const struct address_space_operations swap_aops = {
 };
 
 static struct backing_dev_info swap_backing_dev_info = {
+	.name		= "swap",
 	.capabilities	= BDI_CAP_NO_ACCT_AND_WRITEBACK | BDI_CAP_SWAP_BACKED,
 	.unplug_io_fn	= swap_unplug_io_fn,
 };
-- 
1.6.4.1.207.g68ea


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 7/7] writeback: check for registered bdi in flusher add and inode dirty
  2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
                   ` (5 preceding siblings ...)
  2009-09-11  7:34 ` [PATCH 6/7] writeback: add name to backing_dev_info Jens Axboe
@ 2009-09-11  7:34 ` Jens Axboe
  2009-09-11 13:42 ` [PATCH 0/7] Per-bdi writeback flusher threads v20 Theodore Tso
  7 siblings, 0 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11  7:34 UTC (permalink / raw)
  To: linux-kernel, linux-fsdevel
  Cc: chris.mason, hch, tytso, akpm, jack, Jens Axboe

Also a debugging aid. We want to catch dirty inodes being added to
backing devices that don't do writeback.
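
For context, the new warning fires when a writeback-capable bdi sees
dirty inodes (or a flusher add) before bdi_register() has run, since
bdi_register() is what sets BDI_registered. A minimal sketch of the
expected setup order, with a hypothetical "example" device and error
handling trimmed:

	static int example_bdi_setup(struct backing_dev_info *bdi,
				     struct device *parent)
	{
		int err;

		bdi->name = "example";
		err = bdi_init(bdi);
		if (err)
			return err;

		/* This is what sets BDI_registered in bdi->state. */
		return bdi_register(bdi, parent, "example");
	}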

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
 fs/fs-writeback.c           |    8 ++++++++
 include/linux/backing-dev.h |    1 +
 mm/backing-dev.c            |    7 +++++++
 3 files changed, 16 insertions(+), 0 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 2e601ce..da86ef5 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -1046,6 +1046,14 @@ void __mark_inode_dirty(struct inode *inode, int flags)
 		 */
 		if (!was_dirty) {
 			struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
+			struct backing_dev_info *bdi = wb->bdi;
+
+			if (bdi_cap_writeback_dirty(bdi) &&
+			    !test_bit(BDI_registered, &bdi->state)) {
+				WARN_ON(1);
+				printk(KERN_ERR "bdi-%s not registered\n",
+								bdi->name);
+			}
 
 			inode->dirtied_when = jiffies;
 			list_move(&inode->i_list, &wb->b_dirty);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 2f218b7..f169bcb 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -29,6 +29,7 @@ enum bdi_state {
 	BDI_wb_alloc,		/* Default embedded wb allocated */
 	BDI_async_congested,	/* The async (write) queue is getting full */
 	BDI_sync_congested,	/* The sync queue is getting full */
+	BDI_registered,		/* bdi_register() was done */
 	BDI_unused,		/* Available bits start here */
 };
 
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 5cb32c5..d3ca0da 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -465,6 +465,12 @@ void static bdi_add_default_flusher_task(struct backing_dev_info *bdi)
 	if (!bdi_cap_writeback_dirty(bdi))
 		return;
 
+	if (WARN_ON(!test_bit(BDI_registered, &bdi->state))) {
+		printk(KERN_ERR "bdi %p/%s is not registered!\n",
+							bdi, bdi->name);
+		return;
+	}
+
 	/*
 	 * Check with the helper whether to proceed adding a task. Will only
 	 * abort if we two or more simultanous calls to
@@ -528,6 +534,7 @@ int bdi_register(struct backing_dev_info *bdi, struct device *parent,
 	}
 
 	bdi_debug_register(bdi, dev_name(dev));
+	set_bit(BDI_registered, &bdi->state);
 exit:
 	return ret;
 }
-- 
1.6.4.1.207.g68ea


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
                   ` (6 preceding siblings ...)
  2009-09-11  7:34 ` [PATCH 7/7] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
@ 2009-09-11 13:42 ` Theodore Tso
  2009-09-11 13:45     ` Chris Mason
  2009-09-11 14:16     ` Christoph Hellwig
  7 siblings, 2 replies; 52+ messages in thread
From: Theodore Tso @ 2009-09-11 13:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-kernel, linux-fsdevel, chris.mason, hch, akpm, jack, Wu Fengguang

On Fri, Sep 11, 2009 at 09:34:03AM +0200, Jens Axboe wrote:
> Hi,
> 
> (sorry if you receive this twice, the original posting had a mangled
>  From address).
> 
> This is the 20th release of the writeback patchset. Changes since
> v19 include:
> 
> - Drop the max writeback pages patch from Ted. I think we should do
>   something to that effect, but there's really no reason to entangle
>   it with this patchset.

That's reasonable, but I'd really like to know whether some VM hacker is
going to try to deal with this during the 2.6.32 window.  Wu Fengguang's
patches, perhaps?

Or do I need to put some kind of hack into ext4, à la what XFS did, to
work around this problem until we can come up with a longer-term fix?

     	    	 	       	      	   - Ted

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-11 13:42 ` [PATCH 0/7] Per-bdi writeback flusher threads v20 Theodore Tso
@ 2009-09-11 13:45     ` Chris Mason
  2009-09-11 14:16     ` Christoph Hellwig
  1 sibling, 0 replies; 52+ messages in thread
From: Chris Mason @ 2009-09-11 13:45 UTC (permalink / raw)
  To: Theodore Tso, Jens Axboe, linux-kernel, linux-fsdevel, hch, akpm,
	jack, Wu Fengguang

On Fri, Sep 11, 2009 at 09:42:41AM -0400, Theodore Tso wrote:
> On Fri, Sep 11, 2009 at 09:34:03AM +0200, Jens Axboe wrote:
> > Hi,
> > 
> > (sorry if you receive this twice, the original posting had a mangled
> >  From address).
> > 
> > This is the 20th release of the writeback patchset. Changes since
> > v19 include:
> > 
> > - Drop the max writeback pages patch from Ted. I think we should do
> >   something to that effect, but there's really no reason to entangle
> >   it with this patchset.
> 
> That's reasonable, but I'd really like to know whether some VM hacker
> going to try to deal with this during the 2.6.32 window?  Such as
> maybe Wu Fengguang's patches, perhaps?  

Wu Fengguang's patches seem very reasonable to me.  My only concern is
with tossing it all in at once.  I'd rather see Jens' work go in and
then have incremental benchmarking done afterward.

-chris

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-11 13:45     ` Chris Mason
  (?)
@ 2009-09-11 14:04     ` Jens Axboe
  -1 siblings, 0 replies; 52+ messages in thread
From: Jens Axboe @ 2009-09-11 14:04 UTC (permalink / raw)
  To: Chris Mason
  Cc: Theodore Tso, linux-kernel, linux-fsdevel, hch, akpm, jack, Wu Fengguang

On Fri, Sep 11 2009, Chris Mason wrote:
> On Fri, Sep 11, 2009 at 09:42:41AM -0400, Theodore Tso wrote:
> > On Fri, Sep 11, 2009 at 09:34:03AM +0200, Jens Axboe wrote:
> > > Hi,
> > > 
> > > (sorry if you receive this twice, the original posting had a mangled
> > >  From address).
> > > 
> > > This is the 20th release of the writeback patchset. Changes since
> > > v19 include:
> > > 
> > > - Drop the max writeback pages patch from Ted. I think we should do
> > >   something to that effect, but there's really no reason to entangle
> > >   it with this patchset.
> > 
> > That's reasonable, but I'd really like to know whether some VM hacker
> > going to try to deal with this during the 2.6.32 window?  Such as
> > maybe Wu Fengguang's patches, perhaps?  
> 
> Wu Fengguang's patches seem very reasonable to me.  My only concern is
> that with tossing it in all at once.  I'd rather seen Jens' work go in
> and then incremental benchmarking done afterward.

OK, if that's the general consensus, then I'll add it back and we can
build from there.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-11 13:42 ` [PATCH 0/7] Per-bdi writeback flusher threads v20 Theodore Tso
@ 2009-09-11 14:16     ` Christoph Hellwig
  2009-09-11 14:16     ` Christoph Hellwig
  1 sibling, 0 replies; 52+ messages in thread
From: Christoph Hellwig @ 2009-09-11 14:16 UTC (permalink / raw)
  To: Theodore Tso, Jens Axboe, linux-kernel, linux-fsdevel,
	chris.mason, hch, akpm, jack, Wu Fengguang

On Fri, Sep 11, 2009 at 09:42:41AM -0400, Theodore Tso wrote:
> > v19 include:
> > 
> > - Drop the max writeback pages patch from Ted. I think we should do
> >   something to that effect, but there's really no reason to entangle
> >   it with this patchset.
> 
> That's reasonable, but I'd really like to know whether some VM hacker
> going to try to deal with this during the 2.6.32 window?  Such as
> maybe Wu Fengguang's patches, perhaps?  
> 
> Or do I need to put in some kind of hack into ext4 ala what XFS did to
> work around this problem until we can come up with a longer-term fix?

I definitely want to see something in the VM, but I also agree with
Jens' decision not to include it in the patchkit.  It's not really
related to the patchkit except for depending on it, and we still
have a lot of discussion going on.  If we can't get agreement with
the VM people by the end of the regular merge window, I'm all for
pushing your simple patch.

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-11 14:16     ` Christoph Hellwig
  (?)
@ 2009-09-11 14:29     ` Jens Axboe
  2009-09-11 14:39       ` Wu Fengguang
  -1 siblings, 1 reply; 52+ messages in thread
From: Jens Axboe @ 2009-09-11 14:29 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Theodore Tso, linux-kernel, linux-fsdevel, chris.mason, akpm,
	jack, Wu Fengguang

On Fri, Sep 11 2009, Christoph Hellwig wrote:
> On Fri, Sep 11, 2009 at 09:42:41AM -0400, Theodore Tso wrote:
> > > v19 include:
> > > 
> > > - Drop the max writeback pages patch from Ted. I think we should do
> > >   something to that effect, but there's really no reason to entangle
> > >   it with this patchset.
> > 
> > That's reasonable, but I'd really like to know whether some VM hacker
> > going to try to deal with this during the 2.6.32 window?  Such as
> > maybe Wu Fengguang's patches, perhaps?  
> > 
> > Or do I need to put in some kind of hack into ext4 ala what XFS did to
> > work around this problem until we can come up with a longer-term fix?
> 
> I defintively want to see something in the VM, but I also agree with
> Jens' decision not to include it in the patchkit.  It's not really
> related to the patchkit except for depending on it, and we still
> have a lot of discussion going on.  If we can't get an agreement with
> the VM people by the end of the regular merge window I'm all for pushing
> your simple patch.

I'd also appreciate it if we can keep Wu's (or Ted's) patch out of the
merge until after -rc1, or at least with a few -git releases in between,
so it'll be easier to check for regressions.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-11 14:29     ` Jens Axboe
@ 2009-09-11 14:39       ` Wu Fengguang
  2009-09-18 17:52         ` Theodore Tso
  0 siblings, 1 reply; 52+ messages in thread
From: Wu Fengguang @ 2009-09-11 14:39 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Theodore Tso, linux-kernel, linux-fsdevel,
	chris.mason, akpm, jack

On Fri, Sep 11, 2009 at 10:29:26PM +0800, Jens Axboe wrote:
> On Fri, Sep 11 2009, Christoph Hellwig wrote:
> > On Fri, Sep 11, 2009 at 09:42:41AM -0400, Theodore Tso wrote:
> > > > v19 include:
> > > > 
> > > > - Drop the max writeback pages patch from Ted. I think we should do
> > > >   something to that effect, but there's really no reason to entangle
> > > >   it with this patchset.
> > > 
> > > That's reasonable, but I'd really like to know whether some VM hacker
> > > going to try to deal with this during the 2.6.32 window?  Such as
> > > maybe Wu Fengguang's patches, perhaps?  
> > > 
> > > Or do I need to put in some kind of hack into ext4 ala what XFS did to
> > > work around this problem until we can come up with a longer-term fix?
> > 
> > I defintively want to see something in the VM, but I also agree with
> > Jens' decision not to include it in the patchkit.  It's not really
> > related to the patchkit except for depending on it, and we still
> > have a lot of discussion going on.  If we can't get an agreement with
> > the VM people by the end of the regular merge window I'm all for pushing
> > your simple patch.
> 
> I'd also appreciate if we can keep Wu (or Ted's) patch out of the merge
> until after -rc1 or at least with a few -git releases in between, so
> it'll be easier to check for regressions.

That would be good. Sorry for the late work. I'll allocate some time
in the middle of next week to help review and benchmark the recent
writeback work, and hope to get things done in this merge window.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-11 14:39       ` Wu Fengguang
@ 2009-09-18 17:52         ` Theodore Tso
  2009-09-19  3:58             ` Wu Fengguang
  0 siblings, 1 reply; 52+ messages in thread
From: Theodore Tso @ 2009-09-18 17:52 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jens Axboe, Christoph Hellwig, linux-kernel, linux-fsdevel,
	chris.mason, akpm, jack

On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> 
> That would be good. Sorry for the late work. I'll allocate some time
> in mid next week to help review and benchmark recent writeback works,
> and hope to get things done in this merge window.

Did you have a chance to get more work done on your writeback
patches?

Thanks,

					- Ted

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-18 17:52         ` Theodore Tso
@ 2009-09-19  3:58             ` Wu Fengguang
  0 siblings, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-19  3:58 UTC (permalink / raw)
  To: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm, jack

[-- Attachment #1: Type: text/plain, Size: 4581 bytes --]

On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > 
> > That would be good. Sorry for the late work. I'll allocate some time
> > in mid next week to help review and benchmark recent writeback works,
> > and hope to get things done in this merge window.
> 
> Did you have some chance to get more work done on the your writeback
> patches?

Sorry for the delay. I'm now testing the patches with the commands

 cp /dev/zero /mnt/test/zero0 &
 dd if=/dev/zero of=/mnt/test/zero1 &

and the attached debug patch.

One problem I found with ext3/4 is that redirty_tail() is called
repeatedly in the traces, which can slow down inode writeback
significantly.

Ideally, requeue_[partial_]io() would be called instead of
redirty_tail(); a simplified sketch of the difference follows.
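
Very roughly (simplified from the list handling in the attached
patches, not the literal kernel functions): redirty_tail() refreshes
dirtied_when and puts the inode back on b_dirty, so it has to age
through another full writeback cycle, while requeue_io() only parks it
on b_more_io for the next pass.

	/* Simplified sketches for illustration only. */
	static void redirty_tail_sketch(struct inode *inode)
	{
		struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;

		inode->dirtied_when = jiffies;	/* ages from scratch */
		list_move(&inode->i_list, &wb->b_dirty);
	}

	static void requeue_io_sketch(struct inode *inode)
	{
		struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;

		list_move(&inode->i_list, &wb->b_more_io); /* retry soon */
	}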

[  131.963885] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
[  131.966171] global dirty=4105 writeback=18793 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
[  132.780826] fs/fs-writeback.c +809 wb_writeback(): comm=flush-0:15 pid=1150 n=0
[  132.783097] global dirty=4105 writeback=16623 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
[  134.307094] redirty_tail() +542: inode=12
[  134.815776] redirty_tail() +542: inode=13
[  134.817709] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11493
[  134.821242] global dirty=4192 writeback=16203 nfs=0 flags=__ towrite=21275 skipped=0 file=13 written=4430
[  135.599954] redirty_tail() +542: inode=12
[  136.372523] redirty_tail() +542: inode=13
[  136.386748] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11570
[  136.432168] global dirty=4308 writeback=15752 nfs=0 flags=__ towrite=21198 skipped=0 file=13 written=4650
[  137.789115] fs/fs-writeback.c +809 wb_writeback(): comm=flush-0:15 pid=1150 n=0
[  138.587178] global dirty=9551 writeback=10755 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
[  138.962743] redirty_tail() +542: inode=12
[  139.395024] redirty_tail() +542: inode=13
[  139.403194] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11893
[  139.413026] global dirty=4101 writeback=16630 nfs=0 flags=__ towrite=20875 skipped=0 file=0 written=2
[  139.426074] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
[  139.435190] global dirty=4101 writeback=16378 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
[  140.266713] redirty_tail() +542: inode=12
[  140.449304] redirty_tail() +542: inode=13
[  140.496241] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11678
[  140.508339] global dirty=4203 writeback=19220 nfs=0 flags=__ towrite=21090 skipped=0 file=13 written=4254
[  141.649192] redirty_tail() +542: inode=12
[  141.971276] redirty_tail() +542: inode=13
[  141.988572] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11304
[  142.000107] global dirty=4112 writeback=18362 nfs=0 flags=__ towrite=21464 skipped=0 file=13 written=4541

The btrfs pattern is almost the same, but with an extra (metadata) inode 1.

[  464.443873] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=0
[  464.450458] global dirty=163 writeback=4375 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
[  464.655999] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
[  464.664478] global dirty=3873 writeback=1175 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
[  465.291059] redirty_tail() +542: inode=257
[  465.331584] redirty_tail() +542: inode=258
[  465.346433] redirty_tail() +560: inode=1
[  465.352016] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14480
[  465.355797] global dirty=337 writeback=3980 nfs=0 flags=__ towrite=18288 skipped=0 file=1 written=0
[  466.226489] redirty_tail() +542: inode=257
[  466.280894] redirty_tail() +542: inode=258
[  466.282270] redirty_tail() +560: inode=1
[  466.288079] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14300
[  466.291738] global dirty=666 writeback=3807 nfs=0 flags=__ towrite=18468 skipped=0 file=1 written=0
[  467.101730] redirty_tail() +542: inode=257
[  467.134303] redirty_tail() +542: inode=258
[  467.135675] redirty_tail() +560: inode=1
[  467.144120] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14032
[  467.147302] global dirty=331 writeback=3665 nfs=0 flags=__ towrite=18736 skipped=0 file=1 written=0
[  467.964652] redirty_tail() +542: inode=257
[  468.002423] redirty_tail() +542: inode=258
[  468.003795] redirty_tail() +560: inode=1

I'm looking into it.

Thanks,
Fengguang

[-- Attachment #2: writeback-debug.patch --]
[-- Type: text/x-diff, Size: 2334 bytes --]

 fs/fs-writeback.c   |   30 +++++++++++++++++++++++++++++-
 mm/page-writeback.c |    1 +
 2 files changed, 30 insertions(+), 1 deletion(-)

--- linux.orig/mm/page-writeback.c	2009-09-19 10:51:09.000000000 +0800
+++ linux/mm/page-writeback.c	2009-09-19 10:51:47.000000000 +0800
@@ -536,6 +536,7 @@ static void balance_dirty_pages(struct a
 			pages_written += write_chunk - wbc.nr_to_write;
 			get_dirty_limits(&background_thresh, &dirty_thresh,
 				       &bdi_thresh, bdi);
+			writeback_debug_report(pages_written, &wbc);
 		}
 
 		/*
--- linux.orig/fs/fs-writeback.c	2009-09-19 10:51:46.000000000 +0800
+++ linux/fs/fs-writeback.c	2009-09-19 10:51:47.000000000 +0800
@@ -68,6 +68,33 @@ enum {
 #define WS_USED (1 << WS_USED_B)
 #define WS_ONSTACK (1 << WS_ONSTACK_B)
 
+void print_writeback_control(struct writeback_control *wbc)
+{
+	printk(KERN_DEBUG
+			"global dirty=%lu writeback=%lu nfs=%lu "
+			"flags=%c%c towrite=%ld skipped=%ld "
+			"file=%lu written=%lu\n",
+			global_page_state(NR_FILE_DIRTY),
+			global_page_state(NR_WRITEBACK),
+			global_page_state(NR_UNSTABLE_NFS),
+			wbc->encountered_congestion ? 'C':'_',
+			wbc->more_io ? 'M':'_',
+			wbc->nr_to_write,
+			wbc->pages_skipped,
+			wbc->last_file,
+			wbc->last_file_written);
+}
+
+void __writeback_debug_report(long n, struct writeback_control *wbc,
+		const char *file, int line, const char *func)
+{
+	printk(KERN_DEBUG "%s +%d %s(): comm=%s pid=%d n=%ld\n",
+			file, line, func,
+			current->comm, current->pid,
+			n);
+	print_writeback_control(wbc);
+}
+
 static inline bool bdi_work_on_stack(struct bdi_work *work)
 {
 	return test_bit(WS_ONSTACK_B, &work->state);
@@ -302,7 +329,7 @@ static void requeue_io(struct inode *ino
  */
 static void requeue_partial_io(struct writeback_control *wbc, struct inode *inode)
 {
-	if (time_before(wbc->last_file_time + HZ, jiffies) ||
+	if (time_before(wbc->last_file_time + 1000 * HZ, jiffies) ||
 	    wbc->last_file_written == 0 ||
 	    wbc->last_file_written >= MAX_WRITEBACK_PAGES) {
 		requeue_io(inode);
@@ -749,6 +776,7 @@ static long wb_writeback(struct bdi_writ
 		args->nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
 
+		writeback_debug_report(wrote, &wbc);
 		/*
 		 * If we ran out of stuff to write, bail unless more_io got set
 		 */

[-- Attachment #3: requeue_io-debug.patch --]
[-- Type: text/x-diff, Size: 2799 bytes --]

Subject: track redirty_tail() calls

It helps a lot to know where redirty_tail() is called from.

Cc: Ken Chen <kenchen@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 fs/fs-writeback.c |   36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

--- linux.orig/fs/fs-writeback.c	2009-09-19 10:51:47.000000000 +0800
+++ linux/fs/fs-writeback.c	2009-09-19 10:52:03.000000000 +0800
@@ -290,6 +290,21 @@ void bdi_start_writeback(struct backing_
 	bdi_alloc_queue_work(bdi, &args);
 }
 
+#define redirty_tail(inode)						\
+	do {								\
+		__redirty_tail(inode, __LINE__);			\
+	} while (0)
+
+#define requeue_io(inode)						\
+	do {								\
+		__requeue_io(inode, __LINE__);				\
+	} while (0)
+
+#define requeue_partial_io(wbc, inode)					\
+	do {								\
+		__requeue_partial_io(wbc, inode, __LINE__);		\
+	} while (0)
+
 /*
  * Redirty an inode: set its when-it-was dirtied timestamp and move it to the
  * furthest end of its superblock's dirty-inode list.
@@ -299,7 +314,7 @@ void bdi_start_writeback(struct backing_
  * the case then the inode must have been redirtied while it was being written
  * out and we don't reset its dirtied_when.
  */
-static void redirty_tail(struct inode *inode)
+static void __redirty_tail(struct inode *inode, int line)
 {
 	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
 
@@ -311,23 +326,33 @@ static void redirty_tail(struct inode *i
 			inode->dirtied_when = jiffies;
 	}
 	list_move(&inode->i_list, &wb->b_dirty);
+
+	if (sysctl_dirty_debug) {
+		printk(KERN_DEBUG "redirty_tail() +%d: inode=%lu\n",
+				line, inode->i_ino);
+	}
 }
 
 /*
  * requeue inode for re-scanning after bdi->b_io list is exhausted.
  */
-static void requeue_io(struct inode *inode)
+static void __requeue_io(struct inode *inode, int line)
 {
 	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
 
 	list_move(&inode->i_list, &wb->b_more_io);
+
+	if (sysctl_dirty_debug) {
+		printk(KERN_DEBUG "requeue_io() +%d: inode=%lu\n",
+				line, inode->i_ino);
+	}
 }
 
 /*
  * continue io on this inode on next writeback if
  * it has not accumulated large enough writeback io chunk
  */
-static void requeue_partial_io(struct writeback_control *wbc, struct inode *inode)
+static void __requeue_partial_io(struct writeback_control *wbc, struct inode *inode, int line)
 {
 	if (time_before(wbc->last_file_time + 1000 * HZ, jiffies) ||
 	    wbc->last_file_written == 0 ||
@@ -337,6 +362,11 @@ static void requeue_partial_io(struct wr
 	}
 
 	list_move_tail(&inode->i_list, &inode_to_bdi(inode)->wb.b_io);
+
+	if (sysctl_dirty_debug) {
+		printk(KERN_DEBUG "requeue_partial_io() +%d: inode=%lu\n",
+				line, inode->i_ino);
+	}
 }
 
 static void inode_sync_complete(struct inode *inode)

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-19  3:58             ` Wu Fengguang
@ 2009-09-19  4:00               ` Wu Fengguang
  -1 siblings, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-19  4:00 UTC (permalink / raw)
  To: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm, jack

On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > 
> > > That would be good. Sorry for the late work. I'll allocate some time
> > > in mid next week to help review and benchmark recent writeback works,
> > > and hope to get things done in this merge window.
> > 
> > Did you have some chance to get more work done on the your writeback
> > patches?
> 
> Sorry for the delay, I'm now testing the patches with commands
> 
>  cp /dev/zero /mnt/test/zero0 &
>  dd if=/dev/zero of=/mnt/test/zero1 &
> 
> and the attached debug patch.
> 
> One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> in the traces, which could slow down the inode writeback significantly.

FYI, it's this redirty_tail() call in writeback_single_inode():

                        /*
                         * Someone redirtied the inode while were writing back
                         * the pages.
                         */
                        redirty_tail(inode);
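
A hypothetical sketch of that direction (untested; requeue_partial_io()
comes from the patches being tested here, and whether it or plain
requeue_io() is the right call for this site is part of what is being
looked into):

			/* was: redirty_tail(inode); */
			requeue_partial_io(wbc, inode);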

> Ideal is to call requeue_[partial_]io() instead of redirty_tail().
> 
> [  131.963885] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
> [  131.966171] global dirty=4105 writeback=18793 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> [  132.780826] fs/fs-writeback.c +809 wb_writeback(): comm=flush-0:15 pid=1150 n=0
> [  132.783097] global dirty=4105 writeback=16623 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> [  134.307094] redirty_tail() +542: inode=12
> [  134.815776] redirty_tail() +542: inode=13
> [  134.817709] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11493
> [  134.821242] global dirty=4192 writeback=16203 nfs=0 flags=__ towrite=21275 skipped=0 file=13 written=4430
> [  135.599954] redirty_tail() +542: inode=12
> [  136.372523] redirty_tail() +542: inode=13
> [  136.386748] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11570
> [  136.432168] global dirty=4308 writeback=15752 nfs=0 flags=__ towrite=21198 skipped=0 file=13 written=4650
> [  137.789115] fs/fs-writeback.c +809 wb_writeback(): comm=flush-0:15 pid=1150 n=0
> [  138.587178] global dirty=9551 writeback=10755 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> [  138.962743] redirty_tail() +542: inode=12
> [  139.395024] redirty_tail() +542: inode=13
> [  139.403194] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11893
> [  139.413026] global dirty=4101 writeback=16630 nfs=0 flags=__ towrite=20875 skipped=0 file=0 written=2
> [  139.426074] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
> [  139.435190] global dirty=4101 writeback=16378 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> [  140.266713] redirty_tail() +542: inode=12
> [  140.449304] redirty_tail() +542: inode=13
> [  140.496241] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11678
> [  140.508339] global dirty=4203 writeback=19220 nfs=0 flags=__ towrite=21090 skipped=0 file=13 written=4254
> [  141.649192] redirty_tail() +542: inode=12
> [  141.971276] redirty_tail() +542: inode=13
> [  141.988572] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11304
> [  142.000107] global dirty=4112 writeback=18362 nfs=0 flags=__ towrite=21464 skipped=0 file=13 written=4541
> 
> btrfs pattern is almost the same, but with an extra (metadata) inode 1.
> 
> [  464.443873] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=0
> [  464.450458] global dirty=163 writeback=4375 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> [  464.655999] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
> [  464.664478] global dirty=3873 writeback=1175 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> [  465.291059] redirty_tail() +542: inode=257
> [  465.331584] redirty_tail() +542: inode=258
> [  465.346433] redirty_tail() +560: inode=1
> [  465.352016] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14480
> [  465.355797] global dirty=337 writeback=3980 nfs=0 flags=__ towrite=18288 skipped=0 file=1 written=0
> [  466.226489] redirty_tail() +542: inode=257
> [  466.280894] redirty_tail() +542: inode=258
> [  466.282270] redirty_tail() +560: inode=1
> [  466.288079] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14300
> [  466.291738] global dirty=666 writeback=3807 nfs=0 flags=__ towrite=18468 skipped=0 file=1 written=0
> [  467.101730] redirty_tail() +542: inode=257
> [  467.134303] redirty_tail() +542: inode=258
> [  467.135675] redirty_tail() +560: inode=1
> [  467.144120] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14032
> [  467.147302] global dirty=331 writeback=3665 nfs=0 flags=__ towrite=18736 skipped=0 file=1 written=0
> [  467.964652] redirty_tail() +542: inode=257
> [  468.002423] redirty_tail() +542: inode=258
> [  468.003795] redirty_tail() +560: inode=1
> 
> I'm looking into it.
> 
> Thanks,
> Fengguang

>  fs/fs-writeback.c   |   30 +++++++++++++++++++++++++++++-
>  mm/page-writeback.c |    1 +
>  2 files changed, 30 insertions(+), 1 deletion(-)
> 
> --- linux.orig/mm/page-writeback.c	2009-09-19 10:51:09.000000000 +0800
> +++ linux/mm/page-writeback.c	2009-09-19 10:51:47.000000000 +0800
> @@ -536,6 +536,7 @@ static void balance_dirty_pages(struct a
>  			pages_written += write_chunk - wbc.nr_to_write;
>  			get_dirty_limits(&background_thresh, &dirty_thresh,
>  				       &bdi_thresh, bdi);
> +			writeback_debug_report(pages_written, &wbc);
>  		}
>  
>  		/*
> --- linux.orig/fs/fs-writeback.c	2009-09-19 10:51:46.000000000 +0800
> +++ linux/fs/fs-writeback.c	2009-09-19 10:51:47.000000000 +0800
> @@ -68,6 +68,33 @@ enum {
>  #define WS_USED (1 << WS_USED_B)
>  #define WS_ONSTACK (1 << WS_ONSTACK_B)
>  
> +void print_writeback_control(struct writeback_control *wbc)
> +{
> +	printk(KERN_DEBUG
> +			"global dirty=%lu writeback=%lu nfs=%lu "
> +			"flags=%c%c towrite=%ld skipped=%ld "
> +			"file=%lu written=%lu\n",
> +			global_page_state(NR_FILE_DIRTY),
> +			global_page_state(NR_WRITEBACK),
> +			global_page_state(NR_UNSTABLE_NFS),
> +			wbc->encountered_congestion ? 'C':'_',
> +			wbc->more_io ? 'M':'_',
> +			wbc->nr_to_write,
> +			wbc->pages_skipped,
> +			wbc->last_file,
> +			wbc->last_file_written);
> +}
> +
> +void __writeback_debug_report(long n, struct writeback_control *wbc,
> +		const char *file, int line, const char *func)
> +{
> +	printk(KERN_DEBUG "%s +%d %s(): comm=%s pid=%d n=%ld\n",
> +			file, line, func,
> +			current->comm, current->pid,
> +			n);
> +	print_writeback_control(wbc);
> +}
> +
>  static inline bool bdi_work_on_stack(struct bdi_work *work)
>  {
>  	return test_bit(WS_ONSTACK_B, &work->state);
> @@ -302,7 +329,7 @@ static void requeue_io(struct inode *ino
>   */
>  static void requeue_partial_io(struct writeback_control *wbc, struct inode *inode)
>  {
> -	if (time_before(wbc->last_file_time + HZ, jiffies) ||
> +	if (time_before(wbc->last_file_time + 1000 * HZ, jiffies) ||
>  	    wbc->last_file_written == 0 ||
>  	    wbc->last_file_written >= MAX_WRITEBACK_PAGES) {
>  		requeue_io(inode);
> @@ -749,6 +776,7 @@ static long wb_writeback(struct bdi_writ
>  		args->nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
>  		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
>  
> +		writeback_debug_report(wrote, &wbc);
>  		/*
>  		 * If we ran out of stuff to write, bail unless more_io got set
>  		 */

> Subject: track redirty_tail() calls
> 
> It helps a lot to know how redirty_tail() are called.
> 
> Cc: Ken Chen <kenchen@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
> ---
>  fs/fs-writeback.c |   36 +++++++++++++++++++++++++++++++++---
>  1 file changed, 33 insertions(+), 3 deletions(-)
> 
> --- linux.orig/fs/fs-writeback.c	2009-09-19 10:51:47.000000000 +0800
> +++ linux/fs/fs-writeback.c	2009-09-19 10:52:03.000000000 +0800
> @@ -290,6 +290,21 @@ void bdi_start_writeback(struct backing_
>  	bdi_alloc_queue_work(bdi, &args);
>  }
>  
> +#define redirty_tail(inode)						\
> +	do {								\
> +		__redirty_tail(inode, __LINE__);			\
> +	} while (0)
> +
> +#define requeue_io(inode)						\
> +	do {								\
> +		__requeue_io(inode, __LINE__);				\
> +	} while (0)
> +
> +#define requeue_partial_io(wbc, inode)					\
> +	do {								\
> +		__requeue_partial_io(wbc, inode, __LINE__);		\
> +	} while (0)
> +
>  /*
>   * Redirty an inode: set its when-it-was dirtied timestamp and move it to the
>   * furthest end of its superblock's dirty-inode list.
> @@ -299,7 +314,7 @@ void bdi_start_writeback(struct backing_
>   * the case then the inode must have been redirtied while it was being written
>   * out and we don't reset its dirtied_when.
>   */
> -static void redirty_tail(struct inode *inode)
> +static void __redirty_tail(struct inode *inode, int line)
>  {
>  	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
>  
> @@ -311,23 +326,33 @@ static void redirty_tail(struct inode *i
>  			inode->dirtied_when = jiffies;
>  	}
>  	list_move(&inode->i_list, &wb->b_dirty);
> +
> +	if (sysctl_dirty_debug) {
> +		printk(KERN_DEBUG "redirty_tail() +%d: inode=%lu\n",
> +				line, inode->i_ino);
> +	}
>  }
>  
>  /*
>   * requeue inode for re-scanning after bdi->b_io list is exhausted.
>   */
> -static void requeue_io(struct inode *inode)
> +static void __requeue_io(struct inode *inode, int line)
>  {
>  	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
>  
>  	list_move(&inode->i_list, &wb->b_more_io);
> +
> +	if (sysctl_dirty_debug) {
> +		printk(KERN_DEBUG "requeue_io() +%d: inode=%lu\n",
> +				line, inode->i_ino);
> +	}
>  }
>  
>  /*
>   * continue io on this inode on next writeback if
>   * it has not accumulated large enough writeback io chunk
>   */
> -static void requeue_partial_io(struct writeback_control *wbc, struct inode *inode)
> +static void __requeue_partial_io(struct writeback_control *wbc, struct inode *inode, int line)
>  {
>  	if (time_before(wbc->last_file_time + 1000 * HZ, jiffies) ||
>  	    wbc->last_file_written == 0 ||
> @@ -337,6 +362,11 @@ static void requeue_partial_io(struct wr
>  	}
>  
>  	list_move_tail(&inode->i_list, &inode_to_bdi(inode)->wb.b_io);
> +
> +	if (sysctl_dirty_debug) {
> +		printk(KERN_DEBUG "requeue_partial_io() +%d: inode=%lu\n",
> +				line, inode->i_ino);
> +	}
>  }
>  
>  static void inode_sync_complete(struct inode *inode)


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-19  4:00               ` Wu Fengguang
  (?)
@ 2009-09-19  4:26               ` Wu Fengguang
  2009-09-19 15:03                 ` Wu Fengguang
                                   ` (2 more replies)
  -1 siblings, 3 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-19  4:26 UTC (permalink / raw)
  To: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm, jack

On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > 
> > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > in mid next week to help review and benchmark recent writeback works,
> > > > and hope to get things done in this merge window.
> > > 
> > > Did you have some chance to get more work done on the your writeback
> > > patches?
> > 
> > Sorry for the delay, I'm now testing the patches with commands
> > 
> >  cp /dev/zero /mnt/test/zero0 &
> >  dd if=/dev/zero of=/mnt/test/zero1 &
> > 
> > and the attached debug patch.
> > 
> > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > in the traces, which could slow down the inode writeback significantly.
> 
> FYI, it's this redirty_tail() called in writeback_single_inode():
> 
>                         /*
>                          * Someone redirtied the inode while were writing back
>                          * the pages.
>                          */
>                         redirty_tail(inode);

Hmm, this looks like an old problem blown up by the 128MB
MAX_WRITEBACK_PAGES.

The inode was redirtied by the busy cp/dd processes. It now takes much
more time to sync 128MB, so a heavy dirtier can easily redirty the
inode within that time window.

A single invocation of redirty_tail() could hold up the writeback of
the current inode for up to 30 seconds.
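
To put rough numbers on it (the 40MB/s disk throughput below is only an
assumed figure for illustration, not something measured here):

	128MB / 40MB/s  ~=  3.2s   to sync one writeback chunk
	dirty_expire     =  30s    delay caused by one redirty_tail()

cp/dd only has to dirty a single new page inside that ~3s window for the
inode to go back through redirty_tail(), so one redirty can cost roughly
ten times the useful writeback time of the chunk itself.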

Thanks,
Fengguang

> > Ideal is to call requeue_[partial_]io() instead of redirty_tail().
> > 
> > [  131.963885] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
> > [  131.966171] global dirty=4105 writeback=18793 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> > [  132.780826] fs/fs-writeback.c +809 wb_writeback(): comm=flush-0:15 pid=1150 n=0
> > [  132.783097] global dirty=4105 writeback=16623 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> > [  134.307094] redirty_tail() +542: inode=12
> > [  134.815776] redirty_tail() +542: inode=13
> > [  134.817709] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11493
> > [  134.821242] global dirty=4192 writeback=16203 nfs=0 flags=__ towrite=21275 skipped=0 file=13 written=4430
> > [  135.599954] redirty_tail() +542: inode=12
> > [  136.372523] redirty_tail() +542: inode=13
> > [  136.386748] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11570
> > [  136.432168] global dirty=4308 writeback=15752 nfs=0 flags=__ towrite=21198 skipped=0 file=13 written=4650
> > [  137.789115] fs/fs-writeback.c +809 wb_writeback(): comm=flush-0:15 pid=1150 n=0
> > [  138.587178] global dirty=9551 writeback=10755 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> > [  138.962743] redirty_tail() +542: inode=12
> > [  139.395024] redirty_tail() +542: inode=13
> > [  139.403194] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11893
> > [  139.413026] global dirty=4101 writeback=16630 nfs=0 flags=__ towrite=20875 skipped=0 file=0 written=2
> > [  139.426074] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
> > [  139.435190] global dirty=4101 writeback=16378 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> > [  140.266713] redirty_tail() +542: inode=12
> > [  140.449304] redirty_tail() +542: inode=13
> > [  140.496241] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11678
> > [  140.508339] global dirty=4203 writeback=19220 nfs=0 flags=__ towrite=21090 skipped=0 file=13 written=4254
> > [  141.649192] redirty_tail() +542: inode=12
> > [  141.971276] redirty_tail() +542: inode=13
> > [  141.988572] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=11304
> > [  142.000107] global dirty=4112 writeback=18362 nfs=0 flags=__ towrite=21464 skipped=0 file=13 written=4541
> > 
> > btrfs pattern is almost the same, but with an extra (metadata) inode 1.
> > 
> > [  464.443873] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=0
> > [  464.450458] global dirty=163 writeback=4375 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> > [  464.655999] fs/fs-writeback.c +809 wb_writeback(): comm=flush-8:0 pid=2816 n=0
> > [  464.664478] global dirty=3873 writeback=1175 nfs=0 flags=__ towrite=32768 skipped=0 file=0 written=0
> > [  465.291059] redirty_tail() +542: inode=257
> > [  465.331584] redirty_tail() +542: inode=258
> > [  465.346433] redirty_tail() +560: inode=1
> > [  465.352016] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14480
> > [  465.355797] global dirty=337 writeback=3980 nfs=0 flags=__ towrite=18288 skipped=0 file=1 written=0
> > [  466.226489] redirty_tail() +542: inode=257
> > [  466.280894] redirty_tail() +542: inode=258
> > [  466.282270] redirty_tail() +560: inode=1
> > [  466.288079] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14300
> > [  466.291738] global dirty=666 writeback=3807 nfs=0 flags=__ towrite=18468 skipped=0 file=1 written=0
> > [  467.101730] redirty_tail() +542: inode=257
> > [  467.134303] redirty_tail() +542: inode=258
> > [  467.135675] redirty_tail() +560: inode=1
> > [  467.144120] fs/fs-writeback.c +809 wb_writeback(): comm=flush-btrfs-1 pid=2865 n=14032
> > [  467.147302] global dirty=331 writeback=3665 nfs=0 flags=__ towrite=18736 skipped=0 file=1 written=0
> > [  467.964652] redirty_tail() +542: inode=257
> > [  468.002423] redirty_tail() +542: inode=258
> > [  468.003795] redirty_tail() +560: inode=1
> > 
> > I'm looking into it.
> > 
> > Thanks,
> > Fengguang
> 
> >  fs/fs-writeback.c   |   30 +++++++++++++++++++++++++++++-
> >  mm/page-writeback.c |    1 +
> >  2 files changed, 30 insertions(+), 1 deletion(-)
> > 
> > --- linux.orig/mm/page-writeback.c	2009-09-19 10:51:09.000000000 +0800
> > +++ linux/mm/page-writeback.c	2009-09-19 10:51:47.000000000 +0800
> > @@ -536,6 +536,7 @@ static void balance_dirty_pages(struct a
> >  			pages_written += write_chunk - wbc.nr_to_write;
> >  			get_dirty_limits(&background_thresh, &dirty_thresh,
> >  				       &bdi_thresh, bdi);
> > +			writeback_debug_report(pages_written, &wbc);
> >  		}
> >  
> >  		/*
> > --- linux.orig/fs/fs-writeback.c	2009-09-19 10:51:46.000000000 +0800
> > +++ linux/fs/fs-writeback.c	2009-09-19 10:51:47.000000000 +0800
> > @@ -68,6 +68,33 @@ enum {
> >  #define WS_USED (1 << WS_USED_B)
> >  #define WS_ONSTACK (1 << WS_ONSTACK_B)
> >  
> > +void print_writeback_control(struct writeback_control *wbc)
> > +{
> > +	printk(KERN_DEBUG
> > +			"global dirty=%lu writeback=%lu nfs=%lu "
> > +			"flags=%c%c towrite=%ld skipped=%ld "
> > +			"file=%lu written=%lu\n",
> > +			global_page_state(NR_FILE_DIRTY),
> > +			global_page_state(NR_WRITEBACK),
> > +			global_page_state(NR_UNSTABLE_NFS),
> > +			wbc->encountered_congestion ? 'C':'_',
> > +			wbc->more_io ? 'M':'_',
> > +			wbc->nr_to_write,
> > +			wbc->pages_skipped,
> > +			wbc->last_file,
> > +			wbc->last_file_written);
> > +}
> > +
> > +void __writeback_debug_report(long n, struct writeback_control *wbc,
> > +		const char *file, int line, const char *func)
> > +{
> > +	printk(KERN_DEBUG "%s +%d %s(): comm=%s pid=%d n=%ld\n",
> > +			file, line, func,
> > +			current->comm, current->pid,
> > +			n);
> > +	print_writeback_control(wbc);
> > +}
> > +
> >  static inline bool bdi_work_on_stack(struct bdi_work *work)
> >  {
> >  	return test_bit(WS_ONSTACK_B, &work->state);
> > @@ -302,7 +329,7 @@ static void requeue_io(struct inode *ino
> >   */
> >  static void requeue_partial_io(struct writeback_control *wbc, struct inode *inode)
> >  {
> > -	if (time_before(wbc->last_file_time + HZ, jiffies) ||
> > +	if (time_before(wbc->last_file_time + 1000 * HZ, jiffies) ||
> >  	    wbc->last_file_written == 0 ||
> >  	    wbc->last_file_written >= MAX_WRITEBACK_PAGES) {
> >  		requeue_io(inode);
> > @@ -749,6 +776,7 @@ static long wb_writeback(struct bdi_writ
> >  		args->nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
> >  		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
> >  
> > +		writeback_debug_report(wrote, &wbc);
> >  		/*
> >  		 * If we ran out of stuff to write, bail unless more_io got set
> >  		 */
> 
> > Subject: track redirty_tail() calls
> > 
> > It helps a lot to know how redirty_tail() are called.
> > 
> > Cc: Ken Chen <kenchen@google.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
> > ---
> >  fs/fs-writeback.c |   36 +++++++++++++++++++++++++++++++++---
> >  1 file changed, 33 insertions(+), 3 deletions(-)
> > 
> > --- linux.orig/fs/fs-writeback.c	2009-09-19 10:51:47.000000000 +0800
> > +++ linux/fs/fs-writeback.c	2009-09-19 10:52:03.000000000 +0800
> > @@ -290,6 +290,21 @@ void bdi_start_writeback(struct backing_
> >  	bdi_alloc_queue_work(bdi, &args);
> >  }
> >  
> > +#define redirty_tail(inode)						\
> > +	do {								\
> > +		__redirty_tail(inode, __LINE__);			\
> > +	} while (0)
> > +
> > +#define requeue_io(inode)						\
> > +	do {								\
> > +		__requeue_io(inode, __LINE__);				\
> > +	} while (0)
> > +
> > +#define requeue_partial_io(wbc, inode)					\
> > +	do {								\
> > +		__requeue_partial_io(wbc, inode, __LINE__);		\
> > +	} while (0)
> > +
> >  /*
> >   * Redirty an inode: set its when-it-was dirtied timestamp and move it to the
> >   * furthest end of its superblock's dirty-inode list.
> > @@ -299,7 +314,7 @@ void bdi_start_writeback(struct backing_
> >   * the case then the inode must have been redirtied while it was being written
> >   * out and we don't reset its dirtied_when.
> >   */
> > -static void redirty_tail(struct inode *inode)
> > +static void __redirty_tail(struct inode *inode, int line)
> >  {
> >  	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
> >  
> > @@ -311,23 +326,33 @@ static void redirty_tail(struct inode *i
> >  			inode->dirtied_when = jiffies;
> >  	}
> >  	list_move(&inode->i_list, &wb->b_dirty);
> > +
> > +	if (sysctl_dirty_debug) {
> > +		printk(KERN_DEBUG "redirty_tail() +%d: inode=%lu\n",
> > +				line, inode->i_ino);
> > +	}
> >  }
> >  
> >  /*
> >   * requeue inode for re-scanning after bdi->b_io list is exhausted.
> >   */
> > -static void requeue_io(struct inode *inode)
> > +static void __requeue_io(struct inode *inode, int line)
> >  {
> >  	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
> >  
> >  	list_move(&inode->i_list, &wb->b_more_io);
> > +
> > +	if (sysctl_dirty_debug) {
> > +		printk(KERN_DEBUG "requeue_io() +%d: inode=%lu\n",
> > +				line, inode->i_ino);
> > +	}
> >  }
> >  
> >  /*
> >   * continue io on this inode on next writeback if
> >   * it has not accumulated large enough writeback io chunk
> >   */
> > -static void requeue_partial_io(struct writeback_control *wbc, struct inode *inode)
> > +static void __requeue_partial_io(struct writeback_control *wbc, struct inode *inode, int line)
> >  {
> >  	if (time_before(wbc->last_file_time + 1000 * HZ, jiffies) ||
> >  	    wbc->last_file_written == 0 ||
> > @@ -337,6 +362,11 @@ static void requeue_partial_io(struct wr
> >  	}
> >  
> >  	list_move_tail(&inode->i_list, &inode_to_bdi(inode)->wb.b_io);
> > +
> > +	if (sysctl_dirty_debug) {
> > +		printk(KERN_DEBUG "requeue_partial_io() +%d: inode=%lu\n",
> > +				line, inode->i_ino);
> > +	}
> >  }
> >  
> >  static void inode_sync_complete(struct inode *inode)
> 

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-19  4:26               ` Wu Fengguang
  2009-09-19 15:03                 ` Wu Fengguang
@ 2009-09-19 15:03                 ` Wu Fengguang
  2009-09-20 19:00                   ` Jan Kara
  2009-09-21 13:53                 ` Chris Mason
  2 siblings, 1 reply; 52+ messages in thread
From: Wu Fengguang @ 2009-09-19 15:03 UTC (permalink / raw)
  To: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm, jack

On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > 
> > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > and hope to get things done in this merge window.
> > > > 
> > > > Did you have some chance to get more work done on the your writeback
> > > > patches?
> > > 
> > > Sorry for the delay, I'm now testing the patches with commands
> > > 
> > >  cp /dev/zero /mnt/test/zero0 &
> > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > 
> > > and the attached debug patch.
> > > 
> > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > in the traces, which could slow down the inode writeback significantly.
> > 
> > FYI, it's this redirty_tail() called in writeback_single_inode():
> > 
> >                         /*
> >                          * Someone redirtied the inode while were writing back
> >                          * the pages.
> >                          */
> >                         redirty_tail(inode);
> 
> Hmm, this looks like an old fashioned problem get blew up by the
> 128MB MAX_WRITEBACK_PAGES.
> 
> The inode was redirtied by the busy cp/dd processes. Now it takes much
> more time to sync 128MB, so that a heavy dirtier can easily redirty
> the inode in that time window.
> 
> One single invocation of redirty_tail() could hold up the writeback of
> current inode for up to 30 seconds.

It seems that this patch helps. However, I'm afraid it's too late to
risk merging this kind of patch now.

Thanks,
Fengguang
---

writeback: don't delay redirtied inode by a fast dirtier

The large 128MB MAX_WRITEBACK_PAGES greatly increases the chance
for an inode to be dirtied by a fast dirtier during the writeback.

We used to call redirty_tail() in this case, which could delay inode
writeback for up to 30s. That has now become unacceptable even for a
simple dd.

But still delay these cases:
- only inode metadata is dirtied (by the fs)
- the writeback_index wrapped around
  (to protect against fast dirtiers that do repeated overwrites)

CC: Jan Kara <jack@suse.cz>
CC: Theodore Ts'o <tytso@mit.edu>
CC: Dave Chinner <david@fromorbit.com>
CC: Jens Axboe <jens.axboe@oracle.com>
CC: Chris Mason <chris.mason@oracle.com>
CC: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c |   18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

--- linux.orig/fs/fs-writeback.c	2009-09-19 18:09:50.000000000 +0800
+++ linux/fs/fs-writeback.c	2009-09-19 19:00:18.000000000 +0800
@@ -466,6 +466,7 @@ writeback_single_inode(struct inode *ino
 	long last_file_written;
 	long nr_to_write;
 	unsigned dirty;
+	pgoff_t writeback_index;
 	int ret;
 
 	if (!atomic_read(&inode->i_count))
@@ -508,6 +509,7 @@ writeback_single_inode(struct inode *ino
 		last_file_written = wbc->last_file_written;
 	wbc->nr_to_write -= last_file_written;
 	nr_to_write = wbc->nr_to_write;
+	writeback_index = mapping->writeback_index;
 
 	ret = do_writepages(mapping, wbc);
 
@@ -534,10 +536,15 @@ writeback_single_inode(struct inode *ino
 	spin_lock(&inode_lock);
 	inode->i_state &= ~I_SYNC;
 	if (!(inode->i_state & (I_FREEING | I_CLEAR))) {
-		if (inode->i_state & I_DIRTY) {
+		if (inode->i_state & I_DIRTY_PAGES) {
 			/*
-			 * Someone redirtied the inode while were writing back
-			 * the pages.
+			 * More pages get dirtied by a fast dirtier.
+			 */
+			goto select_queue;
+		} else if (inode->i_state & I_DIRTY) {
+			/*
+			 * At least XFS will redirty the inode during the
+			 * writeback (delalloc) and on io completion (isize).
 			 */
 			redirty_tail(inode);
 		} else if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) {
@@ -546,8 +553,10 @@ writeback_single_inode(struct inode *ino
 			 * sometimes bales out without doing anything.
 			 */
 			inode->i_state |= I_DIRTY_PAGES;
+select_queue:
 			if (wbc->encountered_congestion ||
-			    wbc->nr_to_write <= 0) {
+			    wbc->nr_to_write <= 0 ||
+			    writeback_index < mapping->writeback_index) {
 				/*
 				 * if slice used up, queue for next round;
 				 * otherwise continue this inode after return
@@ -556,6 +565,7 @@ writeback_single_inode(struct inode *ino
 			} else {
 				/*
 				 * somehow blocked: retry later
+				 * also protect against busy rewrites.
 				 */
 				redirty_tail(inode);
 			}

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-19 15:03                 ` Wu Fengguang
@ 2009-09-20 19:00                   ` Jan Kara
  2009-09-21  3:04                     ` Wu Fengguang
  0 siblings, 1 reply; 52+ messages in thread
From: Jan Kara @ 2009-09-20 19:00 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm, jack

On Sat 19-09-09 23:03:51, Wu Fengguang wrote:
> On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> > On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > > 
> > > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > > and hope to get things done in this merge window.
> > > > > 
> > > > > Did you have some chance to get more work done on the your writeback
> > > > > patches?
> > > > 
> > > > Sorry for the delay, I'm now testing the patches with commands
> > > > 
> > > >  cp /dev/zero /mnt/test/zero0 &
> > > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > > 
> > > > and the attached debug patch.
> > > > 
> > > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > > in the traces, which could slow down the inode writeback significantly.
> > > 
> > > FYI, it's this redirty_tail() called in writeback_single_inode():
> > > 
> > >                         /*
> > >                          * Someone redirtied the inode while were writing back
> > >                          * the pages.
> > >                          */
> > >                         redirty_tail(inode);
> > 
> > Hmm, this looks like an old fashioned problem get blew up by the
> > 128MB MAX_WRITEBACK_PAGES.
> > 
> > The inode was redirtied by the busy cp/dd processes. Now it takes much
> > more time to sync 128MB, so that a heavy dirtier can easily redirty
> > the inode in that time window.
> > 
> > One single invocation of redirty_tail() could hold up the writeback of
> > current inode for up to 30 seconds.
> 
> It seems that this patch helps. However I'm afraid it's too late to
> risk merging such kind of patches now..
  Fengguang, could we maybe write down how the logic should look
and then look at the code and modify it as needed to fit the logic?
Because I couldn't find a compact description of the logic anywhere
in the code.
  Here is how I'd imagine the writeout logic should work:
We would have just two lists - b_dirty and b_more_io. Both would be
ordered by dirtied_when.
  A thread doing WB_SYNC_ALL writeback will just walk the list and clean up
everything (we should be resistant against livelocks because we stop at an
inode which has been dirtied after the sync has started).
  A thread doing WB_SYNC_NONE writeback will start walking the list. If the
inode has I_SYNC set, it puts it on b_more_io. Otherwise it takes I_SYNC
and writes as much as it finds necessary from the first inode. If it
stopped before it wrote everything, it puts the inode at the end of
b_more_io.  If it wrote everything (writeback_index cycled or scanned the
whole range) but inode is dirty, it puts the inode at the end of b_dirty
and resets dirtied_when to the current time. Then it continues with the
next inode.
  kupdate style writeback stops scanning the dirty list when dirtied_when is
new enough. Then if b_more_io is nonempty, it splices it into the beginning
of the dirty list and restarts.
  Other types of writeback splice b_more_io to b_dirty when b_dirty gets
empty. pdflush style writeback writes until we drop below the background
dirty limit. Other kinds of writeback (throttled threads, writeback
submitted by the filesystem itself) write while nr_to_write > 0.
  If we didn't write anything during the b_dirty scan, we wait until I_SYNC
of the first inode on b_more_io gets cleared before starting the next scan.
  Does this look reasonably complete and cover all the cases?
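
  To make the intended flow easier to follow, here is a toy userspace C
model of the per-inode decision described above. It is only a sketch of
the proposal, not kernel code: the inode struct, the function name, and
the CHUNK / EXPIRE_AGE constants are simplified stand-ins for the real
struct inode, MAX_WRITEBACK_PAGES and dirty_expire_centisecs.

/* wb_model.c: toy model of the per-inode decision in the scheme above */
#include <stdbool.h>
#include <stdio.h>

#define CHUNK      4	/* pages written per inode per pass (stand-in) */
#define EXPIRE_AGE 30	/* seconds, stand-in for dirty_expire_centisecs */

struct inode {
	int  ino;
	int  dirty_pages;
	long dirtied_when;	/* seconds */
	bool i_sync;		/* someone else is already writing it back */
};

/* What one WB_SYNC_NONE pass does with a single inode taken off b_dirty. */
static const char *writeback_one(struct inode *inode, long now, bool for_kupdate)
{
	if (for_kupdate && now - inode->dirtied_when < EXPIRE_AGE)
		return "stop scanning (inode too young for kupdate)";
	if (inode->i_sync)
		return "move to b_more_io (busy, I_SYNC set)";

	int wrote = inode->dirty_pages < CHUNK ? inode->dirty_pages : CHUNK;
	inode->dirty_pages -= wrote;

	if (inode->dirty_pages)
		return "partially written: tail of b_more_io";
	/*
	 * Fully written; if it were redirtied meanwhile it would go to the
	 * tail of b_dirty with dirtied_when reset to "now".
	 */
	return "clean (or b_dirty tail with fresh dirtied_when if redirtied)";
}

int main(void)
{
	struct inode inodes[] = {
		{ 1, 10,   0, false },	/* big expired file, e.g. a dd target */
		{ 2,  2,   0, true  },	/* expired but locked by another writer */
		{ 3,  6, 100, false },	/* dirtied recently */
	};
	long now = 120;

	for (int i = 0; i < 3; i++)
		printf("inode %d -> %s\n", inodes[i].ino,
		       writeback_one(&inodes[i], now, true));
	return 0;
}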

									Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-20 19:00                   ` Jan Kara
@ 2009-09-21  3:04                     ` Wu Fengguang
  2009-09-21  5:35                       ` Wu Fengguang
  2009-09-21 12:42                       ` Jan Kara
  0 siblings, 2 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-21  3:04 UTC (permalink / raw)
  To: Jan Kara
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm

On Mon, Sep 21, 2009 at 03:00:06AM +0800, Jan Kara wrote:
> On Sat 19-09-09 23:03:51, Wu Fengguang wrote:
> > On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> > > On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > > > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > > > 
> > > > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > > > and hope to get things done in this merge window.
> > > > > > 
> > > > > > Did you have some chance to get more work done on the your writeback
> > > > > > patches?
> > > > > 
> > > > > Sorry for the delay, I'm now testing the patches with commands
> > > > > 
> > > > >  cp /dev/zero /mnt/test/zero0 &
> > > > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > > > 
> > > > > and the attached debug patch.
> > > > > 
> > > > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > > > in the traces, which could slow down the inode writeback significantly.
> > > > 
> > > > FYI, it's this redirty_tail() called in writeback_single_inode():
> > > > 
> > > >                         /*
> > > >                          * Someone redirtied the inode while were writing back
> > > >                          * the pages.
> > > >                          */
> > > >                         redirty_tail(inode);
> > > 
> > > Hmm, this looks like an old fashioned problem get blew up by the
> > > 128MB MAX_WRITEBACK_PAGES.
> > > 
> > > The inode was redirtied by the busy cp/dd processes. Now it takes much
> > > more time to sync 128MB, so that a heavy dirtier can easily redirty
> > > the inode in that time window.
> > > 
> > > One single invocation of redirty_tail() could hold up the writeback of
> > > current inode for up to 30 seconds.
> > 
> > It seems that this patch helps. However I'm afraid it's too late to
> > risk merging such kind of patches now..
>   Fenguang, could we maybe write down how the logic should look like
> and then look at the code and modify it as needed to fit the logic?
> Because I couldn't find a compact description of the logic anywhere
> in the code.

Good idea. It makes sense to write something down in Documentation/
or embed it as code comments.

>   Here is how I'd imaging the writeout logic should work:
> We would have just two lists - b_dirty and b_more_io. Both would be
> ordered by dirtied_when.

Andrew has a very good description for the dirty/io/more_io queues:

http://lkml.org/lkml/2006/2/7/5

| So the protocol would be:
|
| s_io: contains expired and non-expired dirty inodes, with expired ones at
| the head.  Unexpired ones (at least) are in time order.
|
| s_more_io: contains dirty expired inodes which haven't been fully written. 
| Ordering doesn't matter (unless someone goes and changes
| dirty_expire_centisecs - but as long as we don't do anything really bad in
| response to this we'll be OK).
|
| s_dirty: contains expired and non-expired dirty inodes.  The non-expired
| ones are in time-of-dirtying order.

Since then s_io was changed to hold only _expired_ dirty inodes at the
beginning of a full scan. It serves as a bounded set of dirty inodes,
so that once a full scan of it is finished, writeback can go on to
the next superblock, and old dirty files' writeback won't be delayed
indefinitely by newly dirtied files pouring in.

It seems that the boundary could also be provided by some
older_than_this timestamp, so removing b_io is possible,
at least for this purpose.
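
As a sketch, the bound that b_io provides today could come down to a
single timestamp test at scan time. The stub struct and function name
below are made up for illustration; the open-coded comparison is just
the kernel's time_before_eq() expanded by hand:

#include <stdbool.h>

struct inode_stub {
	unsigned long dirtied_when;	/* jiffies when first dirtied */
};

/*
 * Only pick up inodes dirtied at or before the scan's cut-off; inodes
 * dirtied later wait for the next scan, which is the bound that the
 * b_io staging list enforces today.
 */
static bool expired_for_this_scan(const struct inode_stub *inode,
				  unsigned long older_than_this)
{
	return (long)(inode->dirtied_when - older_than_this) <= 0;
}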

>   A thread doing WB_SYNC_ALL writeback will just walk the list and cleanup
> everything (we should be resistant against livelocks because we stop at
> inode which has been dirtied after the sync has started).

Yes, that would mean

- older_than_this=now     for WB_SYNC_ALL
- older_than_this=now-30s for WB_SYNC_NONE
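
Roughly like the fragment below (an illustration only, not a patch: the
else branch mirrors how the current kupdate path already computes its
cut-off, while the WB_SYNC_ALL branch just spells out the first bullet):

	unsigned long oldest_jif;

	if (wbc->sync_mode == WB_SYNC_ALL)
		oldest_jif = jiffies;	/* everything dirtied before "now" */
	else
		oldest_jif = jiffies -
				msecs_to_jiffies(dirty_expire_interval * 10);
	wbc->older_than_this = &oldest_jif;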

>   A thread doing WB_SYNC_NONE writeback will start walking the list. If the
> inode has I_SYNC set, it puts it on b_more_io. Otherwise it takes I_SYNC
> and writes as much as it finds necessary from the first inode. If it
> stopped before it wrote everything, it puts the inode at the end of
> b_more_io.

Agreed. The current code is doing that, and it should be reasonably easy
to reuse the code path for both WB_SYNC_NONE and WB_SYNC_ALL, right?

> If it wrote everything (writeback_index cycled or scanned the
> whole range) but inode is dirty, it puts the inode at the end of b_dirty
> and resets dirtied_when to the current time. Then it continues with the
> next inode.

Agreed. I think it makes sense to reset dirtied_when (thus delay 30s)
if an inode still has dirty pages when we have finished a full scan of
it, in order to
- prevent pointless writeback IO of overwritten pages
- somehow throttle IO for busy inodes

>   kupdate style writeback stops scanning dirty list when dirtied_when is
> new enough. Then if b_more_io is nonempty, it splices it into the beginning
> of the dirty list and restarts.

Right.

>   Other types of writeback splice b_more_io to b_dirty when b_dirty gets
> empty. pdflush style writeback writes until we drop below background dirty
> limit. Other kinds of writeback (throttled threads, writeback submitted by
> filesystem itself) write while nr_to_write > 0.

I'd propose to always check older_than_this. For non-kupdate sync, it
still makes sense to give some priority to expired inodes (generally
it's suboptimal to sync those dirtied-just-now inodes). That is, to
sync expired inodes first if there are any.

>   If we didn't write anything during the b_dirty scan, we wait until I_SYNC
> of the first inode on b_more_io gets cleared before starting the next scan.
>   Does this look reasonably complete and cover all the cases?

What about the congested case?

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21  3:04                     ` Wu Fengguang
@ 2009-09-21  5:35                       ` Wu Fengguang
  2009-09-21  9:53                         ` Wu Fengguang
  2009-09-21 12:42                       ` Jan Kara
  1 sibling, 1 reply; 52+ messages in thread
From: Wu Fengguang @ 2009-09-21  5:35 UTC (permalink / raw)
  To: Jan Kara
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm

On Mon, Sep 21, 2009 at 11:04:02AM +0800, Wu Fengguang wrote:
> On Mon, Sep 21, 2009 at 03:00:06AM +0800, Jan Kara wrote:
> > On Sat 19-09-09 23:03:51, Wu Fengguang wrote:
> > > On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> > > > On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > > > > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > > > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > > > > 
> > > > > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > > > > and hope to get things done in this merge window.
> > > > > > > 
> > > > > > > Did you have some chance to get more work done on the your writeback
> > > > > > > patches?
> > > > > > 
> > > > > > Sorry for the delay, I'm now testing the patches with commands
> > > > > > 
> > > > > >  cp /dev/zero /mnt/test/zero0 &
> > > > > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > > > > 
> > > > > > and the attached debug patch.
> > > > > > 
> > > > > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > > > > in the traces, which could slow down the inode writeback significantly.
> > > > > 
> > > > > FYI, it's this redirty_tail() called in writeback_single_inode():
> > > > > 
> > > > >                         /*
> > > > >                          * Someone redirtied the inode while were writing back
> > > > >                          * the pages.
> > > > >                          */
> > > > >                         redirty_tail(inode);
> > > > 
> > > > Hmm, this looks like an old fashioned problem get blew up by the
> > > > 128MB MAX_WRITEBACK_PAGES.
> > > > 
> > > > The inode was redirtied by the busy cp/dd processes. Now it takes much
> > > > more time to sync 128MB, so that a heavy dirtier can easily redirty
> > > > the inode in that time window.
> > > > 
> > > > One single invocation of redirty_tail() could hold up the writeback of
> > > > current inode for up to 30 seconds.
> > > 
> > > It seems that this patch helps. However I'm afraid it's too late to
> > > risk merging such kind of patches now..
> >   Fenguang, could we maybe write down how the logic should look like
> > and then look at the code and modify it as needed to fit the logic?
> > Because I couldn't find a compact description of the logic anywhere
> > in the code.
> 
> Good idea. It makes sense to write something down in Documentation/
> or embedded as code comments.
> 
> >   Here is how I'd imaging the writeout logic should work:
> > We would have just two lists - b_dirty and b_more_io. Both would be
> > ordered by dirtied_when.
> 
> Andrew has a very good description for the dirty/io/more_io queues:
> 
> http://lkml.org/lkml/2006/2/7/5
> 
> | So the protocol would be:
> |
> | s_io: contains expired and non-expired dirty inodes, with expired ones at
> | the head.  Unexpired ones (at least) are in time order.
> |
> | s_more_io: contains dirty expired inodes which haven't been fully written. 
> | Ordering doesn't matter (unless someone goes and changes
> | dirty_expire_centisecs - but as long as we don't do anything really bad in
> | response to this we'll be OK).
> |
> | s_dirty: contains expired and non-expired dirty inodes.  The non-expired
> | ones are in time-of-dirtying order.
> 
> Since then s_io was changed to hold only _expired_ dirty inodes at the
> beginning of a full scan. It serves as a bounded set of dirty inodes.
> So that when finished a full scan of it, the writeback can go on to
> the next superblock, and old dirty files' writeback won't be delayed
> infinitely by poring in newly dirty files.
> 
> It seems that the boundary could also be provided by some
> older_than_this timestamp. So removal of b_io is possible
> at least on this purpose.

Yeah, this is a scratch patch to remove b_io; I see no obvious
difficulties in doing so.

Thanks,
Fengguang
---
 fs/btrfs/extent_io.c        |    2 -
 fs/fs-writeback.c           |   65 +++++++++-------------------------
 include/linux/backing-dev.h |    2 -
 include/linux/writeback.h   |    4 +-
 mm/backing-dev.c            |    1 
 mm/page-writeback.c         |    1 
 6 files changed, 21 insertions(+), 54 deletions(-)

--- linux.orig/fs/fs-writeback.c	2009-09-21 13:12:56.000000000 +0800
+++ linux/fs/fs-writeback.c	2009-09-21 13:12:57.000000000 +0800
@@ -284,7 +284,7 @@ static void redirty_tail(struct inode *i
 }
 
 /*
- * requeue inode for re-scanning after bdi->b_io list is exhausted.
+ * requeue inode for re-scanning.
  */
 static void requeue_io(struct inode *inode)
 {
@@ -317,32 +317,6 @@ static bool inode_dirtied_after(struct i
 	return ret;
 }
 
-/*
- * Move expired dirty inodes from @delaying_queue to @dispatch_queue.
- */
-static void move_expired_inodes(struct list_head *delaying_queue,
-			       struct list_head *dispatch_queue,
-				unsigned long *older_than_this)
-{
-	while (!list_empty(delaying_queue)) {
-		struct inode *inode = list_entry(delaying_queue->prev,
-						struct inode, i_list);
-		if (older_than_this &&
-		    inode_dirtied_after(inode, *older_than_this))
-			break;
-		list_move(&inode->i_list, dispatch_queue);
-	}
-}
-
-/*
- * Queue all expired dirty inodes for io, eldest first.
- */
-static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
-{
-	list_splice_init(&wb->b_more_io, wb->b_io.prev);
-	move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
-}
-
 static int write_inode(struct inode *inode, int sync)
 {
 	if (inode->i_sb->s_op->write_inode && !is_bad_inode(inode))
@@ -399,7 +373,7 @@ writeback_single_inode(struct inode *ino
 		 * writeback can proceed with the other inodes on s_io.
 		 *
 		 * We'll have another go at writing back this inode when we
-		 * completed a full scan of b_io.
+		 * completed a full scan.
 		 */
 		if (!wait) {
 			requeue_io(inode);
@@ -540,11 +514,11 @@ static void writeback_inodes_wb(struct b
 
 	spin_lock(&inode_lock);
 
-	if (!wbc->for_kupdate || list_empty(&wb->b_io))
-		queue_io(wb, wbc->older_than_this);
+	if (list_empty(&wb->b_dirty))
+		list_splice_init(&wb->b_more_io, &wb->b_dirty);
 
-	while (!list_empty(&wb->b_io)) {
-		struct inode *inode = list_entry(wb->b_io.prev,
+	while (!list_empty(&wb->b_dirty)) {
+		struct inode *inode = list_entry(wb->b_dirty.prev,
 						struct inode, i_list);
 		long pages_skipped;
 
@@ -590,8 +564,12 @@ static void writeback_inodes_wb(struct b
 		 * Was this inode dirtied after sync_sb_inodes was called?
 		 * This keeps sync from extra jobs and livelock.
 		 */
-		if (inode_dirtied_after(inode, start))
-			break;
+		if (inode_dirtied_after(inode, wbc->older_than_this)) {
+			if (list_empty(&wb->b_more_io))
+				break;
+			list_splice_init(&wb->b_more_io, wb->b_dirty.prev);
+			continue;
+		}
 
 		if (pin_sb_for_writeback(wbc, inode)) {
 			requeue_io(inode);
@@ -623,7 +601,7 @@ static void writeback_inodes_wb(struct b
 	}
 
 	spin_unlock(&inode_lock);
-	/* Leave any unwritten inodes on b_io */
+	/* Leave any unwritten inodes on b_dirty */
 }
 
 void writeback_inodes_wbc(struct writeback_control *wbc)
@@ -674,18 +652,18 @@ static long wb_writeback(struct bdi_writ
 		.bdi			= wb->bdi,
 		.sb			= args->sb,
 		.sync_mode		= args->sync_mode,
-		.older_than_this	= NULL,
 		.for_kupdate		= args->for_kupdate,
 		.range_cyclic		= args->range_cyclic,
 	};
 	unsigned long oldest_jif;
 	long wrote = 0;
 
-	if (wbc.for_kupdate) {
-		wbc.older_than_this = &oldest_jif;
-		oldest_jif = jiffies -
+	if (wbc.for_kupdate)
+		wbc.older_than_this = jiffies -
 				msecs_to_jiffies(dirty_expire_interval * 10);
-	}
+	else
+		wbc.older_than_this = jiffies;
+
 	if (!wbc.range_cyclic) {
 		wbc.range_start = 0;
 		wbc.range_end = LLONG_MAX;
@@ -1004,7 +982,7 @@ void __mark_inode_dirty(struct inode *in
 			goto out;
 
 		/*
-		 * If the inode was already on b_dirty/b_io/b_more_io, don't
+		 * If the inode was already on b_dirty/b_more_io, don't
 		 * reposition it (that would break b_dirty time-ordering).
 		 */
 		if (!was_dirty) {
@@ -1041,11 +1019,6 @@ EXPORT_SYMBOL(__mark_inode_dirty);
  * This function assumes that the blockdev superblock's inodes are backed by
  * a variety of queues, so all inodes are searched.  For other superblocks,
  * assume that all inodes are backed by the same queue.
- *
- * The inodes to be written are parked on bdi->b_io.  They are moved back onto
- * bdi->b_dirty as they are selected for writing.  This way, none can be missed
- * on the writer throttling path, and we get decent balancing between many
- * throttled threads: we don't want them all piling up on inode_sync_wait.
  */
 static void wait_sb_inodes(struct super_block *sb)
 {
--- linux.orig/fs/btrfs/extent_io.c	2009-09-21 13:12:24.000000000 +0800
+++ linux/fs/btrfs/extent_io.c	2009-09-21 13:12:57.000000000 +0800
@@ -2467,7 +2467,6 @@ int extent_write_full_page(struct extent
 	struct writeback_control wbc_writepages = {
 		.bdi		= wbc->bdi,
 		.sync_mode	= wbc->sync_mode,
-		.older_than_this = NULL,
 		.nr_to_write	= 64,
 		.range_start	= page_offset(page) + PAGE_CACHE_SIZE,
 		.range_end	= (loff_t)-1,
@@ -2501,7 +2500,6 @@ int extent_write_locked_range(struct ext
 	struct writeback_control wbc_writepages = {
 		.bdi		= inode->i_mapping->backing_dev_info,
 		.sync_mode	= mode,
-		.older_than_this = NULL,
 		.nr_to_write	= nr_pages * 2,
 		.range_start	= start,
 		.range_end	= end + 1,
--- linux.orig/include/linux/writeback.h	2009-09-21 13:12:24.000000000 +0800
+++ linux/include/linux/writeback.h	2009-09-21 13:12:57.000000000 +0800
@@ -32,8 +32,8 @@ struct writeback_control {
 	struct super_block *sb;		/* if !NULL, only write inodes from
 					   this super_block */
 	enum writeback_sync_modes sync_mode;
-	unsigned long *older_than_this;	/* If !NULL, only write back inodes
-					   older than this */
+	unsigned long older_than_this;	/* only write back inodes older than
+					   this */
 	long nr_to_write;		/* Write this many pages, and decrement
 					   this for each page written */
 	long pages_skipped;		/* Pages which were not written */
--- linux.orig/mm/backing-dev.c	2009-09-21 13:12:24.000000000 +0800
+++ linux/mm/backing-dev.c	2009-09-21 13:12:57.000000000 +0800
@@ -333,7 +333,6 @@ static void bdi_flush_io(struct backing_
 	struct writeback_control wbc = {
 		.bdi			= bdi,
 		.sync_mode		= WB_SYNC_NONE,
-		.older_than_this	= NULL,
 		.range_cyclic		= 1,
 		.nr_to_write		= 1024,
 	};
--- linux.orig/mm/page-writeback.c	2009-09-21 13:12:56.000000000 +0800
+++ linux/mm/page-writeback.c	2009-09-21 13:12:57.000000000 +0800
@@ -492,7 +492,6 @@ static void balance_dirty_pages(struct a
 		struct writeback_control wbc = {
 			.bdi		= bdi,
 			.sync_mode	= WB_SYNC_NONE,
-			.older_than_this = NULL,
 			.nr_to_write	= write_chunk,
 			.range_cyclic	= 1,
 		};
--- linux.orig/include/linux/backing-dev.h	2009-09-21 13:12:24.000000000 +0800
+++ linux/include/linux/backing-dev.h	2009-09-21 13:12:57.000000000 +0800
@@ -53,7 +53,6 @@ struct bdi_writeback {
 
 	struct task_struct	*task;		/* writeback task */
 	struct list_head	b_dirty;	/* dirty inodes */
-	struct list_head	b_io;		/* parked for writeback */
 	struct list_head	b_more_io;	/* parked for more writeback */
 };
 
@@ -111,7 +110,6 @@ extern struct list_head bdi_list;
 static inline int wb_has_dirty_io(struct bdi_writeback *wb)
 {
 	return !list_empty(&wb->b_dirty) ||
-	       !list_empty(&wb->b_io) ||
 	       !list_empty(&wb->b_more_io);
 }
 

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21  5:35                       ` Wu Fengguang
@ 2009-09-21  9:53                         ` Wu Fengguang
  2009-09-21 10:02                           ` Jan Kara
  0 siblings, 1 reply; 52+ messages in thread
From: Wu Fengguang @ 2009-09-21  9:53 UTC (permalink / raw)
  To: Jan Kara
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm

On Mon, Sep 21, 2009 at 01:35:46PM +0800, Wu Fengguang wrote:
> On Mon, Sep 21, 2009 at 11:04:02AM +0800, Wu Fengguang wrote:
> > On Mon, Sep 21, 2009 at 03:00:06AM +0800, Jan Kara wrote:
> > > On Sat 19-09-09 23:03:51, Wu Fengguang wrote:
> > > > On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> > > > > On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > > > > > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > > > > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > > > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > > > > > 
> > > > > > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > > > > > and hope to get things done in this merge window.
> > > > > > > > 
> > > > > > > > Did you have some chance to get more work done on the your writeback
> > > > > > > > patches?
> > > > > > > 
> > > > > > > Sorry for the delay, I'm now testing the patches with commands
> > > > > > > 
> > > > > > >  cp /dev/zero /mnt/test/zero0 &
> > > > > > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > > > > > 
> > > > > > > and the attached debug patch.
> > > > > > > 
> > > > > > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > > > > > in the traces, which could slow down the inode writeback significantly.
> > > > > > 
> > > > > > FYI, it's this redirty_tail() called in writeback_single_inode():
> > > > > > 
> > > > > >                         /*
> > > > > >                          * Someone redirtied the inode while were writing back
> > > > > >                          * the pages.
> > > > > >                          */
> > > > > >                         redirty_tail(inode);
> > > > > 
> > > > > Hmm, this looks like an old fashioned problem get blew up by the
> > > > > 128MB MAX_WRITEBACK_PAGES.
> > > > > 
> > > > > The inode was redirtied by the busy cp/dd processes. Now it takes much
> > > > > more time to sync 128MB, so that a heavy dirtier can easily redirty
> > > > > the inode in that time window.
> > > > > 
> > > > > One single invocation of redirty_tail() could hold up the writeback of
> > > > > current inode for up to 30 seconds.
> > > > 
> > > > It seems that this patch helps. However I'm afraid it's too late to
> > > > risk merging such kind of patches now..
> > >   Fenguang, could we maybe write down how the logic should look like
> > > and then look at the code and modify it as needed to fit the logic?
> > > Because I couldn't find a compact description of the logic anywhere
> > > in the code.
> > 
> > Good idea. It makes sense to write something down in Documentation/
> > or embedded as code comments.
> > 
> > >   Here is how I'd imaging the writeout logic should work:
> > > We would have just two lists - b_dirty and b_more_io. Both would be
> > > ordered by dirtied_when.
> > 
> > Andrew has a very good description for the dirty/io/more_io queues:
> > 
> > http://lkml.org/lkml/2006/2/7/5
> > 
> > | So the protocol would be:
> > |
> > | s_io: contains expired and non-expired dirty inodes, with expired ones at
> > | the head.  Unexpired ones (at least) are in time order.
> > |
> > | s_more_io: contains dirty expired inodes which haven't been fully written. 
> > | Ordering doesn't matter (unless someone goes and changes
> > | dirty_expire_centisecs - but as long as we don't do anything really bad in
> > | response to this we'll be OK).
> > |
> > | s_dirty: contains expired and non-expired dirty inodes.  The non-expired
> > | ones are in time-of-dirtying order.
> > 
> > Since then s_io was changed to hold only _expired_ dirty inodes at the
> > beginning of a full scan. It serves as a bounded set of dirty inodes.
> > So that when finished a full scan of it, the writeback can go on to
> > the next superblock, and old dirty files' writeback won't be delayed
> > infinitely by poring in newly dirty files.
> > 
> > It seems that the boundary could also be provided by some
> > older_than_this timestamp. So removal of b_io is possible
> > at least on this purpose.
> 
> Yeah, this is a scratch patch to remove b_io, I see no obvious
> difficulties in doing so.

However, the removal of b_io is not that good for possible b_dirty
optimizations. For example, we could use a tree for b_dirty for more
flexible ordering, or introduce a b_dirty_atime list to hold the inodes
dirtied by atime updates and expire them much more lazily:

                       expire > 30m
        b_dirty_atime --------------+
                                    |
                                    +--- b_io ---> writeback
                                    |
        b_dirty --------------------+
                       expire > 30s

Thanks,
Fengguang

> ---
>  fs/btrfs/extent_io.c        |    2 -
>  fs/fs-writeback.c           |   65 +++++++++-------------------------
>  include/linux/backing-dev.h |    2 -
>  include/linux/writeback.h   |    4 +-
>  mm/backing-dev.c            |    1 
>  mm/page-writeback.c         |    1 
>  6 files changed, 21 insertions(+), 54 deletions(-)
> 
> --- linux.orig/fs/fs-writeback.c	2009-09-21 13:12:56.000000000 +0800
> +++ linux/fs/fs-writeback.c	2009-09-21 13:12:57.000000000 +0800
> @@ -284,7 +284,7 @@ static void redirty_tail(struct inode *i
>  }
>  
>  /*
> - * requeue inode for re-scanning after bdi->b_io list is exhausted.
> + * requeue inode for re-scanning.
>   */
>  static void requeue_io(struct inode *inode)
>  {
> @@ -317,32 +317,6 @@ static bool inode_dirtied_after(struct i
>  	return ret;
>  }
>  
> -/*
> - * Move expired dirty inodes from @delaying_queue to @dispatch_queue.
> - */
> -static void move_expired_inodes(struct list_head *delaying_queue,
> -			       struct list_head *dispatch_queue,
> -				unsigned long *older_than_this)
> -{
> -	while (!list_empty(delaying_queue)) {
> -		struct inode *inode = list_entry(delaying_queue->prev,
> -						struct inode, i_list);
> -		if (older_than_this &&
> -		    inode_dirtied_after(inode, *older_than_this))
> -			break;
> -		list_move(&inode->i_list, dispatch_queue);
> -	}
> -}
> -
> -/*
> - * Queue all expired dirty inodes for io, eldest first.
> - */
> -static void queue_io(struct bdi_writeback *wb, unsigned long *older_than_this)
> -{
> -	list_splice_init(&wb->b_more_io, wb->b_io.prev);
> -	move_expired_inodes(&wb->b_dirty, &wb->b_io, older_than_this);
> -}
> -
>  static int write_inode(struct inode *inode, int sync)
>  {
>  	if (inode->i_sb->s_op->write_inode && !is_bad_inode(inode))
> @@ -399,7 +373,7 @@ writeback_single_inode(struct inode *ino
>  		 * writeback can proceed with the other inodes on s_io.
>  		 *
>  		 * We'll have another go at writing back this inode when we
> -		 * completed a full scan of b_io.
> +		 * completed a full scan.
>  		 */
>  		if (!wait) {
>  			requeue_io(inode);
> @@ -540,11 +514,11 @@ static void writeback_inodes_wb(struct b
>  
>  	spin_lock(&inode_lock);
>  
> -	if (!wbc->for_kupdate || list_empty(&wb->b_io))
> -		queue_io(wb, wbc->older_than_this);
> +	if (list_empty(&wb->b_dirty))
> +		list_splice_init(&wb->b_more_io, &wb->b_dirty);
>  
> -	while (!list_empty(&wb->b_io)) {
> -		struct inode *inode = list_entry(wb->b_io.prev,
> +	while (!list_empty(&wb->b_dirty)) {
> +		struct inode *inode = list_entry(wb->b_dirty.prev,
>  						struct inode, i_list);
>  		long pages_skipped;
>  
> @@ -590,8 +564,12 @@ static void writeback_inodes_wb(struct b
>  		 * Was this inode dirtied after sync_sb_inodes was called?
>  		 * This keeps sync from extra jobs and livelock.
>  		 */
> -		if (inode_dirtied_after(inode, start))
> -			break;
> +		if (inode_dirtied_after(inode, wbc->older_than_this)) {
> +			if (list_empty(&wb->b_more_io))
> +				break;
> +			list_splice_init(&wb->b_more_io, wb->b_dirty.prev);
> +			continue;
> +		}
>  
>  		if (pin_sb_for_writeback(wbc, inode)) {
>  			requeue_io(inode);
> @@ -623,7 +601,7 @@ static void writeback_inodes_wb(struct b
>  	}
>  
>  	spin_unlock(&inode_lock);
> -	/* Leave any unwritten inodes on b_io */
> +	/* Leave any unwritten inodes on b_dirty */
>  }
>  
>  void writeback_inodes_wbc(struct writeback_control *wbc)
> @@ -674,18 +652,18 @@ static long wb_writeback(struct bdi_writ
>  		.bdi			= wb->bdi,
>  		.sb			= args->sb,
>  		.sync_mode		= args->sync_mode,
> -		.older_than_this	= NULL,
>  		.for_kupdate		= args->for_kupdate,
>  		.range_cyclic		= args->range_cyclic,
>  	};
>  	unsigned long oldest_jif;
>  	long wrote = 0;
>  
> -	if (wbc.for_kupdate) {
> -		wbc.older_than_this = &oldest_jif;
> -		oldest_jif = jiffies -
> +	if (wbc.for_kupdate)
> +		wbc.older_than_this = jiffies -
>  				msecs_to_jiffies(dirty_expire_interval * 10);
> -	}
> +	else
> +		wbc.older_than_this = jiffies;
> +
>  	if (!wbc.range_cyclic) {
>  		wbc.range_start = 0;
>  		wbc.range_end = LLONG_MAX;
> @@ -1004,7 +982,7 @@ void __mark_inode_dirty(struct inode *in
>  			goto out;
>  
>  		/*
> -		 * If the inode was already on b_dirty/b_io/b_more_io, don't
> +		 * If the inode was already on b_dirty/b_more_io, don't
>  		 * reposition it (that would break b_dirty time-ordering).
>  		 */
>  		if (!was_dirty) {
> @@ -1041,11 +1019,6 @@ EXPORT_SYMBOL(__mark_inode_dirty);
>   * This function assumes that the blockdev superblock's inodes are backed by
>   * a variety of queues, so all inodes are searched.  For other superblocks,
>   * assume that all inodes are backed by the same queue.
> - *
> - * The inodes to be written are parked on bdi->b_io.  They are moved back onto
> - * bdi->b_dirty as they are selected for writing.  This way, none can be missed
> - * on the writer throttling path, and we get decent balancing between many
> - * throttled threads: we don't want them all piling up on inode_sync_wait.
>   */
>  static void wait_sb_inodes(struct super_block *sb)
>  {
> --- linux.orig/fs/btrfs/extent_io.c	2009-09-21 13:12:24.000000000 +0800
> +++ linux/fs/btrfs/extent_io.c	2009-09-21 13:12:57.000000000 +0800
> @@ -2467,7 +2467,6 @@ int extent_write_full_page(struct extent
>  	struct writeback_control wbc_writepages = {
>  		.bdi		= wbc->bdi,
>  		.sync_mode	= wbc->sync_mode,
> -		.older_than_this = NULL,
>  		.nr_to_write	= 64,
>  		.range_start	= page_offset(page) + PAGE_CACHE_SIZE,
>  		.range_end	= (loff_t)-1,
> @@ -2501,7 +2500,6 @@ int extent_write_locked_range(struct ext
>  	struct writeback_control wbc_writepages = {
>  		.bdi		= inode->i_mapping->backing_dev_info,
>  		.sync_mode	= mode,
> -		.older_than_this = NULL,
>  		.nr_to_write	= nr_pages * 2,
>  		.range_start	= start,
>  		.range_end	= end + 1,
> --- linux.orig/include/linux/writeback.h	2009-09-21 13:12:24.000000000 +0800
> +++ linux/include/linux/writeback.h	2009-09-21 13:12:57.000000000 +0800
> @@ -32,8 +32,8 @@ struct writeback_control {
>  	struct super_block *sb;		/* if !NULL, only write inodes from
>  					   this super_block */
>  	enum writeback_sync_modes sync_mode;
> -	unsigned long *older_than_this;	/* If !NULL, only write back inodes
> -					   older than this */
> +	unsigned long older_than_this;	/* only write back inodes older than
> +					   this */
>  	long nr_to_write;		/* Write this many pages, and decrement
>  					   this for each page written */
>  	long pages_skipped;		/* Pages which were not written */
> --- linux.orig/mm/backing-dev.c	2009-09-21 13:12:24.000000000 +0800
> +++ linux/mm/backing-dev.c	2009-09-21 13:12:57.000000000 +0800
> @@ -333,7 +333,6 @@ static void bdi_flush_io(struct backing_
>  	struct writeback_control wbc = {
>  		.bdi			= bdi,
>  		.sync_mode		= WB_SYNC_NONE,
> -		.older_than_this	= NULL,
>  		.range_cyclic		= 1,
>  		.nr_to_write		= 1024,
>  	};
> --- linux.orig/mm/page-writeback.c	2009-09-21 13:12:56.000000000 +0800
> +++ linux/mm/page-writeback.c	2009-09-21 13:12:57.000000000 +0800
> @@ -492,7 +492,6 @@ static void balance_dirty_pages(struct a
>  		struct writeback_control wbc = {
>  			.bdi		= bdi,
>  			.sync_mode	= WB_SYNC_NONE,
> -			.older_than_this = NULL,
>  			.nr_to_write	= write_chunk,
>  			.range_cyclic	= 1,
>  		};
> --- linux.orig/include/linux/backing-dev.h	2009-09-21 13:12:24.000000000 +0800
> +++ linux/include/linux/backing-dev.h	2009-09-21 13:12:57.000000000 +0800
> @@ -53,7 +53,6 @@ struct bdi_writeback {
>  
>  	struct task_struct	*task;		/* writeback task */
>  	struct list_head	b_dirty;	/* dirty inodes */
> -	struct list_head	b_io;		/* parked for writeback */
>  	struct list_head	b_more_io;	/* parked for more writeback */
>  };
>  
> @@ -111,7 +110,6 @@ extern struct list_head bdi_list;
>  static inline int wb_has_dirty_io(struct bdi_writeback *wb)
>  {
>  	return !list_empty(&wb->b_dirty) ||
> -	       !list_empty(&wb->b_io) ||
>  	       !list_empty(&wb->b_more_io);
>  }
>  

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21  9:53                         ` Wu Fengguang
@ 2009-09-21 10:02                           ` Jan Kara
  2009-09-21 10:18                             ` Wu Fengguang
  0 siblings, 1 reply; 52+ messages in thread
From: Jan Kara @ 2009-09-21 10:02 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, chris.mason, akpm

On Mon 21-09-09 17:53:26, Wu Fengguang wrote:
> On Mon, Sep 21, 2009 at 01:35:46PM +0800, Wu Fengguang wrote:
> > On Mon, Sep 21, 2009 at 11:04:02AM +0800, Wu Fengguang wrote:
> > > On Mon, Sep 21, 2009 at 03:00:06AM +0800, Jan Kara wrote:
> > > > On Sat 19-09-09 23:03:51, Wu Fengguang wrote:
> > > > > On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> > > > > > On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > > > > > > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > > > > > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > > > > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > > > > > > 
> > > > > > > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > > > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > > > > > > and hope to get things done in this merge window.
> > > > > > > > > 
> > > > > > > > > Did you have some chance to get more work done on the your writeback
> > > > > > > > > patches?
> > > > > > > > 
> > > > > > > > Sorry for the delay, I'm now testing the patches with commands
> > > > > > > > 
> > > > > > > >  cp /dev/zero /mnt/test/zero0 &
> > > > > > > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > > > > > > 
> > > > > > > > and the attached debug patch.
> > > > > > > > 
> > > > > > > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > > > > > > in the traces, which could slow down the inode writeback significantly.
> > > > > > > 
> > > > > > > FYI, it's this redirty_tail() called in writeback_single_inode():
> > > > > > > 
> > > > > > >                         /*
> > > > > > >                          * Someone redirtied the inode while were writing back
> > > > > > >                          * the pages.
> > > > > > >                          */
> > > > > > >                         redirty_tail(inode);
> > > > > > 
> > > > > > Hmm, this looks like an old fashioned problem get blew up by the
> > > > > > 128MB MAX_WRITEBACK_PAGES.
> > > > > > 
> > > > > > The inode was redirtied by the busy cp/dd processes. Now it takes much
> > > > > > more time to sync 128MB, so that a heavy dirtier can easily redirty
> > > > > > the inode in that time window.
> > > > > > 
> > > > > > One single invocation of redirty_tail() could hold up the writeback of
> > > > > > current inode for up to 30 seconds.
> > > > > 
> > > > > It seems that this patch helps. However I'm afraid it's too late to
> > > > > risk merging such kind of patches now..
> > > >   Fenguang, could we maybe write down how the logic should look like
> > > > and then look at the code and modify it as needed to fit the logic?
> > > > Because I couldn't find a compact description of the logic anywhere
> > > > in the code.
> > > 
> > > Good idea. It makes sense to write something down in Documentation/
> > > or embedded as code comments.
> > > 
> > > >   Here is how I'd imaging the writeout logic should work:
> > > > We would have just two lists - b_dirty and b_more_io. Both would be
> > > > ordered by dirtied_when.
> > > 
> > > Andrew has a very good description for the dirty/io/more_io queues:
> > > 
> > > http://lkml.org/lkml/2006/2/7/5
> > > 
> > > | So the protocol would be:
> > > |
> > > | s_io: contains expired and non-expired dirty inodes, with expired ones at
> > > | the head.  Unexpired ones (at least) are in time order.
> > > |
> > > | s_more_io: contains dirty expired inodes which haven't been fully written. 
> > > | Ordering doesn't matter (unless someone goes and changes
> > > | dirty_expire_centisecs - but as long as we don't do anything really bad in
> > > | response to this we'll be OK).
> > > |
> > > | s_dirty: contains expired and non-expired dirty inodes.  The non-expired
> > > | ones are in time-of-dirtying order.
> > > 
> > > Since then s_io was changed to hold only _expired_ dirty inodes at the
> > > beginning of a full scan. It serves as a bounded set of dirty inodes.
> > > So that when finished a full scan of it, the writeback can go on to
> > > the next superblock, and old dirty files' writeback won't be delayed
> > > infinitely by poring in newly dirty files.
> > > 
> > > It seems that the boundary could also be provided by some
> > > older_than_this timestamp. So removal of b_io is possible
> > > at least on this purpose.
> > 
> > Yeah, this is a scratch patch to remove b_io, I see no obvious
> > difficulties in doing so.
> 
> However the removal of b_io is not that good for possible b_dirty
> optimizations. For example, we could use a tree for b_dirty for more
> flexible ordering. Or can introduce a b_dirty_atime to hold the inodes
> dirtied by atime and expire them much lazily:
> 
>                        expire > 30m
>         b_dirty_atime --------------+
>                                     |
>                                     +--- b_io ---> writeback
>                                     |
>         b_dirty --------------------+
>                        expire > 30s
  Well, you can still implement the above without needing the b_io list. The
kupdate-style writeback can, for example, check the first inode in both
lists and process the inode which has been expired for the longer time.
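
Roughly, such a check could look like the untested sketch below; the
b_dirty_atime list, the EXPIRE_* constants and pick_expired_inode() are
hypothetical names taken from the diagram above, not from any posted patch:

#define EXPIRE_DATA	(30 * HZ)		/* ~30s, like dirty_expire */
#define EXPIRE_ATIME	(30 * 60 * HZ)		/* ~30m for atime-only dirt */

static struct inode *pick_expired_inode(struct bdi_writeback *wb)
{
	unsigned long now = jiffies;
	struct inode *data = NULL, *atime = NULL;

	if (!list_empty(&wb->b_dirty))
		data = list_entry(wb->b_dirty.prev, struct inode, i_list);
	if (!list_empty(&wb->b_dirty_atime))
		atime = list_entry(wb->b_dirty_atime.prev,
				   struct inode, i_list);

	/* drop candidates that have not expired yet */
	if (data && time_after(data->dirtied_when, now - EXPIRE_DATA))
		data = NULL;
	if (atime && time_after(atime->dirtied_when, now - EXPIRE_ATIME))
		atime = NULL;

	if (!data || !atime)
		return data ? data : atime;

	/* both expired: process the one that expired earlier */
	if (time_before(data->dirtied_when + EXPIRE_DATA,
			atime->dirtied_when + EXPIRE_ATIME))
		return data;
	return atime;
}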

									Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21 10:02                           ` Jan Kara
@ 2009-09-21 10:18                             ` Wu Fengguang
  0 siblings, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-21 10:18 UTC (permalink / raw)
  To: Jan Kara
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm

On Mon, Sep 21, 2009 at 06:02:42PM +0800, Jan Kara wrote:
> On Mon 21-09-09 17:53:26, Wu Fengguang wrote:
> > On Mon, Sep 21, 2009 at 01:35:46PM +0800, Wu Fengguang wrote:
> > > > >   Here is how I'd imaging the writeout logic should work:
> > > > > We would have just two lists - b_dirty and b_more_io. Both would be
> > > > > ordered by dirtied_when.
> > > > 
> > > > Andrew has a very good description for the dirty/io/more_io queues:
> > > > 
> > > > http://lkml.org/lkml/2006/2/7/5
> > > > 
> > > > | So the protocol would be:
> > > > |
> > > > | s_io: contains expired and non-expired dirty inodes, with expired ones at
> > > > | the head.  Unexpired ones (at least) are in time order.
> > > > |
> > > > | s_more_io: contains dirty expired inodes which haven't been fully written. 
> > > > | Ordering doesn't matter (unless someone goes and changes
> > > > | dirty_expire_centisecs - but as long as we don't do anything really bad in
> > > > | response to this we'll be OK).
> > > > |
> > > > | s_dirty: contains expired and non-expired dirty inodes.  The non-expired
> > > > | ones are in time-of-dirtying order.
> > > > 
> > > > Since then s_io was changed to hold only _expired_ dirty inodes at the
> > > > beginning of a full scan. It serves as a bounded set of dirty inodes.
> > > > So that when finished a full scan of it, the writeback can go on to
> > > > the next superblock, and old dirty files' writeback won't be delayed
> > > > infinitely by poring in newly dirty files.
> > > > 
> > > > It seems that the boundary could also be provided by some
> > > > older_than_this timestamp. So removal of b_io is possible
> > > > at least on this purpose.
> > > 
> > > Yeah, this is a scratch patch to remove b_io, I see no obvious
> > > difficulties in doing so.
> > 
> > However the removal of b_io is not that good for possible b_dirty
> > optimizations. For example, we could use a tree for b_dirty for more
> > flexible ordering. Or can introduce a b_dirty_atime to hold the inodes
> > dirtied by atime and expire them much lazily:
> > 
> >                        expire > 30m
> >         b_dirty_atime --------------+
> >                                     |
> >                                     +--- b_io ---> writeback
> >                                     |
> >         b_dirty --------------------+
> >                        expire > 30s
>   Well, you can still implement the above without a need for b_io list. The
> kupdate-style writeback can for example check the first inode in both lists
> and process the inode which is expired for a longer time.

OK. Given that relatime is the default now, such an optimization seems less
useful anyway.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21  3:04                     ` Wu Fengguang
  2009-09-21  5:35                       ` Wu Fengguang
@ 2009-09-21 12:42                       ` Jan Kara
  2009-09-21 15:12                         ` Wu Fengguang
  1 sibling, 1 reply; 52+ messages in thread
From: Jan Kara @ 2009-09-21 12:42 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, chris.mason, akpm

On Mon 21-09-09 11:04:02, Wu Fengguang wrote:
> On Mon, Sep 21, 2009 at 03:00:06AM +0800, Jan Kara wrote:
> > On Sat 19-09-09 23:03:51, Wu Fengguang wrote:
...
> >   Fenguang, could we maybe write down how the logic should look like
> > and then look at the code and modify it as needed to fit the logic?
> > Because I couldn't find a compact description of the logic anywhere
> > in the code.
> 
> Good idea. It makes sense to write something down in Documentation/
> or embedded as code comments.
  Yes, that would be useful. I'd probably vote for comments in the code.

> >   Here is how I'd imaging the writeout logic should work:
> > We would have just two lists - b_dirty and b_more_io. Both would be
> > ordered by dirtied_when.
> 
> Andrew has a very good description for the dirty/io/more_io queues:
> 
> http://lkml.org/lkml/2006/2/7/5
> 
> | So the protocol would be:
> |
> | s_io: contains expired and non-expired dirty inodes, with expired ones at
> | the head.  Unexpired ones (at least) are in time order.
> |
> | s_more_io: contains dirty expired inodes which haven't been fully written. 
> | Ordering doesn't matter (unless someone goes and changes
> | dirty_expire_centisecs - but as long as we don't do anything really bad in
> | response to this we'll be OK).
> |
> | s_dirty: contains expired and non-expired dirty inodes.  The non-expired
> | ones are in time-of-dirtying order.
> 
> Since then s_io was changed to hold only _expired_ dirty inodes at the
> beginning of a full scan. It serves as a bounded set of dirty inodes.
> So that when finished a full scan of it, the writeback can go on to
> the next superblock, and old dirty files' writeback won't be delayed
> infinitely by poring in newly dirty files.
> 
> It seems that the boundary could also be provided by some
> older_than_this timestamp. So removal of b_io is possible
> at least on this purpose.
> 
> >   A thread doing WB_SYNC_ALL writeback will just walk the list and cleanup
> > everything (we should be resistant against livelocks because we stop at
> > inode which has been dirtied after the sync has started).
> 
> Yes, that would mean
> 
> - older_than_this=now     for WB_SYNC_ALL
> - older_than_this=now-30s for WB_SYNC_NONE
  Exactly.

> >   A thread doing WB_SYNC_NONE writeback will start walking the list. If the
> > inode has I_SYNC set, it puts it on b_more_io. Otherwise it takes I_SYNC
> > and writes as much as it finds necessary from the first inode. If it
> > stopped before it wrote everything, it puts the inode at the end of
> > b_more_io.
> 
> Agreed. The current code is doing that, and it is reasonably easy to
> reuse the code path for WB_SYNC_NONE/WB_SYNC_ALL?
  I'm not sure we do exactly that. The I_SYNC part is fine. But looking at
the code in writeback_single_inode(), we put the inode on b_more_io only if
wbc->for_kupdate is true and wbc->nr_to_write is <= 0. Otherwise we put the
inode at the tail of the dirty list.

> > If it wrote everything (writeback_index cycled or scanned the
> > whole range) but inode is dirty, it puts the inode at the end of b_dirty
> > and resets dirtied_when to the current time. Then it continues with the
> > next inode.
> 
> Agreed. I think it makes sense to reset dirtied_when (thus delay 30s)
> if an inode still has dirty pages when we have finished a full scan of
> it, in order to
> - prevent pointless writeback IO of overwritten pages
> - somehow throttle IO for busy inodes
  OK, but currently the logic is subtly different. It does:
If the inode wasn't redirtied during writeback and still has dirty pages,
queue it somewhere (requeue_io or redirty_tail, depending on other things).
If the inode was redirtied, do redirty_tail.
  Probably, the current logic is safer in the sense that kupdate-style
writeback cannot take forever when an inode is permanently redirtied. In my
proposed logic, kupdate writeback would run forever (which makes some
sense as well but probably isn't really convenient).
  Also, if we skip some pages (call redirty_page_for_writepage()) the inode
will get redirtied as well, and hence we'll put the inode at the back of
the dirty list, thus delaying further writeback by 30s. Again, this makes
some sense (it prevents busylooping while waiting for a page to get prepared
for proper writeback) although I'm not sure it's always desirable. For now
we should probably just document this somewhere.
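
In code, the current decision boils down to roughly this (a paraphrase of
the behaviour described above, not a verbatim copy of
writeback_single_inode()):

	if (inode->i_state & I_DIRTY) {
		/* someone redirtied the inode while we were writing it back */
		redirty_tail(inode);		/* back to b_dirty, ~30s delay */
	} else if (mapping_tagged(inode->i_mapping, PAGECACHE_TAG_DIRTY)) {
		/* we stopped early and left dirty pages behind */
		if (wbc->for_kupdate && wbc->nr_to_write <= 0)
			requeue_io(inode);	/* park on b_more_io for this scan */
		else
			redirty_tail(inode);	/* otherwise delay it like a redirty */
	}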

> >   kupdate style writeback stops scanning dirty list when dirtied_when is
> > new enough. Then if b_more_io is nonempty, it splices it into the beginning
> > of the dirty list and restarts.
> 
> Right.
  But currently, we don't do the splicing. We just set more_io and return
from writeback_inodes_wb(). Should that be changed?

> >   Other types of writeback splice b_more_io to b_dirty when b_dirty gets
> > empty. pdflush style writeback writes until we drop below background dirty
> > limit. Other kinds of writeback (throttled threads, writeback submitted by
> > filesystem itself) write while nr_to_write > 0.
> 
> I'd propose to always check older_than_this. For non-kupdate sync, it
> still makes sense to give some priority to expired inodes (generally
> it's suboptimal to sync those dirtied-just-now inodes). That is, to
> sync expired inodes first if there are any.
  Well, the expired inodes are handled with priority because they are at
the beginning of the list. So we write them first, and only if writing them
was not enough do we proceed with inodes that were dirtied later. You are
right that we can get to later-dirtied inodes even if there is still dirty
data in the old ones, because we just refuse to write too much from a single
inode. So maybe it would be good to splice b_more_io to b_dirty already
when we get to an unexpired inode in the b_dirty list. The good thing is it
won't livelock on a few expired inodes even in the case where new data are
written to one of them while we work on the others - the other inodes on the
s_dirty list will eventually expire and from that moment on, we include them
in a fair pdflush writeback.

> >   If we didn't write anything during the b_dirty scan, we wait until I_SYNC
> > of the first inode on b_more_io gets cleared before starting the next scan.
> >   Does this look reasonably complete and cover all the cases?
> 
> What about the congested case?
  With per-bdi threads, we just have to make sure we don't busyloop when
the device is congested. Just blocking is perfectly fine since the thread
has nothing to do anyway. The question is how normal processes that are
forced to do writeback, or that do writeback from the page allocation path,
should behave. There it probably makes sense to bail out from the writeback
and let the caller decide. That seems to be implemented just fine by the
current code, but you are right, I forgot about it. Probably, we should
just splice b_more_io to the b_dirty list before bailing out because of
congestion...

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-19  4:26               ` Wu Fengguang
  2009-09-19 15:03                 ` Wu Fengguang
  2009-09-19 15:03                 ` Wu Fengguang
@ 2009-09-21 13:53                 ` Chris Mason
  2009-09-22 10:13                   ` Wu Fengguang
  2009-09-22 10:13                   ` Wu Fengguang
  2 siblings, 2 replies; 52+ messages in thread
From: Chris Mason @ 2009-09-21 13:53 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, akpm, jack

On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > 
> > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > and hope to get things done in this merge window.
> > > > 
> > > > Did you have some chance to get more work done on the your writeback
> > > > patches?
> > > 
> > > Sorry for the delay, I'm now testing the patches with commands
> > > 
> > >  cp /dev/zero /mnt/test/zero0 &
> > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > 
> > > and the attached debug patch.
> > > 
> > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > in the traces, which could slow down the inode writeback significantly.
> > 
> > FYI, it's this redirty_tail() called in writeback_single_inode():
> > 
> >                         /*
> >                          * Someone redirtied the inode while were writing back
> >                          * the pages.
> >                          */
> >                         redirty_tail(inode);
> 
> Hmm, this looks like an old fashioned problem get blew up by the
> 128MB MAX_WRITEBACK_PAGES.

I'm starting to rethink the 128MB MAX_WRITEBACK_PAGES.  128MB is the
right answer for the flusher thread on sequential IO, but definitely not
on random IO.  We don't want the flusher to get bogged down on random
writeback and start ignoring every other file.

My btrfs performance branch has long had a change to bump the
nr_to_write up based on the size of the delayed allocation that we're
doing.  It helped, but not as much as I really expected it to, and a
similar patch from Christoph for XFS was good but not great.

It turns out the problem is in write_cache_pages.  It processes a whole
pagevec at a time, something like this:

while(!done) {
	for each page in the pagevec {
		writepage()
		if (wbc->nr_to_write <= 0)
			done = 1;
	}
}

If the filesystem decides to bump nr_to_write to cover a whole
extent (or a max reasonable size), the new value of nr_to_write may
be ignored if nr_to_write had already gone down to zero.

I fixed btrfs to recheck nr_to_write every time, and the results are
much smoother.  This is what it looks like to write out all the .o files
in the kernel.

http://oss.oracle.com/~mason/seekwatcher/btrfs-nr-to-write.png

In this graph, Btrfs is writing the full extent or 8192 pages, whichever
is smaller.  The write_cache_pages change is here, but it is local to
the btrfs copy of write_cache_pages:

http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commit;h=f85d7d6c8f2ad4a86a1f4f4e3791f36dede2fa76
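
In rough terms the change amounts to re-reading nr_to_write after every
page, so a value bumped by the filesystem mid-pagevec is honoured. This is
only a simplified sketch in the shape of write_cache_pages() (mapping,
writepage and data are the usual locals/arguments of that function); the
actual btrfs commit differs in detail:

struct pagevec pvec;
pgoff_t index = wbc->range_start >> PAGE_CACHE_SHIFT;
int done = 0;
int ret = 0;

pagevec_init(&pvec, 0);
while (!done && pagevec_lookup_tag(&pvec, mapping, &index,
				   PAGECACHE_TAG_DIRTY, PAGEVEC_SIZE)) {
	unsigned i;

	for (i = 0; i < pagevec_count(&pvec); i++) {
		struct page *page = pvec.pages[i];

		ret = (*writepage)(page, wbc, data);

		/*
		 * Re-check the live value of nr_to_write after every
		 * page: ->writepage() may just have bumped it to cover
		 * a whole extent, and that hint must not be thrown
		 * away because an earlier page drove it to zero.
		 */
		if (wbc->nr_to_write <= 0) {
			done = 1;
			break;
		}
	}
	pagevec_release(&pvec);
	cond_resched();
}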

I'd rather see a more formal use of hints from the FS about efficient IO
than a blanket increase of the writeback max.  It's more work than
bumping a single #define, but even with the #define at 1GB, we're going
to end up splitting extents and seeking when nr_to_write does finally
get down to zero.

Btrfs currently only bumps the nr_to_write when it creates the extent, I
need to change it to also bump it when it finds an existing extent.

-chris


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21 12:42                       ` Jan Kara
@ 2009-09-21 15:12                         ` Wu Fengguang
  2009-09-21 16:08                           ` Jan Kara
  0 siblings, 1 reply; 52+ messages in thread
From: Wu Fengguang @ 2009-09-21 15:12 UTC (permalink / raw)
  To: Jan Kara
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm

On Mon, Sep 21, 2009 at 08:42:51PM +0800, Jan Kara wrote:
> On Mon 21-09-09 11:04:02, Wu Fengguang wrote:
> > On Mon, Sep 21, 2009 at 03:00:06AM +0800, Jan Kara wrote:
> > > On Sat 19-09-09 23:03:51, Wu Fengguang wrote:
> ...
> > >   Fenguang, could we maybe write down how the logic should look like
> > > and then look at the code and modify it as needed to fit the logic?
> > > Because I couldn't find a compact description of the logic anywhere
> > > in the code.
> > 
> > Good idea. It makes sense to write something down in Documentation/
> > or embedded as code comments.
>   Yes, that would be useful. I'd probably vote for comments in the code.

OK.

> > >   Here is how I'd imaging the writeout logic should work:
> > > We would have just two lists - b_dirty and b_more_io. Both would be
> > > ordered by dirtied_when.
> > 
> > Andrew has a very good description for the dirty/io/more_io queues:
> > 
> > http://lkml.org/lkml/2006/2/7/5
> > 
> > | So the protocol would be:
> > |
> > | s_io: contains expired and non-expired dirty inodes, with expired ones at
> > | the head.  Unexpired ones (at least) are in time order.
> > |
> > | s_more_io: contains dirty expired inodes which haven't been fully written. 
> > | Ordering doesn't matter (unless someone goes and changes
> > | dirty_expire_centisecs - but as long as we don't do anything really bad in
> > | response to this we'll be OK).
> > |
> > | s_dirty: contains expired and non-expired dirty inodes.  The non-expired
> > | ones are in time-of-dirtying order.
> > 
> > Since then s_io was changed to hold only _expired_ dirty inodes at the
> > beginning of a full scan. It serves as a bounded set of dirty inodes.
> > So that when finished a full scan of it, the writeback can go on to
> > the next superblock, and old dirty files' writeback won't be delayed
> > infinitely by poring in newly dirty files.
> > 
> > It seems that the boundary could also be provided by some
> > older_than_this timestamp. So removal of b_io is possible
> > at least on this purpose.
> > 
> > >   A thread doing WB_SYNC_ALL writeback will just walk the list and cleanup
> > > everything (we should be resistant against livelocks because we stop at
> > > inode which has been dirtied after the sync has started).
> > 
> > Yes, that would mean
> > 
> > - older_than_this=now     for WB_SYNC_ALL
> > - older_than_this=now-30s for WB_SYNC_NONE
>   Exactly.
> 
> > >   A thread doing WB_SYNC_NONE writeback will start walking the list. If the
> > > inode has I_SYNC set, it puts it on b_more_io. Otherwise it takes I_SYNC
> > > and writes as much as it finds necessary from the first inode. If it
> > > stopped before it wrote everything, it puts the inode at the end of
> > > b_more_io.
> > 
> > Agreed. The current code is doing that, and it is reasonably easy to
> > reuse the code path for WB_SYNC_NONE/WB_SYNC_ALL?
>   I'm not sure we do exactly that. The I_SYNC part is fine. But looking at
> the code in writeback_single_inode(), we put inode at b_more_io only if
> wbc->for_kupdate is true and wbc->nr_to_write is <= 0. Otherwise we put the
> inode at the tail of dirty list.

Ah yes. I actually have posted a patch to unify the !for_kupdate
and for_kupdate cases: http://patchwork.kernel.org/patch/46399/

For the (wbc->nr_to_write <= 0) case, we have to delay the inode for
some time because it somehow cannot be written for now, hence moving
it back to b_dirty. Otherwise we could busy loop.

> > > If it wrote everything (writeback_index cycled or scanned the
> > > whole range) but inode is dirty, it puts the inode at the end of b_dirty
> > > and resets dirtied_when to the current time. Then it continues with the
> > > next inode.
> > 
> > Agreed. I think it makes sense to reset dirtied_when (thus delay 30s)
> > if an inode still has dirty pages when we have finished a full scan of
> > it, in order to
> > - prevent pointless writeback IO of overwritten pages
> > - somehow throttle IO for busy inodes
>   OK, but currently the logic is subtly different. It does:
> If the inode wasn't redirtied during writeback and still has dirty pages,
> queue somewhere (requeue_io or redirty_tail depending on other things).
> If the inode was redirtied, do redirty_tail.

Yup.

>   Probably, the current logic is safer in the sence that kupdate-style
> writeback cannot take forever when inode is permanently redirtied. In my
> proposed logic, kupdate writeback would run forever (which makes some
> sence as well but probably isn't really convenient).

Yes, the current code is safer. Running kupdate forever for an inode that
is being busily overwritten is obviously undesirable behavior.

>   Also if we skip some pages (call redirty_page_for_writepage()) the inode
> will get redirtied as well and hence we'll put the inode at the back of
> dirty list and thus delaying further writeback by 30s. Again, this makes
> some sence (prevents busyloop waiting for a page to get prepared for a
> proper writeback) although I'm not sure it's always desirable. For now
> we should probably just document this somewhere.

Agreed. Again, the current code is safe, but may be delaying too much.
I have a patch that adds another queue, b_more_io_wait, which delays
the inode for a shorter 5s (or whatever). We could try that if the 30s
delay is reported to be unacceptable in some real workloads.
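
The idea is roughly the sketch below; the queue name and the
requeue_io_wait() helper are illustrative only, the actual patch may differ:

/*
 * Hypothetical third queue on struct bdi_writeback:
 *	struct list_head b_more_io_wait;	-- parked, retry after ~5s
 */
static void requeue_io_wait(struct bdi_writeback *wb, struct inode *inode)
{
	/*
	 * Park the inode for a short while; the flusher splices
	 * b_more_io_wait back into b_dirty once roughly 5 seconds have
	 * passed, instead of the full 30s redirty_tail() delay.
	 */
	list_move(&inode->i_list, &wb->b_more_io_wait);
}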

> > >   kupdate style writeback stops scanning dirty list when dirtied_when is
> > > new enough. Then if b_more_io is nonempty, it splices it into the beginning
> > > of the dirty list and restarts.
> > 
> > Right.
>   But currently, we don't do the splicing. We just set more_io and return
> from writeback_inodes_wb(). Should that be changed?

Yes, in fact I changed that in the b_io removal patch, to do the
splice and retry.

Returning was correct and required behavior back then, to give other
superblocks a chance. Now with per-bdi writeback, we don't have to
worry about that, so it's safe to just splice and restart.

> > >   Other types of writeback splice b_more_io to b_dirty when b_dirty gets
> > > empty. pdflush style writeback writes until we drop below background dirty
> > > limit. Other kinds of writeback (throttled threads, writeback submitted by
> > > filesystem itself) write while nr_to_write > 0.
> > 
> > I'd propose to always check older_than_this. For non-kupdate sync, it
> > still makes sense to give some priority to expired inodes (generally
> > it's suboptimal to sync those dirtied-just-now inodes). That is, to
> > sync expired inodes first if there are any.
>   Well, the expired inodes are handled with priority because they are at
> the beginning of the list. So we write them first and only if writing them
> was not enough, we proceed with inodes that were dirtied later. You are

The list order is not enough for large files :)
One newly dirtied file; one 100MB expired dirty file. The current code
will sync only 4MB of the expired file and then go on to sync the newly
dirtied file, and _never_ return to serve the 100MB file as long as
new inodes keep being dirtied, which is not optimal.

> right that we can get to later dirtied inodes even if there are still dirty
> data in the old ones because we just refuse to write too much from a single
> inode. So maybe it would be good to splice b_more_io to b_dirty already
> when we get to unexpired inode in b_dirty list. The good thing is it won't
> livelock on a few expired inodes even in the case new data are written to
> one of them while we work on the others - the other inodes on s_dirty list
> will eventually expire and from that moment on, we include them in a fair
> pdflush writeback.

Right. I modified wb_writeback() to first use 

        wbc.older_than_this = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);

unconditionally, and then if no more writeback is possible, relax it
for !kupdate:

        wbc.older_than_this = jiffies;
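
In the style of wb_writeback(), that looks roughly like the sketch below
(simplified; the "made progress" test here is cruder than in the real
patch):

	wbc.older_than_this = jiffies -
			msecs_to_jiffies(dirty_expire_interval * 10);

	for (;;) {
		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
		writeback_inodes_wb(wb, &wbc);

		if (wbc.nr_to_write < MAX_WRITEBACK_PAGES)
			continue;	/* wrote some expired inodes, go again */

		if (!args->for_kupdate && wbc.older_than_this != jiffies) {
			/* expired inodes are done: widen the cutoff once */
			wbc.older_than_this = jiffies;
			continue;
		}
		break;			/* nothing left to write for now */
	}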

> > >   If we didn't write anything during the b_dirty scan, we wait until I_SYNC
> > > of the first inode on b_more_io gets cleared before starting the next scan.
> > >   Does this look reasonably complete and cover all the cases?
> > 
> > What about the congested case?
>   With per-bdi threads, we just have to make sure we don't busyloop when
> the device is congested. Just blocking is perfectly fine since the thread
> has nothing to do anyway.

Right.

> The question is how normal processes that are forced to do writeback
> or page allocation doing writeback should behave.  There probably it
> makes sence to bail out from the writeback and let the caller
> decide. That seems to be implemented by the current code just fine
> but you are right I forgot about it.

No, the current code is not fine for the pageout and migrate paths, which
set nonblocking=1, can return on congestion and then busy loop (this is
being discussed in another thread with Chris Mason).

> Probably, we should just splice b_more_io to b_dirty list before
> bailing out because of congestion...

I'd vote for putting the inode back at the tail of b_dirty, so that it
will be served once congestion stops: it's not the inode's fault :)

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21 15:12                         ` Wu Fengguang
@ 2009-09-21 16:08                           ` Jan Kara
  2009-09-22  5:10                             ` Wu Fengguang
  0 siblings, 1 reply; 52+ messages in thread
From: Jan Kara @ 2009-09-21 16:08 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, chris.mason, akpm

On Mon 21-09-09 23:12:42, Wu Fengguang wrote:
> On Mon, Sep 21, 2009 at 08:42:51PM +0800, Jan Kara wrote:
> > > >   Here is how I'd imaging the writeout logic should work:
> > > > We would have just two lists - b_dirty and b_more_io. Both would be
> > > > ordered by dirtied_when.
> > > 
> > > Andrew has a very good description for the dirty/io/more_io queues:
> > > 
> > > http://lkml.org/lkml/2006/2/7/5
> > > 
> > > | So the protocol would be:
> > > |
> > > | s_io: contains expired and non-expired dirty inodes, with expired ones at
> > > | the head.  Unexpired ones (at least) are in time order.
> > > |
> > > | s_more_io: contains dirty expired inodes which haven't been fully written. 
> > > | Ordering doesn't matter (unless someone goes and changes
> > > | dirty_expire_centisecs - but as long as we don't do anything really bad in
> > > | response to this we'll be OK).
> > > |
> > > | s_dirty: contains expired and non-expired dirty inodes.  The non-expired
> > > | ones are in time-of-dirtying order.
> > > 
> > > Since then s_io was changed to hold only _expired_ dirty inodes at the
> > > beginning of a full scan. It serves as a bounded set of dirty inodes.
> > > So that when finished a full scan of it, the writeback can go on to
> > > the next superblock, and old dirty files' writeback won't be delayed
> > > infinitely by poring in newly dirty files.
> > > 
> > > It seems that the boundary could also be provided by some
> > > older_than_this timestamp. So removal of b_io is possible
> > > at least on this purpose.
> > > 
> > > >   A thread doing WB_SYNC_ALL writeback will just walk the list and cleanup
> > > > everything (we should be resistant against livelocks because we stop at
> > > > inode which has been dirtied after the sync has started).
> > > 
> > > Yes, that would mean
> > > 
> > > - older_than_this=now     for WB_SYNC_ALL
> > > - older_than_this=now-30s for WB_SYNC_NONE
> >   Exactly.
> > 
> > > >   A thread doing WB_SYNC_NONE writeback will start walking the list. If the
> > > > inode has I_SYNC set, it puts it on b_more_io. Otherwise it takes I_SYNC
> > > > and writes as much as it finds necessary from the first inode. If it
> > > > stopped before it wrote everything, it puts the inode at the end of
> > > > b_more_io.
> > > 
> > > Agreed. The current code is doing that, and it is reasonably easy to
> > > reuse the code path for WB_SYNC_NONE/WB_SYNC_ALL?
> >   I'm not sure we do exactly that. The I_SYNC part is fine. But looking at
> > the code in writeback_single_inode(), we put inode at b_more_io only if
> > wbc->for_kupdate is true and wbc->nr_to_write is <= 0. Otherwise we put the
> > inode at the tail of dirty list.
> 
> Ah yes. I actually have posted a patch to unify the !for_kupdate
> and for_kupdate cases: http://patchwork.kernel.org/patch/46399/
  Yes, this patch is basically what I had in mind :).

> For the (wbc->nr_to_write <= 0) case, we have to delay the inode for
> some time because it somehow cannot be written for now, hence moving
> back it to b_dirty. Otherwise could busy loop.
  Probably you mean the wbc->nr_to_write > 0 case. With that I agree.

...
> > > >   kupdate style writeback stops scanning dirty list when dirtied_when is
> > > > new enough. Then if b_more_io is nonempty, it splices it into the beginning
> > > > of the dirty list and restarts.
> > > 
> > > Right.
> >   But currently, we don't do the splicing. We just set more_io and return
> > from writeback_inodes_wb(). Should that be changed?
> 
> Yes, in fact I changed that in the b_io removal patch, to do the
> splice and retry.
  Ah, OK. I've missed that.

> It was correct and required behavior to return to give other
> superblocks a chance. Now with per-bdi writeback, we don't have to
> worry about that, so it's safe to just splice and restart.
> 
> > > >   Other types of writeback splice b_more_io to b_dirty when b_dirty gets
> > > > empty. pdflush style writeback writes until we drop below background dirty
> > > > limit. Other kinds of writeback (throttled threads, writeback submitted by
> > > > filesystem itself) write while nr_to_write > 0.
> > > 
> > > I'd propose to always check older_than_this. For non-kupdate sync, it
> > > still makes sense to give some priority to expired inodes (generally
> > > it's suboptimal to sync those dirtied-just-now inodes). That is, to
> > > sync expired inodes first if there are any.
> >   Well, the expired inodes are handled with priority because they are at
> > the beginning of the list. So we write them first and only if writing them
> > was not enough, we proceed with inodes that were dirtied later. You are
> 
> The list order is not enough for large files :)
> One newly dirtied file; one 100MB expired dirty file. Current code
> will sync only 4MB of the expired file and go on to sync the newly
> dirty file, and _never_ return to serve the 100MB file as long as
> there are new inodes dirtied, which is not optimal.
  True.

> > right that we can get to later dirtied inodes even if there are still dirty
> > data in the old ones because we just refuse to write too much from a single
> > inode. So maybe it would be good to splice b_more_io to b_dirty already
> > when we get to unexpired inode in b_dirty list. The good thing is it won't
> > livelock on a few expired inodes even in the case new data are written to
> > one of them while we work on the others - the other inodes on s_dirty list
> > will eventually expire and from that moment on, we include them in a fair
> > pdflush writeback.
> 
> Right. I modified wb_writeback() to first use 
> 
>         wbc.older_than_this = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
> 
> unconditionally, and then if no more writeback is possible, relax it
> for !kupdate:
> 
>         wbc.older_than_this = jiffies;
  I agree with this. I'd just set wbc.older_than_this each time we restart
scanning the b_dirty list. Otherwise, if there are a few large expired inodes
which are often written (but not often enough to hit us right at the moment
when we write pages of that inode), we would just cycle through these inodes
and never get to the other ones...
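
Something like the sketch below, which only illustrates the idea (it is not
the actual patch; the loop structure and the stop condition are assumptions):

        /*
         * Refresh the expire cutoff on every restart of the b_dirty scan,
         * so newly expired inodes are picked up and we don't cycle forever
         * on the same few big expired inodes.
         */
        for (;;) {
                wbc.older_than_this = jiffies -
                        msecs_to_jiffies(dirty_expire_interval * 10);

                writeback_inodes_wb(wb, &wbc);

                if (wbc.nr_to_write <= 0 || list_empty(&wb->b_dirty))
                        break;
        }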

> > > >   If we didn't write anything during the b_dirty scan, we wait until I_SYNC
> > > > of the first inode on b_more_io gets cleared before starting the next scan.
> > > >   Does this look reasonably complete and cover all the cases?
> > > 
> > > What about the congested case?
> >   With per-bdi threads, we just have to make sure we don't busyloop when
> > the device is congested. Just blocking is perfectly fine since the thread
> > has nothing to do anyway.
> 
> Right.
> 
> > The question is how normal processes that are forced to do writeback
> > or page allocation doing writeback should behave.  There probably it
> > makes sence to bail out from the writeback and let the caller
> > decide. That seems to be implemented by the current code just fine
> > but you are right I forgot about it.
> 
> No current code is not fine for pageout and migrate path, which sets
> nonblocking=1, could return on congestion and then busy loop. (which
> is being discussed in another thread with Mason.)
  Really? Looking at the pageout and migrate code, we call the ->writepage()
function directly, so congestion handling doesn't really matter. But I'll have
a look at the thread with Chris Mason.

> > Probably, we should just splice b_more_io to b_dirty list before
> > bailing out because of congestion...
> 
> I'd vote for putting back the inode to tail of b_dirty, so that it
> will be served once congestion stops: it's not the inode's fault :)
  I'd rather say to the 'head', exactly because it's not the inode's fault and
so we want to start with the same inode next time.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21 16:08                           ` Jan Kara
@ 2009-09-22  5:10                             ` Wu Fengguang
  0 siblings, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-22  5:10 UTC (permalink / raw)
  To: Jan Kara
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, chris.mason, akpm

On Tue, Sep 22, 2009 at 12:08:20AM +0800, Jan Kara wrote:
> On Mon 21-09-09 23:12:42, Wu Fengguang wrote:
> > On Mon, Sep 21, 2009 at 08:42:51PM +0800, Jan Kara wrote:
> > > > >   Here is how I'd imaging the writeout logic should work:
> > > > > We would have just two lists - b_dirty and b_more_io. Both would be
> > > > > ordered by dirtied_when.
> > > > 
> > > > Andrew has a very good description for the dirty/io/more_io queues:
> > > > 
> > > > http://lkml.org/lkml/2006/2/7/5
> > > > 
> > > > | So the protocol would be:
> > > > |
> > > > | s_io: contains expired and non-expired dirty inodes, with expired ones at
> > > > | the head.  Unexpired ones (at least) are in time order.
> > > > |
> > > > | s_more_io: contains dirty expired inodes which haven't been fully written. 
> > > > | Ordering doesn't matter (unless someone goes and changes
> > > > | dirty_expire_centisecs - but as long as we don't do anything really bad in
> > > > | response to this we'll be OK).
> > > > |
> > > > | s_dirty: contains expired and non-expired dirty inodes.  The non-expired
> > > > | ones are in time-of-dirtying order.
> > > > 
> > > > Since then s_io was changed to hold only _expired_ dirty inodes at the
> > > > beginning of a full scan. It serves as a bounded set of dirty inodes.
> > > > So that when finished a full scan of it, the writeback can go on to
> > > > the next superblock, and old dirty files' writeback won't be delayed
> > > > infinitely by poring in newly dirty files.
> > > > 
> > > > It seems that the boundary could also be provided by some
> > > > older_than_this timestamp. So removal of b_io is possible
> > > > at least on this purpose.
> > > > 
> > > > >   A thread doing WB_SYNC_ALL writeback will just walk the list and cleanup
> > > > > everything (we should be resistant against livelocks because we stop at
> > > > > inode which has been dirtied after the sync has started).
> > > > 
> > > > Yes, that would mean
> > > > 
> > > > - older_than_this=now     for WB_SYNC_ALL
> > > > - older_than_this=now-30s for WB_SYNC_NONE
> > >   Exactly.
> > > 
> > > > >   A thread doing WB_SYNC_NONE writeback will start walking the list. If the
> > > > > inode has I_SYNC set, it puts it on b_more_io. Otherwise it takes I_SYNC
> > > > > and writes as much as it finds necessary from the first inode. If it
> > > > > stopped before it wrote everything, it puts the inode at the end of
> > > > > b_more_io.
> > > > 
> > > > Agreed. The current code is doing that, and it is reasonably easy to
> > > > reuse the code path for WB_SYNC_NONE/WB_SYNC_ALL?
> > >   I'm not sure we do exactly that. The I_SYNC part is fine. But looking at
> > > the code in writeback_single_inode(), we put inode at b_more_io only if
> > > wbc->for_kupdate is true and wbc->nr_to_write is <= 0. Otherwise we put the
> > > inode at the tail of dirty list.
> > 
> > Ah yes. I actually have posted a patch to unify the !for_kupdate
> > and for_kupdate cases: http://patchwork.kernel.org/patch/46399/
>   Yes, this patch is basically what I had in mind :).
> 
> > For the (wbc->nr_to_write <= 0) case, we have to delay the inode for
> > some time because it somehow cannot be written for now, hence moving
> > back it to b_dirty. Otherwise could busy loop.
>   Probably you mean wbc->nr_to_write > 0 case. With that I agree.

Ah yes!

> ...
> > > > >   kupdate style writeback stops scanning dirty list when dirtied_when is
> > > > > new enough. Then if b_more_io is nonempty, it splices it into the beginning
> > > > > of the dirty list and restarts.
> > > > 
> > > > Right.
> > >   But currently, we don't do the splicing. We just set more_io and return
> > > from writeback_inodes_wb(). Should that be changed?
> > 
> > Yes, in fact I changed that in the b_io removal patch, to do the
> > splice and retry.
>   Ah, OK. I've missed that.
> 
> > It was correct and required behavior to return to give other
> > superblocks a chance. Now with per-bdi writeback, we don't have to
> > worry about that, so it's safe to just splice and restart.
> > 
> > > > >   Other types of writeback splice b_more_io to b_dirty when b_dirty gets
> > > > > empty. pdflush style writeback writes until we drop below background dirty
> > > > > limit. Other kinds of writeback (throttled threads, writeback submitted by
> > > > > filesystem itself) write while nr_to_write > 0.
> > > > 
> > > > I'd propose to always check older_than_this. For non-kupdate sync, it
> > > > still makes sense to give some priority to expired inodes (generally
> > > > it's suboptimal to sync those dirtied-just-now inodes). That is, to
> > > > sync expired inodes first if there are any.
> > >   Well, the expired inodes are handled with priority because they are at
> > > the beginning of the list. So we write them first and only if writing them
> > > was not enough, we proceed with inodes that were dirtied later. You are
> > 
> > The list order is not enough for large files :)
> > One newly dirtied file; one 100MB expired dirty file. Current code
> > will sync only 4MB of the expired file and go on to sync the newly
> > dirty file, and _never_ return to serve the 100MB file as long as
> > there are new inodes dirtied, which is not optimal.
>   True.
> 
> > > right that we can get to later dirtied inodes even if there are still dirty
> > > data in the old ones because we just refuse to write too much from a single
> > > inode. So maybe it would be good to splice b_more_io to b_dirty already
> > > when we get to unexpired inode in b_dirty list. The good thing is it won't
> > > livelock on a few expired inodes even in the case new data are written to
> > > one of them while we work on the others - the other inodes on s_dirty list
> > > will eventually expire and from that moment on, we include them in a fair
> > > pdflush writeback.
> > 
> > Right. I modified wb_writeback() to first use 
> > 
> >         wbc.older_than_this = jiffies - msecs_to_jiffies(dirty_expire_interval * 10);
> > 
> > unconditionally, and then if no more writeback is possible, relax it
> > for !kupdate:
> > 
> >         wbc.older_than_this = jiffies;
>   I agree with this. I'd just set wbc.older_than_this each time we restart
> scanning of b_dirty list. Otherwise if there are a few large expired inodes
> which are often written (but not often enough to hit us right at the moment
> when we write pages of that inode) we would just cycle writing these inodes
> and never get to other inodes...

Good idea!

> > > > >   If we didn't write anything during the b_dirty scan, we wait until I_SYNC
> > > > > of the first inode on b_more_io gets cleared before starting the next scan.
> > > > >   Does this look reasonably complete and cover all the cases?
> > > > 
> > > > What about the congested case?
> > >   With per-bdi threads, we just have to make sure we don't busyloop when
> > > the device is congested. Just blocking is perfectly fine since the thread
> > > has nothing to do anyway.
> > 
> > Right.
> > 
> > > The question is how normal processes that are forced to do writeback
> > > or page allocation doing writeback should behave.  There probably it
> > > makes sence to bail out from the writeback and let the caller
> > > decide. That seems to be implemented by the current code just fine
> > > but you are right I forgot about it.
> > 
> > No current code is not fine for pageout and migrate path, which sets
> > nonblocking=1, could return on congestion and then busy loop. (which
> > is being discussed in another thread with Mason.)
>   Really? Looking at pageout and migrate code, we call directly ->writepage()
> function so congestion handling doesn't really matter. But I'll have a look
> at a thread with Chris Mason.

Ah yes! Sorry for the mistake: the vmscan livelock I worried about won't happen.

> > > Probably, we should just splice b_more_io to b_dirty list before
> > > bailing out because of congestion...
> > 
> > I'd vote for putting back the inode to tail of b_dirty, so that it
> > will be served once congestion stops: it's not the inode's fault :)
>   I'd rather say to 'head' exactly because it's not inode's fault and so we
> want to start with the same inode next time.

Yeah, I was thinking about the list head :)

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-21 13:53                 ` Chris Mason
  2009-09-22 10:13                   ` Wu Fengguang
@ 2009-09-22 10:13                   ` Wu Fengguang
  2009-09-22 11:30                     ` Jan Kara
  2009-09-22 11:30                     ` Chris Mason
  1 sibling, 2 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-22 10:13 UTC (permalink / raw)
  To: Chris Mason, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm, jack

On Mon, Sep 21, 2009 at 09:53:21PM +0800, Chris Mason wrote:
> On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> > On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > > 
> > > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > > and hope to get things done in this merge window.
> > > > > 
> > > > > Did you have some chance to get more work done on the your writeback
> > > > > patches?
> > > > 
> > > > Sorry for the delay, I'm now testing the patches with commands
> > > > 
> > > >  cp /dev/zero /mnt/test/zero0 &
> > > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > > 
> > > > and the attached debug patch.
> > > > 
> > > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > > in the traces, which could slow down the inode writeback significantly.
> > > 
> > > FYI, it's this redirty_tail() called in writeback_single_inode():
> > > 
> > >                         /*
> > >                          * Someone redirtied the inode while were writing back
> > >                          * the pages.
> > >                          */
> > >                         redirty_tail(inode);
> > 
> > Hmm, this looks like an old fashioned problem get blew up by the
> > 128MB MAX_WRITEBACK_PAGES.
> 
> I'm starting to rethink the 128MB MAX_WRITEBACK_PAGES.  128MB is the
> right answer for the flusher thread on sequential IO, but definitely not
> on random IO.  We don't want the flusher to get bogged down on random
> writeback and start ignoring every other file.

Hmm, I'd think a larger MAX_WRITEBACK_PAGES should never increase the
writeback randomness.

> My btrfs performance branch has long had a change to bump the
> nr_to_write up based on the size of the delayed allocation that we're
> doing.  It helped, but not as much as I really expected it too, and a
> similar patch from Christoph for XFS was good but not great.
> 
> It turns out the problem is in write_cache_pages.  It processes a whole
> pagevec at a time, something like this:
> 
> while(!done) {
> 	for each page in the pagegvec {
> 		writepage()
> 		if (wbc->nr_to_write <= 0)
> 			done = 1;
> 	}
> }
> 
> If the filesystem decides to bump nr_to_write to cover a whole
> extent (or a max reasonable size), the new value of nr_to_write may
> be ignored if nr_to_write had already gone done to zero.
> 
> I fixed btrfs to recheck nr_to_write every time, and the results are
> much smoother.  This is what it looks like to write out all the .o files
> in the kernel.
> 
> http://oss.oracle.com/~mason/seekwatcher/btrfs-nr-to-write.png
> 
> In this graph, Btrfs is writing the full extent or 8192 pages, whichever
> is smaller.  The write_cache_pages change is here, but it is local to
> the btrfs copy of write_cache_pages:
> 
> http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commit;h=f85d7d6c8f2ad4a86a1f4f4e3791f36dede2fa76

It seems you tried an upper limit of 32-64MB:

+               if (wbc->nr_to_write < delalloc_to_write) {
+                       int thresh = 8192;
+
+                       if (delalloc_to_write < thresh * 2)
+                               thresh = delalloc_to_write;
+                       wbc->nr_to_write = min_t(u64, delalloc_to_write,
+                                                thresh);
+               }

However, it is possible that btrfs bumps up nr_to_write for each inode,
so that the accumulated bump-ups are too large to be acceptable for
balance_dirty_pages().

And it's not always "bump-ups": nr_to_write could be decreased if it's
already a large value.

> I'd rather see a more formal use of hints from the FS about efficient IO
> than a blanket increase of the writeback max.  It's more work than
> bumping a single #define, but even with the #define at 1GB, we're going
> to end up splitting extents and seeking when nr_to_write does finally
> get down to zero.
> 
> Btrfs currently only bumps the nr_to_write when it creates the extent, I
> need to change it to also bump it when it finds an existing extent.

Yes, a more general solution would help. I'd like to propose one which
works the other way round. In brief,
(1) the VFS gives a large enough per-file writeback quota to btrfs;
(2) btrfs tells the VFS "here is a (seek) boundary, stop voluntarily"
    before exhausting the quota and being force stopped.

There will be two limits (the second one is new):

- total nr to write in one wb_writeback invocation
- _max_ nr to write per file (before switching to sync the next inode)

The per-invocation limit is useful for balance_dirty_pages().
The per-file number can be accumulated across successive wb_writeback
invocations and thus can be much larger (eg. 128MB) than the legacy
per-invocation number.

The file system will only see the per-file numbers. The "max" means
that if btrfs finds the current page to be the last page in the extent,
it could indicate this fact to the VFS by setting wbc->would_seek=1. The
VFS will then switch to writing the next inode.

The benefit of the early voluntary yield is that it reduces the possibility
of being force stopped halfway through an extent. The next time the VFS
returns to sync this inode, it will again be granted the full 128MB quota,
which should be enough to cover a big fresh extent.
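
To illustrate, here is a sketch of how a filesystem's ->writepages() might
use such an interface. It is only a sketch: wbc->would_seek does not exist
today, and the helper functions below are made up for illustration:

        /*
         * Hypothetical ->writepages() fragment.  The VFS hands this file a
         * large per-file quota (shown here reusing wbc->nr_to_write for
         * simplicity); the filesystem yields early at an extent boundary by
         * setting the proposed wbc->would_seek flag instead of burning the
         * whole quota.
         */
        while (wbc->nr_to_write > 0 && (page = next_dirty_page(inode))) {
                write_one_page_of(inode, page, wbc);    /* decrements nr_to_write */

                if (page_is_last_in_extent(inode, page)) {
                        wbc->would_seek = 1;    /* "the next write would seek" */
                        break;                  /* let the VFS pick another inode */
                }
        }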

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 10:13                   ` Wu Fengguang
@ 2009-09-22 11:30                     ` Jan Kara
  2009-09-22 13:33                       ` Wu Fengguang
  2009-09-22 11:30                     ` Chris Mason
  1 sibling, 1 reply; 52+ messages in thread
From: Jan Kara @ 2009-09-22 11:30 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm, jack

On Tue 22-09-09 18:13:35, Wu Fengguang wrote:
> Yes a more general solution would help. I'd like to propose one which
> works in the other way round. In brief,
> (1) the VFS give a large enough per-file writeback quota to btrfs;
> (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
>     before exhausting the quota and be force stopped.
> 
> There will be two limits (the second one is new):
> 
> - total nr to write in one wb_writeback invocation
> - _max_ nr to write per file (before switching to sync the next inode)
> 
> The per-invocation limit is useful for balance_dirty_pages().
> The per-file number can be accumulated across successive wb_writeback
> invocations and thus can be much larger (eg. 128MB) than the legacy
> per-invocation number. 
  Actually, it doesn't make much sense to have a per-file limit in number
of pages. I've been playing with an idea that we could have a per-file
*time* quota. That would have the advantage that if a file generates random
IO, we wouldn't block on it for longer than when it generates linear IO.
  I imagine that in ->writepage we would subtract from the given time quota in
wbc the time it takes to write the current page. It would need some context
in wbc so that it is able to tell whether the IO is linear or random to
properly account for some seek penalty, but generally it seems to be
doable...
  Filesystems implementing ->writepages can then decide whether they
have enough time quota left to seek to the next extent and write it out, or
whether they should rather yield to other inodes...
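
A rough sketch of that accounting (purely illustrative: time_quota_us,
prev_index and the cost constants below are assumptions, not existing
wbc fields):

        /*
         * Charge an estimated per-page cost, plus a seek penalty when the
         * page is not contiguous with the previous one, against a proposed
         * wbc->time_quota_us.  The caller yields once the quota hits zero.
         */
        static void charge_writeback_time(struct writeback_control *wbc,
                                          pgoff_t index)
        {
                unsigned long cost = PAGE_WRITE_COST_US;   /* assumed constant */

                if (wbc->prev_index + 1 != index)
                        cost += SEEK_PENALTY_US;           /* assumed constant */
                wbc->prev_index = index;

                if (cost >= wbc->time_quota_us)
                        wbc->time_quota_us = 0;
                else
                        wbc->time_quota_us -= cost;
        }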
 
> The file system will only see the per-file numbers. The "max" means
> if btrfs find the current page to be the last page in the extent,
> it could indicate this fact to VFS by setting wbc->would_seek=1. The
> VFS will then switch to write the next inode.
> 
> The benefit of early voluntarily yield is, it reduced the possibility
> to be force stopped half way in an extent. When next time VFS returns
> to sync this inode, it will again be honored the full 128MB quota,
> which should be enough to cover a big fresh extent.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 10:13                   ` Wu Fengguang
  2009-09-22 11:30                     ` Jan Kara
@ 2009-09-22 11:30                     ` Chris Mason
  2009-09-22 11:45                       ` Jan Kara
  2009-09-22 13:18                         ` Wu Fengguang
  1 sibling, 2 replies; 52+ messages in thread
From: Chris Mason @ 2009-09-22 11:30 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, akpm, jack

On Tue, Sep 22, 2009 at 06:13:35PM +0800, Wu Fengguang wrote:
> On Mon, Sep 21, 2009 at 09:53:21PM +0800, Chris Mason wrote:
> > On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> > > On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > > > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > > > 
> > > > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > > > and hope to get things done in this merge window.
> > > > > > 
> > > > > > Did you have some chance to get more work done on the your writeback
> > > > > > patches?
> > > > > 
> > > > > Sorry for the delay, I'm now testing the patches with commands
> > > > > 
> > > > >  cp /dev/zero /mnt/test/zero0 &
> > > > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > > > 
> > > > > and the attached debug patch.
> > > > > 
> > > > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > > > in the traces, which could slow down the inode writeback significantly.
> > > > 
> > > > FYI, it's this redirty_tail() called in writeback_single_inode():
> > > > 
> > > >                         /*
> > > >                          * Someone redirtied the inode while were writing back
> > > >                          * the pages.
> > > >                          */
> > > >                         redirty_tail(inode);
> > > 
> > > Hmm, this looks like an old fashioned problem get blew up by the
> > > 128MB MAX_WRITEBACK_PAGES.
> > 
> > I'm starting to rethink the 128MB MAX_WRITEBACK_PAGES.  128MB is the
> > right answer for the flusher thread on sequential IO, but definitely not
> > on random IO.  We don't want the flusher to get bogged down on random
> > writeback and start ignoring every other file.
> 
> Hmm, I'd think a larger MAX_WRITEBACK_PAGES shall never increase the
> writeback randomness.

It doesn't increase the randomness, but if we have a file full of
buffered random IO (say from bdb or rpm), the 128MB max will mean that
one file dominates the flusher thread writeback completely.

> 
> > My btrfs performance branch has long had a change to bump the
> > nr_to_write up based on the size of the delayed allocation that we're
> > doing.  It helped, but not as much as I really expected it too, and a
> > similar patch from Christoph for XFS was good but not great.
> > 
> > It turns out the problem is in write_cache_pages.  It processes a whole
> > pagevec at a time, something like this:
> > 
> > while(!done) {
> > 	for each page in the pagegvec {
> > 		writepage()
> > 		if (wbc->nr_to_write <= 0)
> > 			done = 1;
> > 	}
> > }
> > 
> > If the filesystem decides to bump nr_to_write to cover a whole
> > extent (or a max reasonable size), the new value of nr_to_write may
> > be ignored if nr_to_write had already gone done to zero.
> > 
> > I fixed btrfs to recheck nr_to_write every time, and the results are
> > much smoother.  This is what it looks like to write out all the .o files
> > in the kernel.
> > 
> > http://oss.oracle.com/~mason/seekwatcher/btrfs-nr-to-write.png
> > 
> > In this graph, Btrfs is writing the full extent or 8192 pages, whichever
> > is smaller.  The write_cache_pages change is here, but it is local to
> > the btrfs copy of write_cache_pages:
> > 
> > http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commit;h=f85d7d6c8f2ad4a86a1f4f4e3791f36dede2fa76
> 
> It seems you tried to an upper limit of 32-64MB:
> 
> +               if (wbc->nr_to_write < delalloc_to_write) {
> +                       int thresh = 8192;
> +
> +                       if (delalloc_to_write < thresh * 2)
> +                               thresh = delalloc_to_write;
> +                       wbc->nr_to_write = min_t(u64, delalloc_to_write,
> +                                                thresh);
> +               }
> 
> However it is possible that btrfs bumps up nr_to_write for each inode, 
> so that the accumulated bump ups are too large to be acceptable for
> balance_dirty_pages().

We bump up to a limit of 64MB more than the original nr_to_write. This
is because when we do bump we know we'll write the whole amount, and
then write_cache_pages will end.

> 
> And it's not always "bump ups". nr_to_write could be decreased if it's
> already a large value.

Sorry, I don't see where it is decreased.

> 
> > I'd rather see a more formal use of hints from the FS about efficient IO
> > than a blanket increase of the writeback max.  It's more work than
> > bumping a single #define, but even with the #define at 1GB, we're going
> > to end up splitting extents and seeking when nr_to_write does finally
> > get down to zero.
> > 
> > Btrfs currently only bumps the nr_to_write when it creates the extent, I
> > need to change it to also bump it when it finds an existing extent.
> 
> Yes a more general solution would help. I'd like to propose one which
> works in the other way round. In brief,
> (1) the VFS give a large enough per-file writeback quota to btrfs;
> (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
>     before exhausting the quota and be force stopped.
> 
> There will be two limits (the second one is new):
> 
> - total nr to write in one wb_writeback invocation
> - _max_ nr to write per file (before switching to sync the next inode)
> 
> The per-invocation limit is useful for balance_dirty_pages().
> The per-file number can be accumulated across successive wb_writeback
> invocations and thus can be much larger (eg. 128MB) than the legacy
> per-invocation number. 
> 
> The file system will only see the per-file numbers. The "max" means
> if btrfs find the current page to be the last page in the extent,
> it could indicate this fact to VFS by setting wbc->would_seek=1. The
> VFS will then switch to write the next inode.
> 
> The benefit of early voluntarily yield is, it reduced the possibility
> to be force stopped half way in an extent. When next time VFS returns
> to sync this inode, it will again be honored the full 128MB quota,
> which should be enough to cover a big fresh extent.

This is interesting, but it gets into a problem with defining what a
seek is.  On some hardware they are very fast and don't hurt at all.  It
might be more interesting to make timeslices.

-chris


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 11:30                     ` Chris Mason
@ 2009-09-22 11:45                       ` Jan Kara
  2009-09-22 12:47                         ` Wu Fengguang
  2009-09-22 17:41                         ` Chris Mason
  2009-09-22 13:18                         ` Wu Fengguang
  1 sibling, 2 replies; 52+ messages in thread
From: Jan Kara @ 2009-09-22 11:45 UTC (permalink / raw)
  To: Chris Mason
  Cc: Wu Fengguang, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm, jack

On Tue 22-09-09 07:30:55, Chris Mason wrote:
> > Yes a more general solution would help. I'd like to propose one which
> > works in the other way round. In brief,
> > (1) the VFS give a large enough per-file writeback quota to btrfs;
> > (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
> >     before exhausting the quota and be force stopped.
> > 
> > There will be two limits (the second one is new):
> > 
> > - total nr to write in one wb_writeback invocation
> > - _max_ nr to write per file (before switching to sync the next inode)
> > 
> > The per-invocation limit is useful for balance_dirty_pages().
> > The per-file number can be accumulated across successive wb_writeback
> > invocations and thus can be much larger (eg. 128MB) than the legacy
> > per-invocation number. 
> > 
> > The file system will only see the per-file numbers. The "max" means
> > if btrfs find the current page to be the last page in the extent,
> > it could indicate this fact to VFS by setting wbc->would_seek=1. The
> > VFS will then switch to write the next inode.
> > 
> > The benefit of early voluntarily yield is, it reduced the possibility
> > to be force stopped half way in an extent. When next time VFS returns
> > to sync this inode, it will again be honored the full 128MB quota,
> > which should be enough to cover a big fresh extent.
> 
> This is interesting, but it gets into a problem with defining what a
> seek is.  On some hardware they are very fast and don't hurt at all.  It
> might be more interesting to make timeslices.
  With simple timeslices there's a problem that the time it takes to submit
an IO isn't really related to the time it takes to complete the IO.  During
submission we are limited just by the availability of free requests and the
sizes of the request queues (which might be filled by another thread or by us
writing a different inode).
  But as I described in my other email, we could probably estimate the time it
takes to complete the IO. At least CFQ keeps the statistics needed for that. If
we somehow generalized them and put them into the BDI, we could probably use
them during writeback...
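
For example, something along these lines (illustrative only; none of the
fields or constants below exist in struct backing_dev_info today):

        /*
         * A per-BDI running estimate of the average time to complete one
         * page of writeback, in the spirit of the CFQ statistics mentioned
         * above.
         */
        static unsigned long bdi_est_page_write_us(struct backing_dev_info *bdi)
        {
                if (!bdi->written_pages)
                        return DEFAULT_PAGE_WRITE_US;   /* assumed fallback */
                return bdi->write_time_us / bdi->written_pages;
        }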

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 11:45                       ` Jan Kara
@ 2009-09-22 12:47                         ` Wu Fengguang
  2009-09-22 17:41                         ` Chris Mason
  1 sibling, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-22 12:47 UTC (permalink / raw)
  To: Jan Kara
  Cc: Chris Mason, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm

On Tue, Sep 22, 2009 at 07:45:37PM +0800, Jan Kara wrote:
> On Tue 22-09-09 07:30:55, Chris Mason wrote:
> > > Yes a more general solution would help. I'd like to propose one which
> > > works in the other way round. In brief,
> > > (1) the VFS give a large enough per-file writeback quota to btrfs;
> > > (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
> > >     before exhausting the quota and be force stopped.
> > > 
> > > There will be two limits (the second one is new):
> > > 
> > > - total nr to write in one wb_writeback invocation
> > > - _max_ nr to write per file (before switching to sync the next inode)
> > > 
> > > The per-invocation limit is useful for balance_dirty_pages().
> > > The per-file number can be accumulated across successive wb_writeback
> > > invocations and thus can be much larger (eg. 128MB) than the legacy
> > > per-invocation number. 
> > > 
> > > The file system will only see the per-file numbers. The "max" means
> > > if btrfs find the current page to be the last page in the extent,
> > > it could indicate this fact to VFS by setting wbc->would_seek=1. The
> > > VFS will then switch to write the next inode.
> > > 
> > > The benefit of early voluntarily yield is, it reduced the possibility
> > > to be force stopped half way in an extent. When next time VFS returns
> > > to sync this inode, it will again be honored the full 128MB quota,
> > > which should be enough to cover a big fresh extent.
> > 
> > This is interesting, but it gets into a problem with defining what a
> > seek is.  On some hardware they are very fast and don't hurt at all.  It

The hardware capability could be reported in the bdi?

> > might be more interesting to make timeslices.
>   With simple timeslices there's a problem that the time it takes to submit
> an IO isn't really related to the time it takes to complete the IO.  During
> submission we are limited just by availablity of free requests and sizes of
> request queues (which might be filled by another thread or by us writing
> different inode).

Right. When the queue is congested, the submission time will be correlated
with (someone else's) completion time. So it is still necessary to have a
quota on submission time, to prevent a single inode from taking too much
sync (submission) time.

>   But as I described in my other email, we could probably estimate time it
> takes to complete the IO. At least CFQ keeps statistics needed for that. If
> we somehow generalized them and put them into BDI, we could probably use
> them during writeback...

As for randomness, I think write_cache_pages() could get a good
estimate by counting the number of page segments it puts to IO
for a single inode, without going to the block layer.
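
For example (a sketch of the counting as it might sit in the
write_cache_pages() loop; "prev_index" is a local cursor and "nr_segments"
a proposed wbc field, neither of which exists in the current code):

        if (prev_index == (pgoff_t)-1 || page->index != prev_index + 1)
                wbc->nr_segments++;     /* non-contiguous: a new segment */
        prev_index = page->index;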

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 11:30                     ` Chris Mason
@ 2009-09-22 13:18                         ` Wu Fengguang
  2009-09-22 13:18                         ` Wu Fengguang
  1 sibling, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-22 13:18 UTC (permalink / raw)
  To: Chris Mason, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm, jack

On Tue, Sep 22, 2009 at 07:30:55PM +0800, Chris Mason wrote:
> On Tue, Sep 22, 2009 at 06:13:35PM +0800, Wu Fengguang wrote:
> > On Mon, Sep 21, 2009 at 09:53:21PM +0800, Chris Mason wrote:
> > > On Sat, Sep 19, 2009 at 12:26:07PM +0800, Wu Fengguang wrote:
> > > > On Sat, Sep 19, 2009 at 12:00:51PM +0800, Wu Fengguang wrote:
> > > > > On Sat, Sep 19, 2009 at 11:58:35AM +0800, Wu Fengguang wrote:
> > > > > > On Sat, Sep 19, 2009 at 01:52:52AM +0800, Theodore Tso wrote:
> > > > > > > On Fri, Sep 11, 2009 at 10:39:29PM +0800, Wu Fengguang wrote:
> > > > > > > > 
> > > > > > > > That would be good. Sorry for the late work. I'll allocate some time
> > > > > > > > in mid next week to help review and benchmark recent writeback works,
> > > > > > > > and hope to get things done in this merge window.
> > > > > > > 
> > > > > > > Did you have some chance to get more work done on the your writeback
> > > > > > > patches?
> > > > > > 
> > > > > > Sorry for the delay, I'm now testing the patches with commands
> > > > > > 
> > > > > >  cp /dev/zero /mnt/test/zero0 &
> > > > > >  dd if=/dev/zero of=/mnt/test/zero1 &
> > > > > > 
> > > > > > and the attached debug patch.
> > > > > > 
> > > > > > One problem I found with ext3/4 is, redirty_tail() is called repeatedly
> > > > > > in the traces, which could slow down the inode writeback significantly.
> > > > > 
> > > > > FYI, it's this redirty_tail() called in writeback_single_inode():
> > > > > 
> > > > >                         /*
> > > > >                          * Someone redirtied the inode while were writing back
> > > > >                          * the pages.
> > > > >                          */
> > > > >                         redirty_tail(inode);
> > > > 
> > > > Hmm, this looks like an old fashioned problem get blew up by the
> > > > 128MB MAX_WRITEBACK_PAGES.
> > > 
> > > I'm starting to rethink the 128MB MAX_WRITEBACK_PAGES.  128MB is the
> > > right answer for the flusher thread on sequential IO, but definitely not
> > > on random IO.  We don't want the flusher to get bogged down on random
> > > writeback and start ignoring every other file.
> > 
> > Hmm, I'd think a larger MAX_WRITEBACK_PAGES shall never increase the
> > writeback randomness.
> 
> It doesn't increase the randomness, but if we have a file full of
> buffered random IO (say from bdb or rpm), the 128MB max will mean that
> one file dominates the flusher thread writeback completely.

What if we add a bdi->max_segments quota? A segment is a contiguous
run of dirty pages in the inode address space. An SSD or fast RAID could
set it to a large enough value.
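
For illustration (a sketch only: wbc->nr_segments would count the
contiguous dirty-page runs seen so far for this inode, and bdi->max_segments
would be the proposed per-bdi limit; neither field exists today):

        /* Inside the write_cache_pages() loop: */
        if (wbc->nr_segments > bdi->max_segments) {
                done = 1;       /* too seeky for now, move to the next inode */
                break;
        }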

> > 
> > > My btrfs performance branch has long had a change to bump the
> > > nr_to_write up based on the size of the delayed allocation that we're
> > > doing.  It helped, but not as much as I really expected it too, and a
> > > similar patch from Christoph for XFS was good but not great.
> > > 
> > > It turns out the problem is in write_cache_pages.  It processes a whole
> > > pagevec at a time, something like this:
> > > 
> > > while(!done) {
> > > 	for each page in the pagegvec {
> > > 		writepage()
> > > 		if (wbc->nr_to_write <= 0)
> > > 			done = 1;
> > > 	}
> > > }
> > > 
> > > If the filesystem decides to bump nr_to_write to cover a whole
> > > extent (or a max reasonable size), the new value of nr_to_write may
> > > be ignored if nr_to_write had already gone done to zero.
> > > 
> > > I fixed btrfs to recheck nr_to_write every time, and the results are
> > > much smoother.  This is what it looks like to write out all the .o files
> > > in the kernel.
> > > 
> > > http://oss.oracle.com/~mason/seekwatcher/btrfs-nr-to-write.png
> > > 
> > > In this graph, Btrfs is writing the full extent or 8192 pages, whichever
> > > is smaller.  The write_cache_pages change is here, but it is local to
> > > the btrfs copy of write_cache_pages:
> > > 
> > > http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commit;h=f85d7d6c8f2ad4a86a1f4f4e3791f36dede2fa76
> > 
> > It seems you tried to an upper limit of 32-64MB:
> > 
> > +               if (wbc->nr_to_write < delalloc_to_write) {
> > +                       int thresh = 8192;
> > +
> > +                       if (delalloc_to_write < thresh * 2)
> > +                               thresh = delalloc_to_write;
> > +                       wbc->nr_to_write = min_t(u64, delalloc_to_write,
> > +                                                thresh);
> > +               }
> > 
> > However it is possible that btrfs bumps up nr_to_write for each inode, 
> > so that the accumulated bump ups are too large to be acceptable for
> > balance_dirty_pages().
> 
> We bump up to a limit of 64MB more than the original nr_to_write. This
> is because when we do bump we know we'll write the whole amount, and
> then write_cache_pages will end.

Imagine this scenario. There are inodes A, B, C, ...

A) delalloc_to_write=3000 but only 1000 pages dirty.
B) delalloc_to_write=3000 but only 1000 pages dirty.
C) delalloc_to_write=3000 but only 1000 pages dirty.
...

Then nr_to_write will be
A) bumped up to 3000 and fall to 2000
B) bumped up to 3000 and fall to 2000
C) bumped up to 3000 and fall to 2000
...

Because nr_to_write is non-zero after write_cache_pages() returns,
wb_writeback() will keep calling write_cache_pages() for new inodes.
In the end, the number of pages actually written accumulates to a very
large value for a single wb_writeback() invocation.

So there is a possibility in theory.

> > 
> > And it's not always "bump ups". nr_to_write could be decreased if it's
> > already a large value.
> 
> Sorry, I don't see where it is decreased.

When nr_to_write=2*8192 and delalloc_to_write=2*8192+1,
nr_to_write will be set to 8192. However, this should be harmless, and
it is very unlikely someone will pass in such nr_to_write values.

> > > I'd rather see a more formal use of hints from the FS about efficient IO
> > > than a blanket increase of the writeback max.  It's more work than
> > > bumping a single #define, but even with the #define at 1GB, we're going
> > > to end up splitting extents and seeking when nr_to_write does finally
> > > get down to zero.
> > > 
> > > Btrfs currently only bumps the nr_to_write when it creates the extent, I
> > > need to change it to also bump it when it finds an existing extent.
> > 
> > Yes a more general solution would help. I'd like to propose one which
> > works in the other way round. In brief,
> > (1) the VFS give a large enough per-file writeback quota to btrfs;
> > (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
> >     before exhausting the quota and be force stopped.
> > 
> > There will be two limits (the second one is new):
> > 
> > - total nr to write in one wb_writeback invocation
> > - _max_ nr to write per file (before switching to sync the next inode)
> > 
> > The per-invocation limit is useful for balance_dirty_pages().
> > The per-file number can be accumulated across successive wb_writeback
> > invocations and thus can be much larger (eg. 128MB) than the legacy
> > per-invocation number. 
> > 
> > The file system will only see the per-file numbers. The "max" means
> > if btrfs find the current page to be the last page in the extent,
> > it could indicate this fact to VFS by setting wbc->would_seek=1. The
> > VFS will then switch to write the next inode.
> > 
> > The benefit of early voluntarily yield is, it reduced the possibility
> > to be force stopped half way in an extent. When next time VFS returns
> > to sync this inode, it will again be honored the full 128MB quota,
> > which should be enough to cover a big fresh extent.
> 
> This is interesting, but it gets into a problem with defining what a
> seek is.  On some hardware they are very fast and don't hurt at all.  It
> might be more interesting to make timeslices.

We could have quotas for max pages, page segments and submission time.
Will they be good enough? The first two quotas could be made per-bdi
to reflect hardware capabilities.
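
One way to picture it (purely illustrative; no such structure exists today):

        /* Proposed quota knobs, grouped for illustration only.  The first
         * two could live in the bdi to reflect hardware capabilities. */
        struct wb_quota {
                long            max_pages;      /* max pages per file */
                unsigned int    max_segments;   /* max dirty-page segments per file */
                unsigned long   max_time_us;    /* max submission time per file */
        };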

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 11:30                     ` Jan Kara
@ 2009-09-22 13:33                       ` Wu Fengguang
  0 siblings, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-22 13:33 UTC (permalink / raw)
  To: Jan Kara
  Cc: Chris Mason, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm

On Tue, Sep 22, 2009 at 07:30:55PM +0800, Jan Kara wrote:
> On Tue 22-09-09 18:13:35, Wu Fengguang wrote:
> > Yes a more general solution would help. I'd like to propose one which
> > works in the other way round. In brief,
> > (1) the VFS give a large enough per-file writeback quota to btrfs;
> > (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
> >     before exhausting the quota and be force stopped.
> > 
> > There will be two limits (the second one is new):
> > 
> > - total nr to write in one wb_writeback invocation
> > - _max_ nr to write per file (before switching to sync the next inode)
> > 
> > The per-invocation limit is useful for balance_dirty_pages().
> > The per-file number can be accumulated across successive wb_writeback
> > invocations and thus can be much larger (eg. 128MB) than the legacy
> > per-invocation number. 
>   Actually, it doesn't make much sence to have a per-file limit in number
> of pages. I've been playing with an idea that we could have a per-file
> *time* quota. That would have an advantage that if a file generates random
> IO, we wouldn't block for longer time on it than when it generates linear
> IO.

Heh, FYI I recently tried a per-file submission time quota:

        http://lkml.org/lkml/2009/9/10/54

Though I didn't take randomness of IO into account, which definitely
deserves some attention.

>   I imagine that in ->writepage we would substract from given time quota in
> wbc the time it takes to write the current page. It would need some context
> in wbc so that it is able to tell whether the IO is linear or random to
> properly account for some seek penalty but generally it seems to be
> doable...

Yeah, maybe page segments that are distant enough could be treated as "seeks".

>   Filesystems implementing ->writepages can then make decision whether they
> have enough time quota to seek to next extent and write it out or whether
> they should rather yield to other inodes...

Yeah, it's possible. The VFS provides (one or more) quota values and
the file systems decide when to yield.
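
A rough userspace sketch of what such time-quota accounting could look
like (the wbc_time_quota structure, its fields and the costs below are
made up for illustration; none of this is an existing kernel interface):

#include <stdio.h>

/* The VFS hands the filesystem a submission-time quota; each page written
 * is charged against it, and a page that is not contiguous with the
 * previous one is charged an extra "seek" cost.  When the quota runs out
 * the filesystem yields to the next inode. */
struct wbc_time_quota {
	long time_left_us;
	unsigned long prev_idx;
	int have_prev;
};

static int charge_page(struct wbc_time_quota *q, unsigned long idx,
		       long linear_cost_us, long seek_cost_us)
{
	long cost = linear_cost_us;

	if (q->have_prev && idx != q->prev_idx + 1)
		cost += seek_cost_us;		/* random IO penalty */
	q->prev_idx = idx;
	q->have_prev = 1;
	q->time_left_us -= cost;
	return q->time_left_us > 0;		/* 0 means: time to yield */
}

int main(void)
{
	struct wbc_time_quota q = { .time_left_us = 600 };
	unsigned long pages[] = { 10, 11, 12, 500, 501, 9000 };
	unsigned int i;

	for (i = 0; i < sizeof(pages) / sizeof(pages[0]); i++)
		if (!charge_page(&q, pages[i], 20, 300)) {
			printf("yield after page index %lu\n", pages[i]);
			break;
		}
	return 0;
}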

Thanks,
Fengguang

> > The file system will only see the per-file numbers. The "max" means
> > if btrfs find the current page to be the last page in the extent,
> > it could indicate this fact to VFS by setting wbc->would_seek=1. The
> > VFS will then switch to write the next inode.
> > 
> > The benefit of early voluntarily yield is, it reduced the possibility
> > to be force stopped half way in an extent. When next time VFS returns
> > to sync this inode, it will again be honored the full 128MB quota,
> > which should be enough to cover a big fresh extent.
> 
> 								Honza
> -- 
> Jan Kara <jack@suse.cz>
> SUSE Labs, CR

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 13:18                         ` Wu Fengguang
  (?)
@ 2009-09-22 15:59                         ` Chris Mason
  2009-09-23  1:05                             ` Wu Fengguang
  -1 siblings, 1 reply; 52+ messages in thread
From: Chris Mason @ 2009-09-22 15:59 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, akpm, jack

On Tue, Sep 22, 2009 at 09:18:32PM +0800, Wu Fengguang wrote:
> On Tue, Sep 22, 2009 at 07:30:55PM +0800, Chris Mason wrote:

[ using a very large MAX_WRITEBACK_PAGES ]

> > > > I'm starting to rethink the 128MB MAX_WRITEBACK_PAGES.  128MB is the
> > > > right answer for the flusher thread on sequential IO, but definitely not
> > > > on random IO.  We don't want the flusher to get bogged down on random
> > > > writeback and start ignoring every other file.
> > > 
> > > Hmm, I'd think a larger MAX_WRITEBACK_PAGES shall never increase the
> > > writeback randomness.
> > 
> > It doesn't increase the randomness, but if we have a file full of
> > buffered random IO (say from bdb or rpm), the 128MB max will mean that
> > one file dominates the flusher thread writeback completely.
> 
> What if we add a bdi->max_segments quota? A segment is a continuous
> run of dirty pages in the inode address space. SSD or fast RAID could
> set it to a large enough value.

I'd rather play with timeslice ideas first ;)  But, don't let me stop
you from trying interesting things.

> 
> > > 
> > > > My btrfs performance branch has long had a change to bump the
> > > > nr_to_write up based on the size of the delayed allocation that we're
> > > > doing.  It helped, but not as much as I really expected it too, and a
> > > > similar patch from Christoph for XFS was good but not great.
> > > > 
> > > > It turns out the problem is in write_cache_pages.  It processes a whole
> > > > pagevec at a time, something like this:
> > > > 
> > > > while(!done) {
> > > > 	for each page in the pagegvec {
> > > > 		writepage()
> > > > 		if (wbc->nr_to_write <= 0)
> > > > 			done = 1;
> > > > 	}
> > > > }
> > > > 
> > > > If the filesystem decides to bump nr_to_write to cover a whole
> > > > extent (or a max reasonable size), the new value of nr_to_write may
> > > > be ignored if nr_to_write had already gone done to zero.
> > > > 
> > > > I fixed btrfs to recheck nr_to_write every time, and the results are
> > > > much smoother.  This is what it looks like to write out all the .o files
> > > > in the kernel.
> > > > 
> > > > http://oss.oracle.com/~mason/seekwatcher/btrfs-nr-to-write.png
> > > > 
> > > > In this graph, Btrfs is writing the full extent or 8192 pages, whichever
> > > > is smaller.  The write_cache_pages change is here, but it is local to
> > > > the btrfs copy of write_cache_pages:
> > > > 
> > > > http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commit;h=f85d7d6c8f2ad4a86a1f4f4e3791f36dede2fa76
> > > 
> > > It seems you tried to an upper limit of 32-64MB:
> > > 
> > > +               if (wbc->nr_to_write < delalloc_to_write) {
> > > +                       int thresh = 8192;
> > > +
> > > +                       if (delalloc_to_write < thresh * 2)
> > > +                               thresh = delalloc_to_write;
> > > +                       wbc->nr_to_write = min_t(u64, delalloc_to_write,
> > > +                                                thresh);
> > > +               }
> > > 
> > > However it is possible that btrfs bumps up nr_to_write for each inode, 
> > > so that the accumulated bump ups are too large to be acceptable for
> > > balance_dirty_pages().
> > 
> > We bump up to a limit of 64MB more than the original nr_to_write. This
> > is because when we do bump we know we'll write the whole amount, and
> > then write_cache_pages will end.
> 
> Imagine this scenario. There are inodes A, B, C, ...
> 
> A) delalloc_to_write=3000 but only 1000 pages dirty.

The part that isn't clear from the code you're reading is that if
delalloc_to_write is 3000, then there must be 3000 pages dirty.  The
count of delalloc bytes to go down always reflects IO that must be done.

So, once my writepage call bumps nr_to_write, that IO will happen.  The
only exception is if someone else jumps in and writes the pages, which
won't happen unless there is synchronous writeback.


> > > Yes a more general solution would help. I'd like to propose one which
> > > works in the other way round. In brief,
> > > (1) the VFS give a large enough per-file writeback quota to btrfs;
> > > (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
> > >     before exhausting the quota and be force stopped.
> > > 
> > > There will be two limits (the second one is new):
> > > 
> > > - total nr to write in one wb_writeback invocation
> > > - _max_ nr to write per file (before switching to sync the next inode)
> > > 
> > > The per-invocation limit is useful for balance_dirty_pages().
> > > The per-file number can be accumulated across successive wb_writeback
> > > invocations and thus can be much larger (eg. 128MB) than the legacy
> > > per-invocation number. 
> > > 
> > > The file system will only see the per-file numbers. The "max" means
> > > if btrfs find the current page to be the last page in the extent,
> > > it could indicate this fact to VFS by setting wbc->would_seek=1. The
> > > VFS will then switch to write the next inode.
> > > 
> > > The benefit of early voluntarily yield is, it reduced the possibility
> > > to be force stopped half way in an extent. When next time VFS returns
> > > to sync this inode, it will again be honored the full 128MB quota,
> > > which should be enough to cover a big fresh extent.
> > 
> > This is interesting, but it gets into a problem with defining what a
> > seek is.  On some hardware they are very fast and don't hurt at all.  It
> > might be more interesting to make timeslices.
> 
> We could have quotas for max pages, page segments and submission time.
> Will they be good enough? The first two quotas could be made per-bdi
> to reflect hardware capabilities.

The reason I prefer the timeslice idea is that we don't need the
hardware to tell us how fast it is.  We just write for a while and move
on.

-chris


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 11:45                       ` Jan Kara
  2009-09-22 12:47                         ` Wu Fengguang
@ 2009-09-22 17:41                         ` Chris Mason
  1 sibling, 0 replies; 52+ messages in thread
From: Chris Mason @ 2009-09-22 17:41 UTC (permalink / raw)
  To: Jan Kara
  Cc: Wu Fengguang, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm

On Tue, Sep 22, 2009 at 01:45:37PM +0200, Jan Kara wrote:
> On Tue 22-09-09 07:30:55, Chris Mason wrote:
> > > Yes a more general solution would help. I'd like to propose one which
> > > works in the other way round. In brief,
> > > (1) the VFS give a large enough per-file writeback quota to btrfs;
> > > (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
> > >     before exhausting the quota and be force stopped.
> > > 
> > > There will be two limits (the second one is new):
> > > 
> > > - total nr to write in one wb_writeback invocation
> > > - _max_ nr to write per file (before switching to sync the next inode)
> > > 
> > > The per-invocation limit is useful for balance_dirty_pages().
> > > The per-file number can be accumulated across successive wb_writeback
> > > invocations and thus can be much larger (eg. 128MB) than the legacy
> > > per-invocation number. 
> > > 
> > > The file system will only see the per-file numbers. The "max" means
> > > if btrfs find the current page to be the last page in the extent,
> > > it could indicate this fact to VFS by setting wbc->would_seek=1. The
> > > VFS will then switch to write the next inode.
> > > 
> > > The benefit of early voluntarily yield is, it reduced the possibility
> > > to be force stopped half way in an extent. When next time VFS returns
> > > to sync this inode, it will again be honored the full 128MB quota,
> > > which should be enough to cover a big fresh extent.
> > 
> > This is interesting, but it gets into a problem with defining what a
> > seek is.  On some hardware they are very fast and don't hurt at all.  It
> > might be more interesting to make timeslices.
>   With simple timeslices there's a problem that the time it takes to submit
> an IO isn't really related to the time it takes to complete the IO.  During
> submission we are limited just by availablity of free requests and sizes of
> request queues (which might be filled by another thread or by us writing
> different inode).

Well, what we have right now works like this:

A process writes N pages out (effectively only waiting for requests).
If those N pages were all from the same file, we move to a different
file because we don't want all the other files to get too old.

If that process is in balance_dirty_pages(), after it writes N pages, it
immediately goes back to making dirty pages.  If it wasn't able to write
N pages, it sleeps for a bit and starts over.

This is a long way of saying the time it takes to complete the IO isn't
currently factored in at all.  The only place we check for this is the
code to prevent balance_dirty_pages() from emptying the dirty list.

I think what we need for the bdi threads is a way to say: only service
this file for a given duration, then move on to the others.  The
filesystem should have a way to extend the duration slightly so that we
write big chunks of big extents.
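
A toy model of that duration idea could look like the sketch below (the
slice budget, the per-inode numbers and the extent-extension hint are
all invented for illustration, not existing code):

#include <stdio.h>

/* The bdi thread services one inode only for a fixed timeslice and then
 * moves on; the filesystem may extend the slice once so that the extent
 * it is in the middle of is not split right at the deadline. */
#define SLICE_BUDGET	100	/* pages writable in one slice (toy number) */

struct inode_work {
	const char *name;
	long dirty_pages;
	long extent_tail;	/* fs hint: pages left in the current extent */
};

static long write_inode(struct inode_work *w)
{
	long budget = SLICE_BUDGET;
	long written = 0;

	while (w->dirty_pages > 0 && budget > 0) {
		w->dirty_pages--;
		written++;
		budget--;
		if (budget == 0 && w->extent_tail > 0) {
			budget = w->extent_tail;	/* extend the slice once */
			w->extent_tail = 0;
		}
	}
	return written;
}

int main(void)
{
	struct inode_work inodes[] = {
		{ "A", 300, 50 },	/* a big extent crosses the deadline */
		{ "B",  80,  0 },	/* fits within one slice */
		{ "C", 500,  0 },	/* gets cut off at the slice */
	};
	unsigned int i;

	for (i = 0; i < 3; i++)
		printf("inode %s: wrote %ld pages\n",
		       inodes[i].name, write_inode(&inodes[i]));
	return 0;
}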

What we need for balance_dirty_pages is a way to say: just wait for the
writeback to make progress (you had ideas on this already in the past).

Jens had ideas on all of this too, but I'd hope we can do it without
tying it to cfq.

-chris


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-22 15:59                         ` Chris Mason
@ 2009-09-23  1:05                             ` Wu Fengguang
  0 siblings, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-23  1:05 UTC (permalink / raw)
  To: Chris Mason, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm, jack

On Tue, Sep 22, 2009 at 11:59:41PM +0800, Chris Mason wrote:
> On Tue, Sep 22, 2009 at 09:18:32PM +0800, Wu Fengguang wrote:
> > On Tue, Sep 22, 2009 at 07:30:55PM +0800, Chris Mason wrote:
> 
> [ using a very large MAX_WRITEBACK_PAGES ]
> 
> > > > > I'm starting to rethink the 128MB MAX_WRITEBACK_PAGES.  128MB is the
> > > > > right answer for the flusher thread on sequential IO, but definitely not
> > > > > on random IO.  We don't want the flusher to get bogged down on random
> > > > > writeback and start ignoring every other file.
> > > > 
> > > > Hmm, I'd think a larger MAX_WRITEBACK_PAGES shall never increase the
> > > > writeback randomness.
> > > 
> > > It doesn't increase the randomness, but if we have a file full of
> > > buffered random IO (say from bdb or rpm), the 128MB max will mean that
> > > one file dominates the flusher thread writeback completely.
> > 
> > What if we add a bdi->max_segments quota? A segment is a continuous
> > run of dirty pages in the inode address space. SSD or fast RAID could
> > set it to a large enough value.
> 
> I'd rather play with timeslice ideas first ;)  But, don't let me stop
> you from trying interesting things.

OK.

> > 
> > > > 
> > > > > My btrfs performance branch has long had a change to bump the
> > > > > nr_to_write up based on the size of the delayed allocation that we're
> > > > > doing.  It helped, but not as much as I really expected it too, and a
> > > > > similar patch from Christoph for XFS was good but not great.
> > > > > 
> > > > > It turns out the problem is in write_cache_pages.  It processes a whole
> > > > > pagevec at a time, something like this:
> > > > > 
> > > > > while(!done) {
> > > > > 	for each page in the pagegvec {
> > > > > 		writepage()
> > > > > 		if (wbc->nr_to_write <= 0)
> > > > > 			done = 1;
> > > > > 	}
> > > > > }
> > > > > 
> > > > > If the filesystem decides to bump nr_to_write to cover a whole
> > > > > extent (or a max reasonable size), the new value of nr_to_write may
> > > > > be ignored if nr_to_write had already gone done to zero.
> > > > > 
> > > > > I fixed btrfs to recheck nr_to_write every time, and the results are
> > > > > much smoother.  This is what it looks like to write out all the .o files
> > > > > in the kernel.
> > > > > 
> > > > > http://oss.oracle.com/~mason/seekwatcher/btrfs-nr-to-write.png
> > > > > 
> > > > > In this graph, Btrfs is writing the full extent or 8192 pages, whichever
> > > > > is smaller.  The write_cache_pages change is here, but it is local to
> > > > > the btrfs copy of write_cache_pages:
> > > > > 
> > > > > http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commit;h=f85d7d6c8f2ad4a86a1f4f4e3791f36dede2fa76
> > > > 
> > > > It seems you tried to an upper limit of 32-64MB:
> > > > 
> > > > +               if (wbc->nr_to_write < delalloc_to_write) {
> > > > +                       int thresh = 8192;
> > > > +
> > > > +                       if (delalloc_to_write < thresh * 2)
> > > > +                               thresh = delalloc_to_write;
> > > > +                       wbc->nr_to_write = min_t(u64, delalloc_to_write,
> > > > +                                                thresh);
> > > > +               }
> > > > 
> > > > However it is possible that btrfs bumps up nr_to_write for each inode, 
> > > > so that the accumulated bump ups are too large to be acceptable for
> > > > balance_dirty_pages().
> > > 
> > > We bump up to a limit of 64MB more than the original nr_to_write. This
> > > is because when we do bump we know we'll write the whole amount, and
> > > then write_cache_pages will end.
> > 
> > Imagine this scenario. There are inodes A, B, C, ...
> > 
> > A) delalloc_to_write=3000 but only 1000 pages dirty.
> 
> The part that isn't clear from the code you're reading is that if
> delalloc_to_write is 3000, then there must be 3000 pages dirty.  The
> count of delalloc bytes to go down always reflects IO that must be done.
> 
> So, once my writepage call bumps nr_to_write, that IO will happen.  The
> only exception is if someone else jumps in and writes the pages, which
> won't happen unless there is synchronous writeback.

Ah thanks for the clarification.

> > > > Yes a more general solution would help. I'd like to propose one which
> > > > works in the other way round. In brief,
> > > > (1) the VFS give a large enough per-file writeback quota to btrfs;
> > > > (2) btrfs tells VFS "here is a (seek) boundary, stop voluntarily",
> > > >     before exhausting the quota and be force stopped.
> > > > 
> > > > There will be two limits (the second one is new):
> > > > 
> > > > - total nr to write in one wb_writeback invocation
> > > > - _max_ nr to write per file (before switching to sync the next inode)
> > > > 
> > > > The per-invocation limit is useful for balance_dirty_pages().
> > > > The per-file number can be accumulated across successive wb_writeback
> > > > invocations and thus can be much larger (eg. 128MB) than the legacy
> > > > per-invocation number. 
> > > > 
> > > > The file system will only see the per-file numbers. The "max" means
> > > > if btrfs find the current page to be the last page in the extent,
> > > > it could indicate this fact to VFS by setting wbc->would_seek=1. The
> > > > VFS will then switch to write the next inode.
> > > > 
> > > > The benefit of early voluntarily yield is, it reduced the possibility
> > > > to be force stopped half way in an extent. When next time VFS returns
> > > > to sync this inode, it will again be honored the full 128MB quota,
> > > > which should be enough to cover a big fresh extent.
> > > 
> > > This is interesting, but it gets into a problem with defining what a
> > > seek is.  On some hardware they are very fast and don't hurt at all.  It
> > > might be more interesting to make timeslices.
> > 
> > We could have quotas for max pages, page segments and submission time.
> > Will they be good enough? The first two quotas could be made per-bdi
> > to reflect hardware capabilities.
> 
> The reason I prefer the timeslice idea is that we don't need the
> hardware to tell us how fast it is.  We just write for a while and move
> on.

That makes sense.  Note that the triple (pages, page segments,
submission time) can somehow adapt to hardware capabilities
(and at least won't hurt fast arrays).

- max pages is set to a large enough number for big arrays
- max page segments could be based on the existing blk_queue_nonrot()
- submission time = 1s, which is mainly a safeguard for slow devices
  (e.g. a USB stick), to prevent a single inode from taking too much
  time. This time limit has little performance impact.

Possible merits are
- these parameters are concrete and easy to handle
- it's natural to implement the related logic at the VFS level
- file systems need do nothing to get most of the benefits

Also the (now necessary) per-invocation limit could be somehow
eliminated when balance_dirty_pages() does not do IO itself.
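
For illustration only, the per-bdi triple could be kept in something
like the structure below (none of these fields or numbers exist in the
kernel; they are just meant to show how small the per-fs burden would be):

#include <stdio.h>

/* Hypothetical per-bdi writeback quotas: a large page budget for big
 * arrays, a generous segment budget for non-rotational devices (cf.
 * blk_queue_nonrot()), and a 1s submission-time safeguard for slow
 * devices. */
struct bdi_wb_quota {
	unsigned long max_pages;	/* per-file pages before yielding */
	unsigned long max_segments;	/* dirty-page runs before yielding */
	unsigned long max_time_ms;	/* submission-time safeguard */
};

static void bdi_quota_init(struct bdi_wb_quota *q, int nonrot)
{
	q->max_pages    = 32768;		/* 128MB with 4k pages */
	q->max_segments = nonrot ? 1024 : 8;	/* seeks are cheap on SSD/RAID */
	q->max_time_ms  = 1000;			/* 1s, mainly for USB sticks */
}

int main(void)
{
	struct bdi_wb_quota ssd, disk;

	bdi_quota_init(&ssd, 1);
	bdi_quota_init(&disk, 0);
	printf("ssd:  %lu pages, %lu segments, %lu ms\n",
	       ssd.max_pages, ssd.max_segments, ssd.max_time_ms);
	printf("disk: %lu pages, %lu segments, %lu ms\n",
	       disk.max_pages, disk.max_segments, disk.max_time_ms);
	return 0;
}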

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-23  1:05                             ` Wu Fengguang
  (?)
@ 2009-09-23 14:08                             ` Chris Mason
  2009-09-24  1:32                               ` Wu Fengguang
  2009-09-24  1:32                               ` Wu Fengguang
  -1 siblings, 2 replies; 52+ messages in thread
From: Chris Mason @ 2009-09-23 14:08 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Theodore Tso, Jens Axboe, Christoph Hellwig, linux-kernel,
	linux-fsdevel, akpm, jack

On Wed, Sep 23, 2009 at 09:05:41AM +0800, Wu Fengguang wrote:

[ timeslice based limits on number of pages sent by the bdi threads ]

> > 
> > The reason I prefer the timeslice idea is that we don't need the
> > hardware to tell us how fast it is.  We just write for a while and move
> > on.
> 
> That makes sense.  Note that the triple (pages, page segments,
> submission time) can somehow adapt to hardware capabilities
> (and at least won't hurt fast arrays).
> 
> - max pages are set to large enough number for big arrays
> - max page segments could be based on the existing blk_queue_nonrot()
> - submission time = 1s, which is mainly a safeguard for slow devices
>   (ie. usb stick), to prevent one single inode from taking too much
>   time. This time limit has little performance impacts.
> 
> Possible merits are
> - these parameters are concrete ones and easy to handle
> - it's natural to implement related logics in the VFS level
> - file systems can do nothing to get most benefits
> 
> Also the (now necessary) per-invocation limit could be somehow
> eliminated when balance_dirty_pages() does not do IO itself.

I think there are probably a lot of good ways to improve on our single
max number of pages metric from today, but I'm worried about the
calculation time finding page segments.  The radix tree
isn't all that well suited to it.

But, if you've got a patch I'd be happy to run a comparison against it.
Jens' box will be better at showing any CPU cost to the radix walking.

-chris


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH 0/7] Per-bdi writeback flusher threads v20
  2009-09-23 14:08                             ` Chris Mason
@ 2009-09-24  1:32                               ` Wu Fengguang
  2009-09-24  1:32                               ` Wu Fengguang
  1 sibling, 0 replies; 52+ messages in thread
From: Wu Fengguang @ 2009-09-24  1:32 UTC (permalink / raw)
  To: Chris Mason, Theodore Tso, Jens Axboe, Christoph Hellwig,
	linux-kernel, linux-fsdevel, akpm, jack

On Wed, Sep 23, 2009 at 10:08:40PM +0800, Chris Mason wrote:
> On Wed, Sep 23, 2009 at 09:05:41AM +0800, Wu Fengguang wrote:
> 
> [ timeslice based limits on number of pages sent by the bdi threads ]
> 
> > > 
> > > The reason I prefer the timeslice idea is that we don't need the
> > > hardware to tell us how fast it is.  We just write for a while and move
> > > on.
> > 
> > That makes sense.  Note that the triple (pages, page segments,
> > submission time) can somehow adapt to hardware capabilities
> > (and at least won't hurt fast arrays).
> > 
> > - max pages are set to large enough number for big arrays
> > - max page segments could be based on the existing blk_queue_nonrot()
> > - submission time = 1s, which is mainly a safeguard for slow devices
> >   (ie. usb stick), to prevent one single inode from taking too much
> >   time. This time limit has little performance impacts.
> > 
> > Possible merits are
> > - these parameters are concrete ones and easy to handle
> > - it's natural to implement related logics in the VFS level
> > - file systems can do nothing to get most benefits
> > 
> > Also the (now necessary) per-invocation limit could be somehow
> > eliminated when balance_dirty_pages() does not do IO itself.
> 
> I think there are probably a lot of good ways to improve on our single
> max number of pages metric from today

Yes; as always, it helps to work out some prototype solutions for
evaluation and comparison.

> , but I'm worried about the
> calculation time finding page segments.  The radix tree
> isn't all that well suited to it.

I didn't mean to "calculate" the page segments, but rather to do
something like this in write_cache_pages:

        if (this page index is 1MB away from prev page index)
                wbc->page_segments--;
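
Spelled out a little more, a userspace sketch of that accounting could
look like this (page_segments is a hypothetical wbc field, and the 1MB
gap of 256 4k pages just follows the example above):

#include <stdio.h>

/* Charge a new "segment" whenever the next dirty page is more than 1MB
 * (256 4k pages) past the previous one; stop once the per-file segment
 * quota is used up and yield to the next inode. */
#define SEGMENT_GAP_PAGES 256

int main(void)
{
	unsigned long dirty[] = { 0, 1, 2, 3, 1000, 1001, 5000, 5001, 9000 };
	long page_segments = 3;		/* hypothetical per-file quota */
	unsigned long prev = 0;
	int have_prev = 0;
	unsigned int i;

	for (i = 0; i < sizeof(dirty) / sizeof(dirty[0]); i++) {
		if (!have_prev || dirty[i] > prev + SEGMENT_GAP_PAGES) {
			if (--page_segments < 0) {
				printf("quota exhausted at index %lu, yield\n",
				       dirty[i]);
				break;
			}
		}
		/* writepage(dirty[i]) would go here */
		prev = dirty[i];
		have_prev = 1;
	}
	return 0;
}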

> But, if you've got a patch I'd be happy to run a comparison against it.
> Jens' box will be better at showing any CPU cost to the radix walking.

Thanks!

Regards,
Fengguang

^ permalink raw reply	[flat|nested] 52+ messages in thread

end of thread, other threads:[~2009-09-24  1:32 UTC | newest]

Thread overview: 52+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-09-11  7:34 [PATCH 0/7] Per-bdi writeback flusher threads v20 Jens Axboe
2009-09-11  7:34 ` [PATCH 1/7] writeback: get rid of generic_sync_sb_inodes() export Jens Axboe
2009-09-11  7:34 ` [PATCH 2/7] writeback: move dirty inodes from super_block to backing_dev_info Jens Axboe
2009-09-11  7:34 ` [PATCH 3/7] writeback: switch to per-bdi threads for flushing data Jens Axboe
2009-09-11  7:34 ` [PATCH 4/7] writeback: get rid of pdflush completely Jens Axboe
2009-09-11  7:34 ` [PATCH 5/7] writeback: add some debug inode list counters to bdi stats Jens Axboe
2009-09-11  7:34 ` [PATCH 6/7] writeback: add name to backing_dev_info Jens Axboe
2009-09-11  7:34 ` [PATCH 7/7] writeback: check for registered bdi in flusher add and inode dirty Jens Axboe
2009-09-11 13:42 ` [PATCH 0/7] Per-bdi writeback flusher threads v20 Theodore Tso
2009-09-11 13:45   ` Chris Mason
2009-09-11 13:45     ` Chris Mason
2009-09-11 14:04     ` Jens Axboe
2009-09-11 14:16   ` Christoph Hellwig
2009-09-11 14:16     ` Christoph Hellwig
2009-09-11 14:29     ` Jens Axboe
2009-09-11 14:39       ` Wu Fengguang
2009-09-18 17:52         ` Theodore Tso
2009-09-19  3:58           ` Wu Fengguang
2009-09-19  3:58             ` Wu Fengguang
2009-09-19  4:00             ` Wu Fengguang
2009-09-19  4:00               ` Wu Fengguang
2009-09-19  4:26               ` Wu Fengguang
2009-09-19 15:03                 ` Wu Fengguang
2009-09-19 15:03                 ` Wu Fengguang
2009-09-20 19:00                   ` Jan Kara
2009-09-21  3:04                     ` Wu Fengguang
2009-09-21  5:35                       ` Wu Fengguang
2009-09-21  9:53                         ` Wu Fengguang
2009-09-21 10:02                           ` Jan Kara
2009-09-21 10:18                             ` Wu Fengguang
2009-09-21 12:42                       ` Jan Kara
2009-09-21 15:12                         ` Wu Fengguang
2009-09-21 16:08                           ` Jan Kara
2009-09-22  5:10                             ` Wu Fengguang
2009-09-21 13:53                 ` Chris Mason
2009-09-22 10:13                   ` Wu Fengguang
2009-09-22 10:13                   ` Wu Fengguang
2009-09-22 11:30                     ` Jan Kara
2009-09-22 13:33                       ` Wu Fengguang
2009-09-22 11:30                     ` Chris Mason
2009-09-22 11:45                       ` Jan Kara
2009-09-22 12:47                         ` Wu Fengguang
2009-09-22 17:41                         ` Chris Mason
2009-09-22 13:18                       ` Wu Fengguang
2009-09-22 13:18                         ` Wu Fengguang
2009-09-22 15:59                         ` Chris Mason
2009-09-23  1:05                           ` Wu Fengguang
2009-09-23  1:05                             ` Wu Fengguang
2009-09-23 14:08                             ` Chris Mason
2009-09-24  1:32                               ` Wu Fengguang
2009-09-24  1:32                               ` Wu Fengguang
2009-09-19  4:26               ` Wu Fengguang
