* regression in page writeback
@ 2009-09-22  5:49 Shaohua Li
  2009-09-22  6:40 ` Peter Zijlstra
  2009-09-22 10:49 ` Wu Fengguang
  0 siblings, 2 replies; 79+ messages in thread
From: Shaohua Li @ 2009-09-22  5:49 UTC (permalink / raw)
  To: linux-kernel; +Cc: richard, a.p.zijlstra, jens.axboe, akpm

Hi,
Commit d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 causes a disk I/O regression
in my test.
My system has 12 disks, each with two partitions. The system runs fio
sequential writes on all partitions, with 8 jobs per partition.
2.6.31-rc1: fio gives 460MB/s disk I/O.
2.6.31-rc2: fio gives about 400MB/s disk I/O. Reverting the patch brings the
speed back to 460MB/s.

Under the latest git: fio gives 450MB/s disk I/O; after reverting the patch,
the speed is 484MB/s.

With the patch, fio reports fewer I/O merges and more interrupts. My naive
analysis is that the patch makes balance_dirty_pages_ratelimited_nr() limit
the write chunk to 8 pages, and the task then soon goes to sleep in
balance_dirty_pages(), because most of the time bdi_nr_reclaimable <
bdi_thresh; so when the pages are written out, the chunk is 8 pages long
instead of 4MB long. Without the patch, a thread can write 8 pages, move
some pages to writeback, and then continue writing. The patch seems to
break this.

Unfortunately I can't figure out a fix for this issue; hopefully you have
more ideas.

Thanks,
Shaohua


* Re: regression in page writeback
  2009-09-22  5:49 regression in page writeback Shaohua Li
@ 2009-09-22  6:40 ` Peter Zijlstra
  2009-09-22  8:05   ` Wu Fengguang
  2009-09-22 10:49 ` Wu Fengguang
  1 sibling, 1 reply; 79+ messages in thread
From: Peter Zijlstra @ 2009-09-22  6:40 UTC (permalink / raw)
  To: Shaohua Li
  Cc: linux-kernel, richard, jens.axboe, akpm, Chris Mason, Wu Fengguang

On Tue, 2009-09-22 at 13:49 +0800, Shaohua Li wrote:
> Hi,
> Commit d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 causes a disk I/O regression
> in my test.
> My system has 12 disks, each with two partitions. The system runs fio
> sequential writes on all partitions, with 8 jobs per partition.
> 2.6.31-rc1: fio gives 460MB/s disk I/O.
> 2.6.31-rc2: fio gives about 400MB/s disk I/O. Reverting the patch brings the
> speed back to 460MB/s.
> 
> Under the latest git: fio gives 450MB/s disk I/O; after reverting the patch,
> the speed is 484MB/s.
> 
> With the patch, fio reports fewer I/O merges and more interrupts. My naive
> analysis is that the patch makes balance_dirty_pages_ratelimited_nr() limit
> the write chunk to 8 pages, and the task then soon goes to sleep in
> balance_dirty_pages(), because most of the time bdi_nr_reclaimable <
> bdi_thresh; so when the pages are written out, the chunk is 8 pages long
> instead of 4MB long. Without the patch, a thread can write 8 pages, move
> some pages to writeback, and then continue writing. The patch seems to
> break this.
> 
> Unfortunately I can't figure out a fix for this issue; hopefully you have
> more ideas.

This whole writeback business is very fragile; the patch does indeed
cure a few cases and compounds a few others, a typical trade-off.

People are looking at it.



* Re: regression in page writeback
  2009-09-22  6:40 ` Peter Zijlstra
@ 2009-09-22  8:05   ` Wu Fengguang
  2009-09-22  8:09     ` Peter Zijlstra
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-22  8:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Li, Shaohua, linux-kernel, richard, jens.axboe, akpm, Chris Mason

On Tue, Sep 22, 2009 at 02:40:12PM +0800, Peter Zijlstra wrote:
> On Tue, 2009-09-22 at 13:49 +0800, Shaohua Li wrote:
> > Hi,
> > Commit d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 causes a disk I/O regression
> > in my test.
> > My system has 12 disks, each with two partitions. The system runs fio
> > sequential writes on all partitions, with 8 jobs per partition.
> > 2.6.31-rc1: fio gives 460MB/s disk I/O.
> > 2.6.31-rc2: fio gives about 400MB/s disk I/O. Reverting the patch brings the
> > speed back to 460MB/s.
> > 
> > Under the latest git: fio gives 450MB/s disk I/O; after reverting the patch,
> > the speed is 484MB/s.
> > 
> > With the patch, fio reports fewer I/O merges and more interrupts. My naive
> > analysis is that the patch makes balance_dirty_pages_ratelimited_nr() limit
> > the write chunk to 8 pages, and the task then soon goes to sleep in
> > balance_dirty_pages(), because most of the time bdi_nr_reclaimable <
> > bdi_thresh; so when the pages are written out, the chunk is 8 pages long
> > instead of 4MB long. Without the patch, a thread can write 8 pages, move
> > some pages to writeback, and then continue writing. The patch seems to
> > break this.
> > 
> > Unfortunately I can't figure out a fix for this issue; hopefully you have
> > more ideas.
> 
> This whole writeback business is very fragile;

Agreed, sorry..

> the patch does indeed cure a few cases and compounds a few others, a
> typical trade-off.
> 
> People are looking at it.

Staring at the changelog, I don't think balance_dirty_pages() could
"overshoot its limits and move all the dirty pages to writeback",
because it will break once enough pages have been written:

                if (pages_written >= write_chunk)
                        break;          /* We've done our duty */

The observed "overshooting" may well be the background_writeout()
behavior, which can push the dirty numbers all the way down to 0.


    mm: prevent balance_dirty_pages() from doing too much work

    balance_dirty_pages can overreact and move all of the dirty pages to
    writeback unnecessarily.

    balance_dirty_pages makes its decision to throttle based on the number of
    dirty plus writeback pages that are over the calculated limit, so it will
    continue to move pages even when there are plenty of pages in writeback
    and less than the threshold still dirty.

    This allows it to overshoot its limits and move all the dirty pages to
    writeback while waiting for the drives to catch up and empty the writeback
    list.


I'm not sure how this patch stopped the "overshooting" behavior.
Maybe it managed not to start the background pdflush, or the pdflush
thread that was started exited because it found writeback already in
progress by someone else?

-               if (bdi_nr_reclaimable) {
+               if (bdi_nr_reclaimable > bdi_thresh) {
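
For reference, the "in progress by someone else" check I have in mind is
the BDI_pdflush bit, roughly (simplified from fs/fs-writeback.c):

	static int writeback_acquire(struct backing_dev_info *bdi)
	{
		/* only one pdflush thread per backing device */
		return !test_and_set_bit(BDI_pdflush, &bdi->state);
	}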

Thanks,
Fengguang


* Re: regression in page writeback
  2009-09-22  8:05   ` Wu Fengguang
@ 2009-09-22  8:09     ` Peter Zijlstra
  2009-09-22  8:24       ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Peter Zijlstra @ 2009-09-22  8:09 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Li, Shaohua, linux-kernel, richard, jens.axboe, akpm, Chris Mason

On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> 
> I'm not sure how this patch stopped the "overshooting" behavior.
> Maybe it managed to not start the background pdflush, or the started
> pdflush thread exited because it found writeback is in progress by
> someone else?
> 
> -               if (bdi_nr_reclaimable) {
> +               if (bdi_nr_reclaimable > bdi_thresh) {

The idea is that we shouldn't move more pages from dirty -> writeback
when there's not actually that much dirty left.

Now, I'm not sure about the > bdi_thresh part; I've suggested maybe
using bdi_thresh/2 a few times, but it generally didn't seem to make
much of a difference.
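
That is, something like this (untested sketch):

-		if (bdi_nr_reclaimable > bdi_thresh) {
+		if (bdi_nr_reclaimable > bdi_thresh / 2) {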





* Re: regression in page writeback
  2009-09-22  8:09     ` Peter Zijlstra
@ 2009-09-22  8:24       ` Wu Fengguang
  2009-09-22  8:32         ` Peter Zijlstra
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-22  8:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Li, Shaohua, linux-kernel, richard, jens.axboe, akpm, Chris Mason

On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > 
> > I'm not sure how this patch stopped the "overshooting" behavior.
> > Maybe it managed to not start the background pdflush, or the started
> > pdflush thread exited because it found writeback is in progress by
> > someone else?
> > 
> > -               if (bdi_nr_reclaimable) {
> > +               if (bdi_nr_reclaimable > bdi_thresh) {
> 
> The idea is that we shouldn't move more pages from dirty -> writeback
> when there's not actually that much dirty left.

IMHO this makes little sense given that pdflush will move all dirty
pages anyway. pdflush should already be started to do background
writeback before the process is throttled, and it is designed to sync
all current dirty pages as quickly as possible and as much as possible.

> Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> use bdi_thresh/2 a few times, but it generally didn't seem to make much
> of a difference.

One possible difference is that the process may end up waiting a longer
time in order to sync write_chunk pages and quit the throttle. This
could hurt the responsiveness of the throttled process.

Thanks,
Fengguang


* Re: regression in page writeback
  2009-09-22  8:24       ` Wu Fengguang
@ 2009-09-22  8:32         ` Peter Zijlstra
  2009-09-22  8:51           ` Wu Fengguang
                             ` (2 more replies)
  0 siblings, 3 replies; 79+ messages in thread
From: Peter Zijlstra @ 2009-09-22  8:32 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Li, Shaohua, linux-kernel, richard, jens.axboe, akpm, Chris Mason

On Tue, 2009-09-22 at 16:24 +0800, Wu Fengguang wrote:
> On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> > On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > > 
> > > I'm not sure how this patch stopped the "overshooting" behavior.
> > > Maybe it managed to not start the background pdflush, or the started
> > > pdflush thread exited because it found writeback is in progress by
> > > someone else?
> > > 
> > > -               if (bdi_nr_reclaimable) {
> > > +               if (bdi_nr_reclaimable > bdi_thresh) {
> > 
> > The idea is that we shouldn't move more pages from dirty -> writeback
> > when there's not actually that much dirty left.
> 
> IMHO this makes little sense given that pdflush will move all dirty
> pages anyway. pdflush should already be started to do background
> writeback before the process is throttled, and it is designed to sync
> all current dirty pages as quick as possible and as much as possible.

Not so, pdflush (or now the bdi writer thread thingies) should not
deplete all dirty pages but should stop writing once they are below the
background limit.
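
Roughly, the exit check in 2.6.31's background_writeout() (simplified
sketch, from memory):

	for (;;) {
		long background_thresh;
		long dirty_thresh;

		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
		if (global_page_state(NR_FILE_DIRTY) +
		    global_page_state(NR_UNSTABLE_NFS) < background_thresh &&
		    min_pages <= 0)
			break;	/* below the background limit: stop writing */
		/* ... otherwise write out up to MAX_WRITEBACK_PAGES and loop ... */
	}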

> > Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> > use bdi_thresh/2 a few times, but it generally didn't seem to make much
> > of a difference.
> 
> One possible difference is, the process may end up waiting longer time
> in order to sync write_chunk pages and quit the throttle. This could
> hurt the responsiveness of the throttled process.

Well, that's all because this congestion_wait stuff is borken..



* Re: regression in page writeback
  2009-09-22  8:32         ` Peter Zijlstra
@ 2009-09-22  8:51           ` Wu Fengguang
  2009-09-22  8:52           ` Richard Kennedy
  2009-09-22 15:52           ` Chris Mason
  2 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-22  8:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Li, Shaohua, linux-kernel, richard, jens.axboe, akpm,
	Chris Mason, Theodore Ts'o, Dave Chinner, Christoph Hellwig,
	linux-fsdevel

On Tue, Sep 22, 2009 at 04:32:14PM +0800, Peter Zijlstra wrote:
> On Tue, 2009-09-22 at 16:24 +0800, Wu Fengguang wrote:
> > On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> > > On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > > > 
> > > > I'm not sure how this patch stopped the "overshooting" behavior.
> > > > Maybe it managed to not start the background pdflush, or the started
> > > > pdflush thread exited because it found writeback is in progress by
> > > > someone else?
> > > > 
> > > > -               if (bdi_nr_reclaimable) {
> > > > +               if (bdi_nr_reclaimable > bdi_thresh) {
> > > 
> > > The idea is that we shouldn't move more pages from dirty -> writeback
> > > when there's not actually that much dirty left.
> > 
> > IMHO this makes little sense given that pdflush will move all dirty
> > pages anyway. pdflush should already be started to do background
> > writeback before the process is throttled, and it is designed to sync
> > all current dirty pages as quick as possible and as much as possible.
> 
> Not so, pdflush (or now the bdi writer thread thingies) should not
> deplete all dirty pages but should stop writing once they are below the
> background limit.

(add CC to fs people for more thoughts :)

> > > Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> > > use bdi_thresh/2 a few times, but it generally didn't seem to make much
> > > of a difference.
> > 
> > One possible difference is, the process may end up waiting longer time
> > in order to sync write_chunk pages and quit the throttle. This could
> > hurt the responsiveness of the throttled process.
> 
> Well, that's all because this congestion_wait stuff is borken..

Yes, congestion_wait is bad.. I do like the idea of lowering
bdi_thresh to help reduce the uncertainty of the throttle time :)

Thanks,
Fengguang


* Re: regression in page writeback
  2009-09-22  8:32         ` Peter Zijlstra
  2009-09-22  8:51           ` Wu Fengguang
@ 2009-09-22  8:52           ` Richard Kennedy
  2009-09-22  9:05             ` Wu Fengguang
  2009-09-22 15:52           ` Chris Mason
  2 siblings, 1 reply; 79+ messages in thread
From: Richard Kennedy @ 2009-09-22  8:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wu Fengguang, Li, Shaohua, linux-kernel, jens.axboe, akpm, Chris Mason

On Tue, 2009-09-22 at 10:32 +0200, Peter Zijlstra wrote:
> On Tue, 2009-09-22 at 16:24 +0800, Wu Fengguang wrote:
> > On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> > > On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > > > 
> > > > I'm not sure how this patch stopped the "overshooting" behavior.
> > > > Maybe it managed to not start the background pdflush, or the started
> > > > pdflush thread exited because it found writeback is in progress by
> > > > someone else?
> > > > 
> > > > -               if (bdi_nr_reclaimable) {
> > > > +               if (bdi_nr_reclaimable > bdi_thresh) {
> > > 
> > > The idea is that we shouldn't move more pages from dirty -> writeback
> > > when there's not actually that much dirty left.
> > 
> > IMHO this makes little sense given that pdflush will move all dirty
> > pages anyway. pdflush should already be started to do background
> > writeback before the process is throttled, and it is designed to sync
> > all current dirty pages as quick as possible and as much as possible.
> 
> Not so, pdflush (or now the bdi writer thread thingies) should not
> deplete all dirty pages but should stop writing once they are below the
> background limit.
> 
> > > Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> > > use bdi_thresh/2 a few times, but it generally didn't seem to make much
> > > of a difference.
> > 
> > One possible difference is, the process may end up waiting longer time
> > in order to sync write_chunk pages and quit the throttle. This could
> > hurt the responsiveness of the throttled process.
> 
> Well, that's all because this congestion_wait stuff is borken..
> 

The problem occurred because pdflush stopped when the number of dirty pages
reached the background threshold, while balance_dirty_pages kept moving
pages to writeback because the total of dirty + writeback was over the
limit.

I tried Peter's suggestion of using bdi_thresh/2 but didn't see any
difference on my desktop hardware; it may help RAID setups, though I don't
think anyone has tried that.

Since Jens Axboe's per-bdi code was merged in the latest kernel tree, there
has been a lot of change in these code paths, so I'm not sure how it behaves
now or whether this change is still needed or relevant.

regards
Richard




* Re: regression in page writeback
  2009-09-22  8:52           ` Richard Kennedy
@ 2009-09-22  9:05             ` Wu Fengguang
  2009-09-22 11:41               ` Shaohua Li
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-22  9:05 UTC (permalink / raw)
  To: Richard Kennedy
  Cc: Peter Zijlstra, Li, Shaohua, linux-kernel, jens.axboe, akpm, Chris Mason

On Tue, Sep 22, 2009 at 04:52:48PM +0800, Richard Kennedy wrote:
> On Tue, 2009-09-22 at 10:32 +0200, Peter Zijlstra wrote:
> > On Tue, 2009-09-22 at 16:24 +0800, Wu Fengguang wrote:
> > > On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> > > > On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > > > > 
> > > > > I'm not sure how this patch stopped the "overshooting" behavior.
> > > > > Maybe it managed to not start the background pdflush, or the started
> > > > > pdflush thread exited because it found writeback is in progress by
> > > > > someone else?
> > > > > 
> > > > > -               if (bdi_nr_reclaimable) {
> > > > > +               if (bdi_nr_reclaimable > bdi_thresh) {
> > > > 
> > > > The idea is that we shouldn't move more pages from dirty -> writeback
> > > > when there's not actually that much dirty left.
> > > 
> > > IMHO this makes little sense given that pdflush will move all dirty
> > > pages anyway. pdflush should already be started to do background
> > > writeback before the process is throttled, and it is designed to sync
> > > all current dirty pages as quick as possible and as much as possible.
> > 
> > Not so, pdflush (or now the bdi writer thread thingies) should not
> > deplete all dirty pages but should stop writing once they are below the
> > background limit.
> > 
> > > > Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> > > > use bdi_thresh/2 a few times, but it generally didn't seem to make much
> > > > of a difference.
> > > 
> > > One possible difference is, the process may end up waiting longer time
> > > in order to sync write_chunk pages and quit the throttle. This could
> > > hurt the responsiveness of the throttled process.
> > 
> > Well, that's all because this congestion_wait stuff is borken..
> > 
> 
> The problem occurred as pdflush stopped when the number of dirty pages
> reached the background threshold but balance_dirty_pages kept moving
> pages to writeback because the total of dirty + writeback was over the
> limit. 

Ah yes, it is possible. The pdflush started by balance_dirty_pages()
does stop at the background threshold (sorry for the confusion!),
and balance_dirty_pages() then continues to sync pages in _smaller_
chunk sizes, which should be suboptimal..

Thanks,
Fengguang


* Re: regression in page writeback
  2009-09-22  5:49 regression in page writeback Shaohua Li
  2009-09-22  6:40 ` Peter Zijlstra
@ 2009-09-22 10:49 ` Wu Fengguang
  2009-09-22 11:50   ` Shaohua Li
  1 sibling, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-22 10:49 UTC (permalink / raw)
  To: Shaohua Li
  Cc: linux-kernel, richard, a.p.zijlstra, jens.axboe, akpm,
	linux-fsdevel, Chris Mason

[-- Attachment #1: Type: text/plain, Size: 1305 bytes --]

Shaohua,

On Tue, Sep 22, 2009 at 01:49:13PM +0800, Li, Shaohua wrote:
> Hi,
> Commit d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 causes a disk I/O regression
> in my test.
> My system has 12 disks, each with two partitions. The system runs fio
> sequential writes on all partitions, with 8 jobs per partition.
> 2.6.31-rc1: fio gives 460MB/s disk I/O.
> 2.6.31-rc2: fio gives about 400MB/s disk I/O. Reverting the patch brings the
> speed back to 460MB/s.
> 
> Under the latest git: fio gives 450MB/s disk I/O; after reverting the patch,
> the speed is 484MB/s.
> 
> With the patch, fio reports fewer I/O merges and more interrupts. My naive
> analysis is that the patch makes balance_dirty_pages_ratelimited_nr() limit
> the write chunk to 8 pages, and the task then soon goes to sleep in
> balance_dirty_pages(), because most of the time bdi_nr_reclaimable <
> bdi_thresh; so when the pages are written out, the chunk is 8 pages long
> instead of 4MB long. Without the patch, a thread can write 8 pages, move
> some pages to writeback, and then continue writing. The patch seems to
> break this.

Do you have traces/numbers for the above description?

> Unfortunately I can't figure out a fix for this issue; hopefully
> you have more ideas.

Attached is a very verbose writeback debug patch; hope it helps and
doesn't disturb the workload too much :)

Thanks,
Fengguang


[-- Attachment #2: writeback-debug-2.6.31.patch --]
[-- Type: text/x-diff, Size: 5468 bytes --]

--- linux-2.6.orig/fs/fs-writeback.c	2009-08-23 14:44:22.000000000 +0800
+++ linux-2.6/fs/fs-writeback.c	2009-09-22 18:45:06.000000000 +0800
@@ -26,6 +26,9 @@
 #include "internal.h"
 
 
+int sysctl_dirty_debug __read_mostly;
+
+
 /**
  * writeback_acquire - attempt to get exclusive writeback access to a device
  * @bdi: the device's backing_dev_info structure
@@ -186,6 +189,11 @@ static int write_inode(struct inode *ino
 	return 0;
 }
 
+#define redirty_tail(inode)						\
+	do {								\
+		__redirty_tail(inode, __LINE__);			\
+	} while (0)
+
 /*
  * Redirty an inode: set its when-it-was dirtied timestamp and move it to the
  * furthest end of its superblock's dirty-inode list.
@@ -195,10 +203,15 @@ static int write_inode(struct inode *ino
  * the case then the inode must have been redirtied while it was being written
  * out and we don't reset its dirtied_when.
  */
-static void redirty_tail(struct inode *inode)
+static void __redirty_tail(struct inode *inode, int line)
 {
 	struct super_block *sb = inode->i_sb;
 
+	if (sysctl_dirty_debug) {
+		printk(KERN_DEBUG "redirty_tail +%d: inode %lu\n",
+				line, inode->i_ino);
+	}
+
 	if (!list_empty(&sb->s_dirty)) {
 		struct inode *tail_inode;
 
@@ -210,12 +223,22 @@ static void redirty_tail(struct inode *i
 	list_move(&inode->i_list, &sb->s_dirty);
 }
 
+#define requeue_io(inode)						\
+	do {								\
+		__requeue_io(inode, __LINE__);				\
+	} while (0)
+
 /*
  * requeue inode for re-scanning after sb->s_io list is exhausted.
  */
-static void requeue_io(struct inode *inode)
+static void __requeue_io(struct inode *inode, int line)
 {
 	list_move(&inode->i_list, &inode->i_sb->s_more_io);
+
+	if (sysctl_dirty_debug) {
+		printk(KERN_DEBUG "requeue_io +%d: inode %lu\n",
+				line, inode->i_ino);
+	}
 }
 
 static void inode_sync_complete(struct inode *inode)
--- linux-2.6.orig/include/linux/writeback.h	2009-08-23 14:44:23.000000000 +0800
+++ linux-2.6/include/linux/writeback.h	2009-09-22 18:29:05.000000000 +0800
@@ -168,5 +168,6 @@ void writeback_set_ratelimit(void);
 extern int nr_pdflush_threads;	/* Global so it can be exported to sysctl
 				   read-only. */
 
+extern int sysctl_dirty_debug;
 
 #endif		/* WRITEBACK_H */
--- linux-2.6.orig/kernel/sysctl.c	2009-08-23 14:44:23.000000000 +0800
+++ linux-2.6/kernel/sysctl.c	2009-09-22 18:29:05.000000000 +0800
@@ -1516,6 +1516,14 @@ static struct ctl_table fs_table[] = {
 		.extra1		= &zero,
 		.extra2		= &two,
 	},
+	{
+		.ctl_name	= CTL_UNNUMBERED,
+		.procname	= "dirty_debug",
+		.data		= &sysctl_dirty_debug,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &proc_dointvec,
+	},
 #if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
 	{
 		.ctl_name	= CTL_UNNUMBERED,
--- linux-2.6.orig/mm/page-writeback.c	2009-08-23 14:44:23.000000000 +0800
+++ linux-2.6/mm/page-writeback.c	2009-09-22 18:45:50.000000000 +0800
@@ -116,6 +116,35 @@ EXPORT_SYMBOL(laptop_mode);
 
 /* End of sysctl-exported parameters */
 
+#define writeback_debug_report(n, wbc) do {                             \
+	if(sysctl_dirty_debug)						\
+		__writeback_debug_report(n, wbc,			\
+				__FILE__, __LINE__, __FUNCTION__);	\
+} while (0)
+
+void print_writeback_control(struct writeback_control *wbc)
+{
+	printk(KERN_DEBUG
+			"global dirty=%lu writeback=%lu nfs=%lu "
+			"flags=%c%c towrite=%ld skipped=%ld\n",
+			global_page_state(NR_FILE_DIRTY),
+			global_page_state(NR_WRITEBACK),
+			global_page_state(NR_UNSTABLE_NFS),
+			wbc->encountered_congestion ? 'C':'_',
+			wbc->more_io ? 'M':'_',
+			wbc->nr_to_write,
+			wbc->pages_skipped);
+}
+
+void __writeback_debug_report(long n, struct writeback_control *wbc,
+		const char *file, int line, const char *func)
+{
+	printk(KERN_DEBUG "%s %d %s: comm=%s pid=%d n=%ld\n",
+			file, line, func,
+			current->comm, current->pid,
+			n);
+	print_writeback_control(wbc);
+}
 
 static void background_writeout(unsigned long _min_pages);
 
@@ -550,7 +579,12 @@ static void balance_dirty_pages(struct a
 			pages_written += write_chunk - wbc.nr_to_write;
 			get_dirty_limits(&background_thresh, &dirty_thresh,
 				       &bdi_thresh, bdi);
+			writeback_debug_report(pages_written, &wbc);
 		}
+		printk("bdi_nr_reclaimable=%lu, bdi_thresh=%lu, "
+		       "background_thresh=%lu, dirty_thresh=%lu\n",
+		       bdi_nr_reclaimable, bdi_thresh,
+		       background_thresh, dirty_thresh);
 
 		/*
 		 * In order to avoid the stacked BDI deadlock we need
@@ -670,6 +704,11 @@ void throttle_vm_writeout(gfp_t gfp_mask
 			global_page_state(NR_WRITEBACK) <= dirty_thresh)
                         	break;
                 congestion_wait(BLK_RW_ASYNC, HZ/10);
+		printk(KERN_DEBUG "throttle_vm_writeout: "
+				"congestion_wait on %lu+%lu > %lu\n",
+				global_page_state(NR_UNSTABLE_NFS),
+				global_page_state(NR_WRITEBACK),
+				dirty_thresh);
 
 		/*
 		 * The caller might hold locks which can prevent IO completion
@@ -719,7 +758,9 @@ static void background_writeout(unsigned
 			else
 				break;
 		}
+		writeback_debug_report(min_pages, &wbc);
 	}
+	writeback_debug_report(min_pages, &wbc);
 }
 
 /*
@@ -792,7 +833,9 @@ static void wb_kupdate(unsigned long arg
 				break;	/* All the old data is written */
 		}
 		nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		writeback_debug_report(nr_to_write, &wbc);
 	}
+	writeback_debug_report(nr_to_write, &wbc);
 	if (time_before(next_jif, jiffies + HZ))
 		next_jif = jiffies + HZ;
 	if (dirty_writeback_interval)


* Re: regression in page writeback
  2009-09-22  9:05             ` Wu Fengguang
@ 2009-09-22 11:41               ` Shaohua Li
  0 siblings, 0 replies; 79+ messages in thread
From: Shaohua Li @ 2009-09-22 11:41 UTC (permalink / raw)
  To: Wu, Fengguang
  Cc: Richard Kennedy, Peter Zijlstra, linux-kernel, jens.axboe, akpm,
	Chris Mason

On Tue, Sep 22, 2009 at 05:05:01PM +0800, Wu, Fengguang wrote:
> On Tue, Sep 22, 2009 at 04:52:48PM +0800, Richard Kennedy wrote:
> > On Tue, 2009-09-22 at 10:32 +0200, Peter Zijlstra wrote:
> > > On Tue, 2009-09-22 at 16:24 +0800, Wu Fengguang wrote:
> > > > On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> > > > > On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > > > > > 
> > > > > > I'm not sure how this patch stopped the "overshooting" behavior.
> > > > > > Maybe it managed to not start the background pdflush, or the started
> > > > > > pdflush thread exited because it found writeback is in progress by
> > > > > > someone else?
> > > > > > 
> > > > > > -               if (bdi_nr_reclaimable) {
> > > > > > +               if (bdi_nr_reclaimable > bdi_thresh) {
> > > > > 
> > > > > The idea is that we shouldn't move more pages from dirty -> writeback
> > > > > when there's not actually that much dirty left.
> > > > 
> > > > IMHO this makes little sense given that pdflush will move all dirty
> > > > pages anyway. pdflush should already be started to do background
> > > > writeback before the process is throttled, and it is designed to sync
> > > > all current dirty pages as quick as possible and as much as possible.
> > > 
> > > Not so, pdflush (or now the bdi writer thread thingies) should not
> > > deplete all dirty pages but should stop writing once they are below the
> > > background limit.
> > > 
> > > > > Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> > > > > use bdi_thresh/2 a few times, but it generally didn't seem to make much
> > > > > of a difference.
> > > > 
> > > > One possible difference is, the process may end up waiting longer time
> > > > in order to sync write_chunk pages and quit the throttle. This could
> > > > hurt the responsiveness of the throttled process.
> > > 
> > > Well, that's all because this congestion_wait stuff is borken..
> > > 
> > 
> > The problem occurred as pdflush stopped when the number of dirty pages
> > reached the background threshold but balance_dirty_pages kept moving
> > pages to writeback because the total of dirty + writeback was over the
> > limit. 
> 
> Ah yes, it is possible. The pdflush started by balance_dirty_pages()
> does stop at the background threshold (sorry for the confusion!),
> and balance_dirty_pages() then continues to sync pages in _smaller_
> chunk sizes, which should be suboptimal..
This is possible. Without the patch, balance_dirty_pages() can move some
pages to writeback without needing to do congestion_wait(), so the task can
continue writing. The patch seems to break this.
I tried setting dirty_exceeded only when bdi_nr_reclaimable > bdi_thresh;
this helps a little in my test, but it still doesn't reach the best result.
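
Roughly like this (an illustrative sketch only, not necessarily the exact
hunk I tested):

-		if (!bdi->dirty_exceeded)
+		if (!bdi->dirty_exceeded && bdi_nr_reclaimable > bdi_thresh)
 			bdi->dirty_exceeded = 1;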


* Re: regression in page writeback
  2009-09-22 10:49 ` Wu Fengguang
@ 2009-09-22 11:50   ` Shaohua Li
  2009-09-22 13:39     ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Shaohua Li @ 2009-09-22 11:50 UTC (permalink / raw)
  To: Wu, Fengguang
  Cc: linux-kernel, richard, a.p.zijlstra, jens.axboe, akpm,
	linux-fsdevel, Chris Mason

On Tue, Sep 22, 2009 at 06:49:15PM +0800, Wu, Fengguang wrote:
> Shaohua,
> 
> On Tue, Sep 22, 2009 at 01:49:13PM +0800, Li, Shaohua wrote:
> > Hi,
> > Commit d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 causes a disk I/O regression
> > in my test.
> > My system has 12 disks, each with two partitions. The system runs fio
> > sequential writes on all partitions, with 8 jobs per partition.
> > 2.6.31-rc1: fio gives 460MB/s disk I/O.
> > 2.6.31-rc2: fio gives about 400MB/s disk I/O. Reverting the patch brings the
> > speed back to 460MB/s.
> > 
> > Under the latest git: fio gives 450MB/s disk I/O; after reverting the patch,
> > the speed is 484MB/s.
> > 
> > With the patch, fio reports fewer I/O merges and more interrupts. My naive
> > analysis is that the patch makes balance_dirty_pages_ratelimited_nr() limit
> > the write chunk to 8 pages, and the task then soon goes to sleep in
> > balance_dirty_pages(), because most of the time bdi_nr_reclaimable <
> > bdi_thresh; so when the pages are written out, the chunk is 8 pages long
> > instead of 4MB long. Without the patch, a thread can write 8 pages, move
> > some pages to writeback, and then continue writing. The patch seems to
> > break this.
> 
> Do you have trace/numbers for above descriptions?
No, just a guess, because there is less I/O merging. And watching each
bdi's state, bdi_nr_reclaimable < bdi_thresh always seems to be true.
 
> > Unfortunately I can't figure out a fix for this issue; hopefully
> > you have more ideas.
> 
> Attached is a very verbose writeback debug patch, hope it helps and
> won't disturb the workload a lot :)
Hmm, the log buffer will overflow soon; there is >400MB/s of I/O. I tried
to reproduce this issue on a system with two disks, but failed. Anyway, I'll
try it out tomorrow.


* Re: regression in page writeback
  2009-09-22 11:50   ` Shaohua Li
@ 2009-09-22 13:39     ` Wu Fengguang
  2009-09-23  1:52       ` Shaohua Li
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-22 13:39 UTC (permalink / raw)
  To: Li, Shaohua
  Cc: linux-kernel, richard, a.p.zijlstra, jens.axboe, akpm,
	linux-fsdevel, Chris Mason

On Tue, Sep 22, 2009 at 07:50:15PM +0800, Li, Shaohua wrote:
> On Tue, Sep 22, 2009 at 06:49:15PM +0800, Wu, Fengguang wrote:
> > Shaohua,
> > 
> > On Tue, Sep 22, 2009 at 01:49:13PM +0800, Li, Shaohua wrote:
> > > Hi,
> > > Commit d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 causes a disk I/O regression
> > > in my test.
> > > My system has 12 disks, each with two partitions. The system runs fio
> > > sequential writes on all partitions, with 8 jobs per partition.
> > > 2.6.31-rc1: fio gives 460MB/s disk I/O.
> > > 2.6.31-rc2: fio gives about 400MB/s disk I/O. Reverting the patch brings the
> > > speed back to 460MB/s.
> > > 
> > > Under the latest git: fio gives 450MB/s disk I/O; after reverting the patch,
> > > the speed is 484MB/s.
> > > 
> > > With the patch, fio reports fewer I/O merges and more interrupts. My naive
> > > analysis is that the patch makes balance_dirty_pages_ratelimited_nr() limit
> > > the write chunk to 8 pages, and the task then soon goes to sleep in
> > > balance_dirty_pages(), because most of the time bdi_nr_reclaimable <
> > > bdi_thresh; so when the pages are written out, the chunk is 8 pages long
> > > instead of 4MB long. Without the patch, a thread can write 8 pages, move
> > > some pages to writeback, and then continue writing. The patch seems to
> > > break this.
> > 
> > Do you have trace/numbers for above descriptions?
> No, just a guess, because there is less I/O merging. And watching each
> bdi's state, bdi_nr_reclaimable < bdi_thresh always seems to be true.

Ah OK.

> > > Unfortunately I can't figure out a fix for this issue; hopefully
> > > you have more ideas.
> > 
> > Attached is a very verbose writeback debug patch, hope it helps and
> > won't disturb the workload a lot :)
> Hmm, the log buffer will overflow soon; there is >400MB/s of I/O. I tried
> to reproduce this issue on a system with two disks, but failed. Anyway, I'll
> try it out tomorrow.

Thank you~  I'd recommend using netconsole or a serial line, and stopping
the local klogd, because writing the log messages could add noise.

Thanks,
Fengguang



* Re: regression in page writeback
  2009-09-22  8:32         ` Peter Zijlstra
  2009-09-22  8:51           ` Wu Fengguang
  2009-09-22  8:52           ` Richard Kennedy
@ 2009-09-22 15:52           ` Chris Mason
  2009-09-23  0:22             ` Wu Fengguang
  2009-09-23  6:41             ` Shaohua Li
  2 siblings, 2 replies; 79+ messages in thread
From: Chris Mason @ 2009-09-22 15:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wu Fengguang, Li, Shaohua, linux-kernel, richard, jens.axboe, akpm

On Tue, Sep 22, 2009 at 10:32:14AM +0200, Peter Zijlstra wrote:
> On Tue, 2009-09-22 at 16:24 +0800, Wu Fengguang wrote:
> > On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> > > On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > > > 
> > > > I'm not sure how this patch stopped the "overshooting" behavior.
> > > > Maybe it managed to not start the background pdflush, or the started
> > > > pdflush thread exited because it found writeback is in progress by
> > > > someone else?
> > > > 
> > > > -               if (bdi_nr_reclaimable) {
> > > > +               if (bdi_nr_reclaimable > bdi_thresh) {
> > > 
> > > The idea is that we shouldn't move more pages from dirty -> writeback
> > > when there's not actually that much dirty left.
> > 
> > IMHO this makes little sense given that pdflush will move all dirty
> > pages anyway. pdflush should already be started to do background
> > writeback before the process is throttled, and it is designed to sync
> > all current dirty pages as quick as possible and as much as possible.
> 
> Not so, pdflush (or now the bdi writer thread thingies) should not
> deplete all dirty pages but should stop writing once they are below the
> background limit.
> 
> > > Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> > > use bdi_thresh/2 a few times, but it generally didn't seem to make much
> > > of a difference.
> > 
> > One possible difference is, the process may end up waiting longer time
> > in order to sync write_chunk pages and quit the throttle. This could
> > hurt the responsiveness of the throttled process.
> 
> Well, that's all because this congestion_wait stuff is borken..
> 

I'd suggest retesting with a new baseline against the code in Linus' git
today.  Overall I think the change to make balance_dirty_pages() sleep
instead of kicking more IO out is a very good one.  It helps in most
workloads here.

The congestion_wait() from 2.6.31 may just be too long to sleep waiting
for progress on very fast IO rigs.  Try switching to
schedule_timeout_interruptible(1);
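
That is, something like this in balance_dirty_pages() (untested sketch):

-		congestion_wait(BLK_RW_ASYNC, HZ/10);
+		schedule_timeout_interruptible(1);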

-chris



* Re: regression in page writeback
  2009-09-22 15:52           ` Chris Mason
@ 2009-09-23  0:22             ` Wu Fengguang
  2009-09-23  0:54               ` Andrew Morton
  2009-09-23  6:41             ` Shaohua Li
  1 sibling, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  0:22 UTC (permalink / raw)
  To: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe, akpm

On Tue, Sep 22, 2009 at 11:52:59PM +0800, Chris Mason wrote:
> On Tue, Sep 22, 2009 at 10:32:14AM +0200, Peter Zijlstra wrote:
> > On Tue, 2009-09-22 at 16:24 +0800, Wu Fengguang wrote:
> > > On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> > > > On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > > > > 
> > > > > I'm not sure how this patch stopped the "overshooting" behavior.
> > > > > Maybe it managed to not start the background pdflush, or the started
> > > > > pdflush thread exited because it found writeback is in progress by
> > > > > someone else?
> > > > > 
> > > > > -               if (bdi_nr_reclaimable) {
> > > > > +               if (bdi_nr_reclaimable > bdi_thresh) {
> > > > 
> > > > The idea is that we shouldn't move more pages from dirty -> writeback
> > > > when there's not actually that much dirty left.
> > > 
> > > IMHO this makes little sense given that pdflush will move all dirty
> > > pages anyway. pdflush should already be started to do background
> > > writeback before the process is throttled, and it is designed to sync
> > > all current dirty pages as quick as possible and as much as possible.
> > 
> > Not so, pdflush (or now the bdi writer thread thingies) should not
> > deplete all dirty pages but should stop writing once they are below the
> > background limit.
> > 
> > > > Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> > > > use bdi_thresh/2 a few times, but it generally didn't seem to make much
> > > > of a difference.
> > > 
> > > One possible difference is, the process may end up waiting longer time
> > > in order to sync write_chunk pages and quit the throttle. This could
> > > hurt the responsiveness of the throttled process.
> > 
> > Well, that's all because this congestion_wait stuff is borken..
> > 
> 
> I'd suggest retesting with a new baseline against the code in Linus' git
> today.  Overall I think the change to make balance_dirty_pages() sleep
> instead of kick more IO out is a very good one.  It helps in most
> workloads here.
> 
> The congestion_wait() from 2.6.31 may just be too long to sleep waiting
> for progress on very fast IO rigs.  Try switching to
> schedule_timeout_interruptible(1);

Jens' per-bdi writeback has another improvement. In 2.6.31, when
superblocks A and B both have 100000 dirty pages, it will first
exhaust A's 100000 dirty pages before going on to sync B's.
In the latest git, A and B make progress at roughly the same time.

So for 2.6.31 without this patch, it is possible for pdflush to sync
most of A's dirty pages and for balance_dirty_pages() to sync most of
B's dirty pages, because B is over its bdi threshold.

Thanks,
Fengguang


* Re: regression in page writeback
  2009-09-23  0:22             ` Wu Fengguang
@ 2009-09-23  0:54               ` Andrew Morton
  2009-09-23  1:17                 ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Andrew Morton @ 2009-09-23  0:54 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:

> Jens' per-bdi writeback has another improvement. In 2.6.31, when
> superblocks A and B both have 100000 dirty pages, it will first
> exhaust A's 100000 dirty pages before going on to sync B's.

That would only be true if someone broke 2.6.31.  Did they?

SYSCALL_DEFINE0(sync)
{
	wakeup_pdflush(0);
	sync_filesystems(0);
	sync_filesystems(1);
	if (unlikely(laptop_mode))
		laptop_sync_completion();
	return 0;
}

the sync_filesystems(0) is supposed to non-blockingly start IO against
all devices.  It used to do that correctly.  But people mucked with it
so perhaps it no longer does.



* Re: regression in page writeback
  2009-09-23  0:54               ` Andrew Morton
@ 2009-09-23  1:17                 ` Wu Fengguang
  2009-09-23  1:27                   ` Wu Fengguang
  2009-09-23  1:28                   ` Andrew Morton
  0 siblings, 2 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  1:17 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > superblocks A and B both have 100000 dirty pages, it will first
> > exhaust A's 100000 dirty pages before going on to sync B's.
> 
> That would only be true if someone broke 2.6.31.  Did they?
> 
> SYSCALL_DEFINE0(sync)
> {
> 	wakeup_pdflush(0);
> 	sync_filesystems(0);
> 	sync_filesystems(1);
> 	if (unlikely(laptop_mode))
> 		laptop_sync_completion();
> 	return 0;
> }
> 
> the sync_filesystems(0) is supposed to non-blockingly start IO against
> all devices.  It used to do that correctly.  But people mucked with it
> so perhaps it no longer does.

I'm referring to writeback_inodes(), each invocation of which (to sync
4MB) does the same iteration over superblocks A => B => C ... So if
A has dirty pages, it will always be served first.

So if wbc->bdi == NULL (which is true for kupdate/background sync), it
will have to first exhaust A before going on to B and C.

There is no "cursor" in the superblock-level iteration.

sync wants to exhaust all new inodes in A, B and C anyway, and it has
livelock-prevention logic based on dirtied_when, so that's not a big
problem for sync.
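
(The livelock prevention I mean, roughly, in generic_sync_sb_inodes():)

		unsigned long start = jiffies;	/* livelock avoidance */
		...
		/* was this inode dirtied after the sync started? */
		if (inode_dirtied_after(inode, start))
			break;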

Thanks,
Fengguang



* Re: regression in page writeback
  2009-09-23  1:17                 ` Wu Fengguang
@ 2009-09-23  1:27                   ` Wu Fengguang
  2009-09-23  1:28                   ` Andrew Morton
  1 sibling, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  1:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 09:17:58AM +0800, Wu Fengguang wrote:
> On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > 
> > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > superblocks A and B both have 100000 dirty pages, it will first
> > > exhaust A's 100000 dirty pages before going on to sync B's.
> > 
> > That would only be true if someone broke 2.6.31.  Did they?
> > 
> > SYSCALL_DEFINE0(sync)
> > {
> > 	wakeup_pdflush(0);
> > 	sync_filesystems(0);
> > 	sync_filesystems(1);
> > 	if (unlikely(laptop_mode))
> > 		laptop_sync_completion();
> > 	return 0;
> > }
> > 
> > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > all devices.  It used to do that correctly.  But people mucked with it
> > so perhaps it no longer does.
> 
> I'm referring to writeback_inodes(). Each invocation of which (to sync
> 4MB) will do the same iteration over superblocks A => B => C ... So if
> A has dirty pages, it will always be served first.
> 
> So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> will have to first exhaust A before going on to B and C.
> 
> There are no "cursor" in the superblock level iterations.

I even have an old patch for it, but Jens' patches are a more general solution.

Thanks,
Fengguang
---
writeback: continue from the last super_block in syncing

Cc: David Chinner <dgc@sgi.com>
Cc: Michael Rubin <mrubin@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Fengguang Wu <wfg@mail.ustc.edu.cn>
---
 fs/fs-writeback.c         |   12 ++++++++++++
 include/linux/writeback.h |    2 ++
 2 files changed, 14 insertions(+)

--- linux-2.6.orig/fs/fs-writeback.c
+++ linux-2.6/fs/fs-writeback.c
@@ -494,11 +494,19 @@ void
 writeback_inodes(struct writeback_control *wbc)
 {
 	struct super_block *sb;
+	int i;
+
+	if (wbc->sb_index)
+		wbc->more_io = 1;
 
 	might_sleep();
 	spin_lock(&sb_lock);
 restart:
+	i = -1;
 	list_for_each_entry_reverse(sb, &super_blocks, s_list) {
+		i++;
+		if (i < wbc->sb_index)
+			continue;
 		if (sb_has_dirty_inodes(sb)) {
 			/* we're making our own get_super here */
 			sb->s_count++;
@@ -520,9 +528,13 @@ restart:
 			if (__put_super_and_need_restart(sb))
 				goto restart;
 		}
+		if (list_empty(&sb->s_io))
+			wbc->sb_index++;
 		if (wbc->nr_to_write <= 0)
 			break;
 	}
+	if (&sb->s_list == &super_blocks)
+		wbc->sb_index = 0;
 	spin_unlock(&sb_lock);
 }
 
--- linux-2.6.orig/include/linux/writeback.h
+++ linux-2.6/include/linux/writeback.h
@@ -48,6 +48,8 @@ struct writeback_control {
 					   this for each page written */
 	long pages_skipped;		/* Pages which were not written */
 
+	int sb_index;			/* the superblock to continue from */
+
 	/*
 	 * For a_ops->writepages(): is start or end are non-zero then this is
 	 * a hint that the filesystem need only write out the pages inside that


* Re: regression in page writeback
  2009-09-23  1:17                 ` Wu Fengguang
  2009-09-23  1:27                   ` Wu Fengguang
@ 2009-09-23  1:28                   ` Andrew Morton
  2009-09-23  1:32                     ` Wu Fengguang
  2009-09-23  1:45                     ` Wu Fengguang
  1 sibling, 2 replies; 79+ messages in thread
From: Andrew Morton @ 2009-09-23  1:28 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:

> On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > 
> > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > superblocks A and B both have 100000 dirty pages, it will first
> > > exhaust A's 100000 dirty pages before going on to sync B's.
> > 
> > That would only be true if someone broke 2.6.31.  Did they?
> > 
> > SYSCALL_DEFINE0(sync)
> > {
> > 	wakeup_pdflush(0);
> > 	sync_filesystems(0);
> > 	sync_filesystems(1);
> > 	if (unlikely(laptop_mode))
> > 		laptop_sync_completion();
> > 	return 0;
> > }
> > 
> > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > all devices.  It used to do that correctly.  But people mucked with it
> > so perhaps it no longer does.
> 
> I'm referring to writeback_inodes(). Each invocation of which (to sync
> 4MB) will do the same iteration over superblocks A => B => C ... So if
> A has dirty pages, it will always be served first.
> 
> So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> will have to first exhaust A before going on to B and C.

But that works OK.  We fill the first device's queue, then it gets
congested and sync_sb_inodes() does nothing and we advance to the next
queue.

If a device has more than a queue's worth of dirty data then we'll
probably leave some of that dirty memory un-queued, so there's some
lack of concurrency in that situation.
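
That is, the nonblocking congestion check in generic_sync_sb_inodes(),
roughly (from memory):

		if (wbc->nonblocking && bdi_write_congested(bdi)) {
			wbc->encountered_congestion = 1;
			if (!sb_is_blkdev_sb(sb))
				break;		/* skip this congested fs */
			requeue_io(inode);
			continue;	/* skip this congested blockdev */
		}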




* Re: regression in page writeback
  2009-09-23  1:28                   ` Andrew Morton
@ 2009-09-23  1:32                     ` Wu Fengguang
  2009-09-23  1:47                       ` Andrew Morton
  2009-09-23  1:45                     ` Wu Fengguang
  1 sibling, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  1:32 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > 
> > > That would only be true if someone broke 2.6.31.  Did they?
> > > 
> > > SYSCALL_DEFINE0(sync)
> > > {
> > > 	wakeup_pdflush(0);
> > > 	sync_filesystems(0);
> > > 	sync_filesystems(1);
> > > 	if (unlikely(laptop_mode))
> > > 		laptop_sync_completion();
> > > 	return 0;
> > > }
> > > 
> > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > all devices.  It used to do that correctly.  But people mucked with it
> > > so perhaps it no longer does.
> > 
> > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > A has dirty pages, it will always be served first.
> > 
> > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > will have to first exhaust A before going on to B and C.
> 
> But that works OK.  We fill the first device's queue, then it gets
> congested and sync_sb_inodes() does nothing and we advance to the next
> queue.
> 
> If a device has more than a queue's worth of dirty data then we'll
> probably leave some of that dirty memory un-queued, so there's some
> lack of concurrency in that situation.

Yes, exactly, if the block device is not fast enough. In that case
balance_dirty_pages() may also kick in with a non-NULL bdi.

Thanks,
Fengguang


* Re: regression in page writeback
  2009-09-23  1:28                   ` Andrew Morton
  2009-09-23  1:32                     ` Wu Fengguang
@ 2009-09-23  1:45                     ` Wu Fengguang
  2009-09-23  1:59                       ` Andrew Morton
  1 sibling, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  1:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > 
> > > That would only be true if someone broke 2.6.31.  Did they?
> > > 
> > > SYSCALL_DEFINE0(sync)
> > > {
> > > 	wakeup_pdflush(0);
> > > 	sync_filesystems(0);
> > > 	sync_filesystems(1);
> > > 	if (unlikely(laptop_mode))
> > > 		laptop_sync_completion();
> > > 	return 0;
> > > }
> > > 
> > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > all devices.  It used to do that correctly.  But people mucked with it
> > > so perhaps it no longer does.
> > 
> > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > A has dirty pages, it will always be served first.
> > 
> > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > will have to first exhaust A before going on to B and C.
> 
> But that works OK.  We fill the first device's queue, then it gets
> congested and sync_sb_inodes() does nothing and we advance to the next
> queue.

So in common cases "exhaust" is a bit of an exaggeration, but A does
receive much more opportunity than B. Computation resources for IO
submission are unbalanced in favor of A, and there is pointless overhead
in rechecking A.

> If a device has more than a queue's worth of dirty data then we'll
> probably leave some of that dirty memory un-queued, so there's some
> lack of concurrency in that situation.

Good insight. That possibly explains one major factor in the
performance gains of Jens' per-bdi writeback.

Thanks,
Fengguang


* Re: regression in page writeback
  2009-09-23  1:32                     ` Wu Fengguang
@ 2009-09-23  1:47                       ` Andrew Morton
  2009-09-23  2:01                         ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Andrew Morton @ 2009-09-23  1:47 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, 23 Sep 2009 09:32:36 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:

> On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > 
> > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > 
> > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > 
> > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > 
> > > > SYSCALL_DEFINE0(sync)
> > > > {
> > > > 	wakeup_pdflush(0);
> > > > 	sync_filesystems(0);
> > > > 	sync_filesystems(1);
> > > > 	if (unlikely(laptop_mode))
> > > > 		laptop_sync_completion();
> > > > 	return 0;
> > > > }
> > > > 
> > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > so perhaps it no longer does.
> > > 
> > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > A has dirty pages, it will always be served first.
> > > 
> > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > will have to first exhaust A before going on to B and C.
> > 
> > But that works OK.  We fill the first device's queue, then it gets
> > congested and sync_sb_inodes() does nothing and we advance to the next
> > queue.
> > 
> > If a device has more than a queue's worth of dirty data then we'll
> > probably leave some of that dirty memory un-queued, so there's some
> > lack of concurrency in that situation.
> 
> Yes, exactly if block device is not fast enough.

Actually, no.

If there's still outstanding dirty data for any of those queues, both
wb_kupdate() and background_writeout() will take a teeny sleep and then
re-poll the queues.
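
Roughly (simplified from 2.6.31's background_writeout(), from memory):

		writeback_inodes(&wbc);
		if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
			/* wrote less than expected */
			if (wbc.encountered_congestion || wbc.more_io)
				congestion_wait(BLK_RW_ASYNC, HZ/10);
			else
				break;	/* no more dirty data: done */
		}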

Did that logic get broken?


* Re: regression in page writeback
  2009-09-22 13:39     ` Wu Fengguang
@ 2009-09-23  1:52       ` Shaohua Li
  2009-09-23  4:00         ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Shaohua Li @ 2009-09-23  1:52 UTC (permalink / raw)
  To: Wu, Fengguang
  Cc: linux-kernel, richard, a.p.zijlstra, jens.axboe, akpm,
	linux-fsdevel, Chris Mason

[-- Attachment #1: Type: text/plain, Size: 2400 bytes --]

On Tue, 2009-09-22 at 21:39 +0800, Wu, Fengguang wrote:
> On Tue, Sep 22, 2009 at 07:50:15PM +0800, Li, Shaohua wrote:
> > On Tue, Sep 22, 2009 at 06:49:15PM +0800, Wu, Fengguang wrote:
> > > Shaohua,
> > > 
> > > On Tue, Sep 22, 2009 at 01:49:13PM +0800, Li, Shaohua wrote:
> > > > Hi,
> > > > Commit d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 causes a disk I/O regression
> > > > in my test.
> > > > My system has 12 disks, each with two partitions. The system runs fio
> > > > sequential writes on all partitions, with 8 jobs per partition.
> > > > 2.6.31-rc1: fio gives 460MB/s disk I/O.
> > > > 2.6.31-rc2: fio gives about 400MB/s disk I/O. Reverting the patch brings the
> > > > speed back to 460MB/s.
> > > > 
> > > > Under the latest git: fio gives 450MB/s disk I/O; after reverting the patch,
> > > > the speed is 484MB/s.
> > > > 
> > > > With the patch, fio reports fewer I/O merges and more interrupts. My naive
> > > > analysis is that the patch makes balance_dirty_pages_ratelimited_nr() limit
> > > > the write chunk to 8 pages, and the task then soon goes to sleep in
> > > > balance_dirty_pages(), because most of the time bdi_nr_reclaimable <
> > > > bdi_thresh; so when the pages are written out, the chunk is 8 pages long
> > > > instead of 4MB long. Without the patch, a thread can write 8 pages, move
> > > > some pages to writeback, and then continue writing. The patch seems to
> > > > break this.
> > > 
> > > Do you have trace/numbers for above descriptions?
> > No, just a guess, because there is less io merge. And watching each bdi's
> > state, bdi_nr_reclaimable < bdi_thresh seems to always be true.
> 
> Ah OK.
> 
> > > > Unfortunatelly I can't figure out a fix for this issue, hopefully
> > > > you have more ideas.
> > > 
> > > Attached is a very verbose writeback debug patch, hope it helps and
> > > won't disturb the workload a lot :)
> > Hmm, the log buffer will overflow soon, since there is > 400m/s of io. I
> > tried to reproduce this issue on a system with two disks, but failed.
> > Anyway, I'll try it out tomorrow.
> 
> Thank you~  I'd recommend using netconsole or a serial line, and stopping the
> local klogd, because writing the log messages could add noise.
Attached is a short log. I'll try to get a full log after finishing the
latest git test.
bdi_nr_reclaimable is always less than bdi_thresh in the log, because by
the time bdi_nr_reclaimable + bdi_nr_writeback exceeds bdi_thresh,
background writeback has already started, so bdi_nr_writeback should be > 0.
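
(For context, the check in question is the one commit
d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 changed in balance_dirty_pages();
a simplified sketch of the relevant loop body, not the verbatim
2.6.31-rc2 source:)

	/* once per balance_dirty_pages() loop iteration */
	get_dirty_limits(&background_thresh, &dirty_thresh,
			 &bdi_thresh, bdi);
	bdi_nr_reclaimable = bdi_stat(bdi, BDI_RECLAIMABLE);
	bdi_nr_writeback = bdi_stat(bdi, BDI_WRITEBACK);

	if (bdi_nr_reclaimable + bdi_nr_writeback <= bdi_thresh)
		break;	/* under the bdi's share: dirtier may proceed */

	/*
	 * Before the commit this read "if (bdi_nr_reclaimable)"; now the
	 * dirtier only issues writeback itself when reclaimable pages
	 * alone exceed bdi_thresh.  In the log below, bdi_nr_reclaimable
	 * is always under bdi_thresh, so this branch is never taken and
	 * the task falls through to congestion_wait() instead of pushing
	 * out a 4MB chunk.
	 */
	if (bdi_nr_reclaimable > bdi_thresh) {
		writeback_inodes(&wbc);
		pages_written += write_chunk - wbc.nr_to_write;
	}
	...
	congestion_wait(BLK_RW_ASYNC, HZ/10);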

[-- Attachment #2: msg2 --]
[-- Type: text/plain, Size: 97611 bytes --]

0, bdi_thresh=23810, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27860, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696602
bdi_nr_reclaimable=21040, bdi_thresh=24479, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24906, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27862, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=20480, bdi_thresh=23809, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24903, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24907, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145141, dirty_thresh=290283
global dirty=246756 writeback=43800 nfs=0 flags=C_ towrite=1018 skipped=0
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27850, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21320, bdi_thresh=24944, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21320, bdi_thresh=24940, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27864, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145141, dirty_thresh=290283
redirty_tail +435: inode 174623
bdi_nr_reclaimable=21480, bdi_thresh=24894, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24900, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27862, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27855, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21320, bdi_thresh=24943, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24907, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=20480, bdi_thresh=23793, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24479, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27862, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21320, bdi_thresh=24942, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27860, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24904, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21320, bdi_thresh=24946, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24906, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=20480, bdi_thresh=23809, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24907, background_thresh=145141, dirty_thresh=290283
redirty_tail +435: inode 4784181
bdi_nr_reclaimable=21480, bdi_thresh=24902, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21320, bdi_thresh=24944, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27863, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24905, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24903, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24903, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145141, dirty_thresh=290283
redirty_tail +435: inode 3474323
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24864, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27863, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=20480, bdi_thresh=23811, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21480, bdi_thresh=24864, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21040, bdi_thresh=24496, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=21320, bdi_thresh=24945, background_thresh=145141, dirty_thresh=290283
requeue_io +527: inode 0
bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=24160, bdi_thresh=27861, background_thresh=145141, dirty_thresh=290283
requeue_io +527: inode 0
bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145141, dirty_thresh=290283
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=21320, bdi_thresh=24906, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145150, dirty_thresh=290301
redirty_tail +502: inode 12415
bdi_nr_reclaimable=24160, bdi_thresh=27861, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=21320, bdi_thresh=24905, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145150, dirty_thresh=290301
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145159, dirty_thresh=290319
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145159, dirty_thresh=290319
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145159, dirty_thresh=290319
bdi_nr_reclaimable=21640, bdi_thresh=24923, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24921, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24905, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23812, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18250, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24496, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145168, dirty_thresh=290337
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621688
bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
redirty_tail +435: inode 174623
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
redirty_tail +435: inode 4784181
redirty_tail +435: inode 3474323
requeue_io +527: inode 0
requeue_io +527: inode 0
redirty_tail +502: inode 12396
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621689
bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23812, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16827, background_thresh=145168, dirty_thresh=290337
global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24910, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23811, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24904, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24923, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24916, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23795, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24923, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24494, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24906, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24922, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23811, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24904, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16827, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24498, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24902, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24497, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24921, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23813, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18249, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696603
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24905, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24923, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24906, background_thresh=145168, dirty_thresh=290337
global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18250, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24905, background_thresh=145168, dirty_thresh=290337
requeue_io +527: inode 0
bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
requeue_io +527: inode 0
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
redirty_tail +502: inode 12533
bdi_nr_reclaimable=21040, bdi_thresh=24496, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145168, dirty_thresh=290337
redirty_tail +435: inode 174623
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
redirty_tail +435: inode 4784181
redirty_tail +435: inode 3474323
requeue_io +527: inode 0
requeue_io +527: inode 0
redirty_tail +502: inode 13474
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621691
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696603
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1024 skipped=0
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1022 skipped=0
bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24926, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24929, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23810, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24904, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23811, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23794, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24926, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24932, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24904, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23794, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24926, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24930, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24930, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24924, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24931, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24478, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23792, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23801, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23809, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24924, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24930, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24928, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24931, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24866, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23808, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23791, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23800, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24941, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24477, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23807, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23790, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23799, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24866, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23806, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24478, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24939, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23808, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24936, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24938, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24862, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24936, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23799, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
requeue_io +527: inode 0
bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23807, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23790, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24478, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
requeue_io +527: inode 0
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24939, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24866, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
redirty_tail +502: inode 13925
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23805, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23798, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24932, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24938, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24475, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23789, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24479, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24942, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23805, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24945, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23806, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145168, dirty_thresh=290337
redirty_tail +435: inode 174623
bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
redirty_tail +435: inode 4784181
bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
redirty_tail +442: inode 3474323
bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24820, background_thresh=145168, dirty_thresh=290337
requeue_io +527: inode 0
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145168, dirty_thresh=290337
redirty_tail +502: inode 14004
bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24934, background_thresh=145168, dirty_thresh=290337
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696607
bdi_nr_reclaimable=21320, bdi_thresh=24780, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24782, background_thresh=145168, dirty_thresh=290337
global dirty=247029 writeback=43527 nfs=0 flags=CM towrite=1020 skipped=0
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24783, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23796, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24472, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621691
bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24477, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23787, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24781, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24474, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24946, background_thresh=145168, dirty_thresh=290337
global dirty=247029 writeback=43527 nfs=0 flags=C_ towrite=1024 skipped=0
bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24942, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24785, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
redirty_tail +435: inode 174623
bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
redirty_tail +435: inode 4784181
bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24945, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21240, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
redirty_tail +442: inode 3474323
bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=21640, bdi_thresh=24902, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24907, background_thresh=145177, dirty_thresh=290355
requeue_io +527: inode 0
bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24904, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21080, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
redirty_tail +502: inode 14023
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24783, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24907, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145177, dirty_thresh=290355
redirty_tail +435: inode 21
bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
redirty_tail +435: inode 174623
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145177, dirty_thresh=290355
redirty_tail +435: inode 4784181
bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
redirty_tail +435: inode 3474323
requeue_io +527: inode 0
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24904, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24824, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24487, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
redirty_tail +502: inode 14042
bdi_nr_reclaimable=21640, bdi_thresh=24905, background_thresh=145177, dirty_thresh=290355
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696985
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24860, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=20480, bdi_thresh=23801, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24854, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24471, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24822, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=20480, bdi_thresh=23794, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24820, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24487, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=20480, bdi_thresh=23785, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24820, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24866, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18238, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24473, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621693
bdi_nr_reclaimable=21040, bdi_thresh=24482, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=20480, bdi_thresh=23800, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24862, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15240, bdi_thresh=16817, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145177, dirty_thresh=290355
global dirty=246665 writeback=43800 nfs=0 flags=C_ towrite=1022 skipped=0
bdi_nr_reclaimable=21000, bdi_thresh=24784, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21640, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18238, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21440, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21360, bdi_thresh=24862, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21360, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21320, bdi_thresh=24865, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
global dirty=246301 writeback=44164 nfs=0 flags=C_ towrite=646 skipped=0
bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=19720, bdi_thresh=23385, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21280, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
redirty_tail +435: inode 174623
bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
bdi_nr_reclaimable=21000, bdi_thresh=24783, background_thresh=145177, dirty_thresh=290355
redirty_tail +435: inode 4784181
redirty_tail +435: inode 3474323
requeue_io +527: inode 0
redirty_tail +502: inode 14061
bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145177, dirty_thresh=290355
redirty_tail +442: inode 27
redirty_tail +435: inode 174623
redirty_tail +435: inode 4784181
redirty_tail +435: inode 3474323
requeue_io +527: inode 0
redirty_tail +502: inode 14096

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  1:45                     ` Wu Fengguang
@ 2009-09-23  1:59                       ` Andrew Morton
  2009-09-23  2:26                         ` Wu Fengguang
  2009-09-23  9:19                         ` Richard Kennedy
  0 siblings, 2 replies; 79+ messages in thread
From: Andrew Morton @ 2009-09-23  1:59 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:

> On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > 
> > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > 
> > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > 
> > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > 
> > > > SYSCALL_DEFINE0(sync)
> > > > {
> > > > 	wakeup_pdflush(0);
> > > > 	sync_filesystems(0);
> > > > 	sync_filesystems(1);
> > > > 	if (unlikely(laptop_mode))
> > > > 		laptop_sync_completion();
> > > > 	return 0;
> > > > }
> > > > 
> > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > so perhaps it no longer does.
> > > 
> > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > A has dirty pages, it will always be served first.
> > > 
> > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > will have to first exhaust A before going on to B and C.
> > 
> > But that works OK.  We fill the first device's queue, then it gets
> > congested and sync_sb_inodes() does nothing and we advance to the next
> > queue.
> 
> So in common cases "exhaust" is a bit exaggerated, but A does receive
> much more opportunity than B. Computation resources for IO submission
> are unbalanced for A, and there are pointless overheads in rechecking A.

That's unquantified handwaving.  One CPU can do a *lot* of IO.

> > If a device has more than a queue's worth of dirty data then we'll
> > probably leave some of that dirty memory un-queued, so there's some
> > lack of concurrency in that situation.
> 
> Good insight.

It was wrong.  See the other email.

> That possibly explains one major factor of the
> performance gains of Jens' per-bdi writeback.

I've yet to see any believable and complete explanation for these
gains.  I've asked about these things multiple times and nothing happened.

I suspect that what happened over time was that previously-working code
got broken, then later people noticed the breakage but failed to
analyse and fix it in favour of simply ripping everything out and
starting again.

So for want of analysing and fixing several possible regressions, we've
tossed away some very sensitive core kernel code which had tens of
millions of machine-years of testing.  I find this incredibly rash.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  1:47                       ` Andrew Morton
@ 2009-09-23  2:01                         ` Wu Fengguang
  2009-09-23  2:09                           ` Andrew Morton
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  2:01 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 09:47:26AM +0800, Andrew Morton wrote:
> On Wed, 23 Sep 2009 09:32:36 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > 
> > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > 
> > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > 
> > > > > SYSCALL_DEFINE0(sync)
> > > > > {
> > > > > 	wakeup_pdflush(0);
> > > > > 	sync_filesystems(0);
> > > > > 	sync_filesystems(1);
> > > > > 	if (unlikely(laptop_mode))
> > > > > 		laptop_sync_completion();
> > > > > 	return 0;
> > > > > }
> > > > > 
> > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > so perhaps it no longer does.
> > > > 
> > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > A has dirty pages, it will always be served first.
> > > > 
> > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > will have to first exhaust A before going on to B and C.
> > > 
> > > But that works OK.  We fill the first device's queue, then it gets
> > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > queue.
> > > 
> > > If a device has more than a queue's worth of dirty data then we'll
> > > probably leave some of that dirty memory un-queued, so there's some
> > > lack of concurrency in that situation.
> > 
> > Yes, exactly if block device is not fast enough.
> 
> Actually, no.

Sorry, my "yes" was mainly for the first paragraph. The concurrency
problem exists for both fast and slow devices.

> If there's still outstanding dirty data for any of those queues, both
> wb_kupdate() and background_writeout() will take a teeny sleep and then
> will re-poll the queues.
> 
> Did that logic get broken?

No, but the "teeny sleep" normally ends much sooner than its timeout.
When the IO queue is not congested, every IO completion event wakes up
the congestion waiters. Also, A's completion events can wake up B's
waiters.

__freed_request() always calls blk_clear_queue_congested() when the
queue drops below the congestion-off threshold, which in turn wakes up
the congestion waiters:

        if (rl->count[sync] < queue_congestion_off_threshold(q))
                blk_clear_queue_congested(q, sync);
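
For the archives, the wait/wake pair behind this looks roughly as
follows; this is condensed from the 2.6.31-era mm/backing-dev.c
(the congestion_wqh declarations are elided, and details may differ
slightly):

        long congestion_wait(int sync, long timeout)
        {
                long ret;
                DEFINE_WAIT(wait);
                wait_queue_head_t *wqh = &congestion_wqh[sync];

                /* sleep until the timeout, or until congestion clears */
                prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
                ret = io_schedule_timeout(timeout);
                finish_wait(wqh, &wait);
                return ret;
        }

        void clear_bdi_congested(struct backing_dev_info *bdi, int sync)
        {
                wait_queue_head_t *wqh = &congestion_wqh[sync];

                clear_bit(sync ? BDI_sync_congested : BDI_async_congested,
                          &bdi->state);
                smp_mb__after_clear_bit();
                /* the wait queue is global, so A's events wake B's waiters */
                if (waitqueue_active(wqh))
                        wake_up(wqh);
        }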

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  2:01                         ` Wu Fengguang
@ 2009-09-23  2:09                           ` Andrew Morton
  2009-09-23  3:07                             ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Andrew Morton @ 2009-09-23  2:09 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, 23 Sep 2009 10:01:04 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:

> > If there's still outstanding dirty data for any of those queues, both
> > wb_kupdate() and background_writeout() will take a teeny sleep and then
> > will re-poll the queues.
> > 
> > Did that logic get broken?
> 
> No, but the "teeny sleep" normally ends much sooner than its timeout.
> When the IO queue is not congested, every IO completion event wakes up
> the congestion waiters. Also, A's completion events can wake up B's
> waiters.
> 
> __freed_request() always calls blk_clear_queue_congested() when the
> queue drops below the congestion-off threshold, which in turn wakes up
> the congestion waiters:
> 
>         if (rl->count[sync] < queue_congestion_off_threshold(q))
>                 blk_clear_queue_congested(q, sync);
> 

Yes.  Have any problems been demonstrated due to that?

And what's _sufficiently_ wrong with that to justify adding potentially
thousands of kernel threads?  It was always a design objective to avoid
doing that.


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  1:59                       ` Andrew Morton
@ 2009-09-23  2:26                         ` Wu Fengguang
  2009-09-23  2:36                           ` Andrew Morton
  2009-09-23  9:19                         ` Richard Kennedy
  1 sibling, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  2:26 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > 
> > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > 
> > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > 
> > > > > SYSCALL_DEFINE0(sync)
> > > > > {
> > > > > 	wakeup_pdflush(0);
> > > > > 	sync_filesystems(0);
> > > > > 	sync_filesystems(1);
> > > > > 	if (unlikely(laptop_mode))
> > > > > 		laptop_sync_completion();
> > > > > 	return 0;
> > > > > }
> > > > > 
> > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > so perhaps it no longer does.
> > > > 
> > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > A has dirty pages, it will always be served first.
> > > > 
> > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > will have to first exhaust A before going on to B and C.
> > > 
> > > But that works OK.  We fill the first device's queue, then it gets
> > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > queue.
> > 
> > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > much more opportunity than B. Computation resources for IO submission
> > are unbalanced for A, and there are pointless overheads in rechecking A.
> 
> That's unquantified handwaving.  One CPU can do a *lot* of IO.

Yes.. I had the impression that writeback submission can be pretty slow.
That is probably because of the congestion_wait. Now that it is removed,
things go faster when the queue is not full.

> > > If a device has more than a queue's worth of dirty data then we'll
> > > probably leave some of that dirty memory un-queued, so there's some
> > > lack of concurrency in that situation.
> > 
> > Good insight.
> 
> It was wrong.  See the other email.

No, your first insight was correct: the (unnecessary) teeny sleeps are
independent of the A => B => C traversal order. Only queue congestion
could help skip A.
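
The skip happens via the congestion check in the superblock walk,
roughly as in the 2.6.31-era generic_sync_sb_inodes() (an excerpt; the
surrounding per-inode loop is elided):

        if (wbc->nonblocking && bdi_write_congested(bdi)) {
                wbc->encountered_congestion = 1;
                if (!sb_is_blkdev_sb(sb))
                        break;          /* Skip a congested fs */
                requeue_io(inode);
                continue;               /* Skip a congested blockdev */
        }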

> > That possibly explains one major factor of the
> > performance gains of Jens' per-bdi writeback.
> 
> I've yet to see any believable and complete explanation for these
> gains.  I've asked about these things multiple times and nothing happened.
 
The per-bdi writeback threads do make things more straightforward.
But given that Jens also piggybacked some other behavior changes,
it's hard to judge the pure gain of the per-bdi writeback itself.
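
Schematically, the new model gives every backing device its own flusher
thread that drains only its own dirty inodes. This is an illustrative
sketch only; the function name and details below are approximate, not
the code Jens actually merged:

        /* illustrative: one flusher kthread per backing_dev_info */
        static int bdi_flusher(void *data)
        {
                struct backing_dev_info *bdi = data;

                while (!kthread_should_stop()) {
                        /* write back dirty inodes of this bdi only */
                        wb_do_writeback(&bdi->wb, 0);
                        /* sleep until woken or the kupdate interval passes */
                        schedule_timeout_interruptible(
                                msecs_to_jiffies(dirty_writeback_interval * 10));
                }
                return 0;
        }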

> I suspect that what happened over time was that previously-working code
> got broken, then later people noticed the breakage but failed to
> analyse and fix it in favour of simply ripping everything out and
> starting again.
>
> So for the want of analysing and fixing several possible regressions,
> we've tossed away some very sensitive core kernel code which had tens
> of millions of machine-years testing.  I find this incredibly rash.

Sorry.. 

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  2:26                         ` Wu Fengguang
@ 2009-09-23  2:36                           ` Andrew Morton
  2009-09-23  2:49                             ` Wu Fengguang
  2009-09-23 14:00                             ` Chris Mason
  0 siblings, 2 replies; 79+ messages in thread
From: Andrew Morton @ 2009-09-23  2:36 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, 23 Sep 2009 10:26:22 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:

> On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> > On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > 
> > > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > 
> > > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > 
> > > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > > 
> > > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > > 
> > > > > > SYSCALL_DEFINE0(sync)
> > > > > > {
> > > > > > 	wakeup_pdflush(0);
> > > > > > 	sync_filesystems(0);
> > > > > > 	sync_filesystems(1);
> > > > > > 	if (unlikely(laptop_mode))
> > > > > > 		laptop_sync_completion();
> > > > > > 	return 0;
> > > > > > }
> > > > > > 
> > > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > > so perhaps it no longer does.
> > > > > 
> > > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > > A has dirty pages, it will always be served first.
> > > > > 
> > > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > > will have to first exhaust A before going on to B and C.
> > > > 
> > > > But that works OK.  We fill the first device's queue, then it gets
> > > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > > queue.
> > > 
> > > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > > much more opportunity than B. Computation resources for IO submission
> > > are unbalanced for A, and there are pointless overheads in rechecking A.
> > 
> > That's unquantified handwaving.  One CPU can do a *lot* of IO.
> 
> Yes.. I had the impression that the writeback submission can be pretty slow.
> It should be because of the congestion_wait. Now that it is removed,
> things are going faster when queue is not full.

What?  The wait is short.  The design intent there is that we repoll
all previously-congested queues well before they start to run empty.

> > > > If a device has more than a queue's worth of dirty data then we'll
> > > > probably leave some of that dirty memory un-queued, so there's some
> > > > lack of concurrency in that situation.
> > > 
> > > Good insight.
> > 
> > It was wrong.  See the other email.
> 
> No your first insight is correct. Because the (unnecessary) teeny
> sleeps is independent of the A=>B=>C traversing order. Only queue
> congestion could help skip A.

The sleeps are completely necessary!  Otherwise we end up busywaiting.

After the sleep we repoll all queues.

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  2:36                           ` Andrew Morton
@ 2009-09-23  2:49                             ` Wu Fengguang
  2009-09-23  2:56                               ` Andrew Morton
  2009-09-23  3:10                               ` Shaohua Li
  2009-09-23 14:00                             ` Chris Mason
  1 sibling, 2 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  2:49 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 10:36:22AM +0800, Andrew Morton wrote:
> On Wed, 23 Sep 2009 10:26:22 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> > > On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > 
> > > > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > > 
> > > > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > > > 
> > > > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > > > 
> > > > > > > SYSCALL_DEFINE0(sync)
> > > > > > > {
> > > > > > > 	wakeup_pdflush(0);
> > > > > > > 	sync_filesystems(0);
> > > > > > > 	sync_filesystems(1);
> > > > > > > 	if (unlikely(laptop_mode))
> > > > > > > 		laptop_sync_completion();
> > > > > > > 	return 0;
> > > > > > > }
> > > > > > > 
> > > > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > > > so perhaps it no longer does.
> > > > > > 
> > > > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > > > A has dirty pages, it will always be served first.
> > > > > > 
> > > > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > > > will have to first exhaust A before going on to B and C.
> > > > > 
> > > > > But that works OK.  We fill the first device's queue, then it gets
> > > > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > > > queue.
> > > > 
> > > > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > > > much more opportunity than B. Computation resources for IO submission
> > > > are unbalanced for A, and there are pointless overheads in rechecking A.
> > > 
> > > That's unquantified handwaving.  One CPU can do a *lot* of IO.
> > 
> > Yes.. I had the impression that the writeback submission can be pretty slow.
> > It should be because of the congestion_wait. Now that it is removed,
> > things are going faster when queue is not full.
> 
> What?  The wait is short.  The design intent there is that we repoll
> all previously-congested queues well before they start to run empty.

When the queue is not congested (in which case congestion_wait() is
not necessary), congestion_wait() degrades io submission speed to near
io completion speed.
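
That is because all waiters sleep on a shared waitqueue that is woken
from the completion path. A from-memory sketch (simplified; the exact
signatures and bit names varied across the 2.6.3x kernels):

	long congestion_wait(int rw, long timeout)
	{
		long ret;
		DEFINE_WAIT(wait);
		wait_queue_head_t *wqh = &congestion_wqh[rw];

		prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
		ret = io_schedule_timeout(timeout);
		finish_wait(wqh, &wait);
		return ret;
	}

	void clear_bdi_congested(struct backing_dev_info *bdi, int rw)
	{
		clear_bit(rw == WRITE ? BDI_write_congested :
					BDI_read_congested, &bdi->state);
		smp_mb__after_clear_bit();
		/* called below the congestion threshold on every freed
		 * request, so each io completion wakes the waiters */
		if (waitqueue_active(&congestion_wqh[rw]))
			wake_up(&congestion_wqh[rw]);
	}

So a non-congested submitter that calls congestion_wait() is woken
once per completion event rather than submitting at full speed, i.e.
it gets paced by the completion rate.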

> > > > > If a device has more than a queue's worth of dirty data then we'll
> > > > > probably leave some of that dirty memory un-queued, so there's some
> > > > > lack of concurrency in that situation.
> > > > 
> > > > Good insight.
> > > 
> > > It was wrong.  See the other email.
> > 
> > No your first insight is correct. Because the (unnecessary) teeny
> > sleeps is independent of the A=>B=>C traversing order. Only queue
> > congestion could help skip A.
> 
> The sleeps are completely necessary!  Otherwise we end up busywaiting.
> 
> After the sleep we repoll all queues.

I mean, it is not always necessary. Only when _all_ superblocks cannot
write back their inodes (e.g. all are congested) should we wait.

Just before Jens' work, I had a patch to convert

-                       if (wbc.encountered_congestion || wbc.more_io)
-                               congestion_wait(WRITE, HZ/10);
-                       else
-                               break;

to

+       if (wbc->encountered_congestion && wbc->nr_to_write == MAX_WRITEBACK_PAGES)
+               congestion_wait(WRITE, HZ/10);

Note that wbc->encountered_congestion only means "at least one bdi
encountered congestion". We may still be making progress on other bdis,
hence we should not sleep.
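
For context, that hunk sits in a loop of roughly this shape (a
simplified, from-memory sketch of the 2.6.31-era background_writeout(),
not the exact source):

	static void background_writeout(unsigned long _min_pages)
	{
		long min_pages = _min_pages;
		struct writeback_control wbc = {
			.bdi		= NULL,	/* i.e. all queues */
			.sync_mode	= WB_SYNC_NONE,
			.nonblocking	= 1,
		};

		for (;;) {
			/* stop when below the background threshold and
			 * the requested work is done */
			if (global_dirty_below_background() && min_pages <= 0)
				break;
			wbc.encountered_congestion = 0;
			wbc.more_io = 0;
			wbc.nr_to_write = MAX_WRITEBACK_PAGES; /* 1024 = 4MB */
			wbc.pages_skipped = 0;
			writeback_inodes(&wbc);
			min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
			if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
				/* wrote less than a full chunk */
				if (wbc.encountered_congestion || wbc.more_io)
					congestion_wait(WRITE, HZ/10);
				else
					break;
			}
		}
	}

(global_dirty_below_background() is just shorthand here for the
NR_FILE_DIRTY + NR_UNSTABLE_NFS vs background_thresh check.)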

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  2:49                             ` Wu Fengguang
@ 2009-09-23  2:56                               ` Andrew Morton
  2009-09-23  3:11                                 ` Wu Fengguang
  2009-09-23  3:10                               ` Shaohua Li
  1 sibling, 1 reply; 79+ messages in thread
From: Andrew Morton @ 2009-09-23  2:56 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, 23 Sep 2009 10:49:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:

> > After the sleep we repoll all queues.
> 
> I mean, it is not always necessary. Only when _all_ superblocks cannot
> writeback their inodes (eg. all in congestion), we should wait.

Well.  The code will still submit MAX_WRITEBACK_PAGES pages to _some_
queue in that situation.

But yup, maybe that's a bug!  Sounds easy to fix.  It doesn't justify
rewriting the whole world and then saying "look, the new code is faster
than the old code which had a bug which I knew about but didn't fix".


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  2:09                           ` Andrew Morton
@ 2009-09-23  3:07                             ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  3:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 10:09:15AM +0800, Andrew Morton wrote:
> On Wed, 23 Sep 2009 10:01:04 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > > If there's still outstanding dirty data for any of those queues, both
> > > wb_kupdate() and background_writeout() will take a teeny sleep and then
> > > will re-poll the queues.
> > > 
> > > Did that logic get broken?
> > 
> > No, but the "teeny sleep" is normally much smaller. When io queue is
> > not congested, every io completion event will wakeup the congestion
> > waiters. Also A's event could wake up B's waiters.
> > 
> > __freed_request() always calls blk_clear_queue_congested() if under
> > congestion threshold which in turn wakes up congestion waiters:
> > 
> >         if (rl->count[sync] < queue_congestion_off_threshold(q))
> >                 blk_clear_queue_congested(q, sync);
> > 
> 
> Yes.  Have any problems been demonstrated due to that?

Hmm, I was merely clarifying a fact.

Chris Mason listed some reasons to convert the congestion_wait() based
polls to waiting on the request queue directly:
 
  http://lkml.org/lkml/2009/9/8/210

My impression is that it obviously changed too fast, without enough discussion.

> And what's _sufficiently_ wrong with that to justify adding potentially
> thousands of kernel threads?  It was always a design objective to avoid
> doing that.

Yeah, the number of threads could be a problem. 

I guess the per-bdi threads and congestion_wait are mostly two
independent changes.  congestion_wait is not the reason to do per-bdi
threads; it's just that Jens piggy-backed some changes to congestion_wait.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  2:49                             ` Wu Fengguang
  2009-09-23  2:56                               ` Andrew Morton
@ 2009-09-23  3:10                               ` Shaohua Li
  2009-09-23  3:14                                 ` Wu Fengguang
  1 sibling, 1 reply; 79+ messages in thread
From: Shaohua Li @ 2009-09-23  3:10 UTC (permalink / raw)
  To: Wu, Fengguang
  Cc: Andrew Morton, Chris Mason, Peter Zijlstra, linux-kernel,
	richard, jens.axboe

On Wed, Sep 23, 2009 at 10:49:58AM +0800, Wu, Fengguang wrote:
> On Wed, Sep 23, 2009 at 10:36:22AM +0800, Andrew Morton wrote:
> > On Wed, 23 Sep 2009 10:26:22 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > 
> > > On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> > > > On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > 
> > > > > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > > > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > 
> > > > > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > > > 
> > > > > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > > > > 
> > > > > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > > > > 
> > > > > > > > SYSCALL_DEFINE0(sync)
> > > > > > > > {
> > > > > > > > 	wakeup_pdflush(0);
> > > > > > > > 	sync_filesystems(0);
> > > > > > > > 	sync_filesystems(1);
> > > > > > > > 	if (unlikely(laptop_mode))
> > > > > > > > 		laptop_sync_completion();
> > > > > > > > 	return 0;
> > > > > > > > }
> > > > > > > > 
> > > > > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > > > > so perhaps it no longer does.
> > > > > > > 
> > > > > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > > > > A has dirty pages, it will always be served first.
> > > > > > > 
> > > > > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > > > > will have to first exhaust A before going on to B and C.
> > > > > > 
> > > > > > But that works OK.  We fill the first device's queue, then it gets
> > > > > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > > > > queue.
> > > > > 
> > > > > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > > > > much more opportunity than B. Computation resources for IO submission
> > > > > are unbalanced for A, and there are pointless overheads in rechecking A.
> > > > 
> > > > That's unquantified handwaving.  One CPU can do a *lot* of IO.
> > > 
> > > Yes.. I had the impression that the writeback submission can be pretty slow.
> > > It should be because of the congestion_wait. Now that it is removed,
> > > things are going faster when queue is not full.
> > 
> > What?  The wait is short.  The design intent there is that we repoll
> > all previously-congested queues well before they start to run empty.
> 
> When queue is not congested (in which case congestion_wait is not
> necessary), the congestion_wait() degrades io submission speed to near
> io completion speed.
> 
> > > > > > If a device has more than a queue's worth of dirty data then we'll
> > > > > > probably leave some of that dirty memory un-queued, so there's some
> > > > > > lack of concurrency in that situation.
> > > > > 
> > > > > Good insight.
> > > > 
> > > > It was wrong.  See the other email.
> > > 
> > > No your first insight is correct. Because the (unnecessary) teeny
> > > sleeps is independent of the A=>B=>C traversing order. Only queue
> > > congestion could help skip A.
> > 
> > The sleeps are completely necessary!  Otherwise we end up busywaiting.
> > 
> > After the sleep we repoll all queues.
> 
> I mean, it is not always necessary. Only when _all_ superblocks cannot
> writeback their inodes (eg. all in congestion), we should wait.
> 
> Just before Jens' work, I had patch to convert
> 
> -                       if (wbc.encountered_congestion || wbc.more_io)
> -                               congestion_wait(WRITE, HZ/10);
> -                       else
> -                               break;
> 
> to
> 
> +       if (wbc->encountered_congestion && wbc->nr_to_write == MAX_WRITEBACK_PAGES)
> +               congestion_wait(WRITE, HZ/10);
> 
> Note that wbc->encountered_congestion only means "at least one bdi
> encountered congestion". We may still make progress in other bdis
> hence should not sleep.
Hi,
encountered_congestion is only checked when nr_to_write > 0; if some
superblocks aren't congested, nr_to_write should be 0, right?
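
I.e., in that loop (a simplified fragment of the background_writeout()
shape, from memory):

	writeback_inodes(&wbc);
	if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
		/* if some non-congested sb had dirty pages,
		 * writeback_inodes() would have consumed the whole
		 * budget and nr_to_write would be 0, so we normally
		 * only get here when every sb was blocked */
		if (wbc.encountered_congestion || wbc.more_io)
			congestion_wait(WRITE, HZ/10);
		else
			break;
	}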

Thanks,
Shaohua

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  2:56                               ` Andrew Morton
@ 2009-09-23  3:11                                 ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  3:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Chris Mason, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Wed, Sep 23, 2009 at 10:56:05AM +0800, Andrew Morton wrote:
> On Wed, 23 Sep 2009 10:49:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > > After the sleep we repoll all queues.
> > 
> > I mean, it is not always necessary. Only when _all_ superblocks cannot
> > writeback their inodes (eg. all in congestion), we should wait.
> 
> Well.  The code will still submit MAX_WRITEBACK_PAGES pages to _some_
> queue in that situation.
> 
> But yup, maybe that's a bug!  Sounds easy to fix.  It doesn't justify
> rewriting the whole world and then saying "look, the new code is faster
> than the old code which had a bug which I knew about but didn't fix".

I didn't submit the patches, to avoid conflicts with Jens' work.
Sorry for that - not a good attitude towards bug fixes..

Thanks,
Fengguang


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  3:10                               ` Shaohua Li
@ 2009-09-23  3:14                                 ` Wu Fengguang
  2009-09-23  3:25                                   ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  3:14 UTC (permalink / raw)
  To: Li, Shaohua
  Cc: Andrew Morton, Chris Mason, Peter Zijlstra, linux-kernel,
	richard, jens.axboe

On Wed, Sep 23, 2009 at 11:10:12AM +0800, Li, Shaohua wrote:
> On Wed, Sep 23, 2009 at 10:49:58AM +0800, Wu, Fengguang wrote:
> > On Wed, Sep 23, 2009 at 10:36:22AM +0800, Andrew Morton wrote:
> > > On Wed, 23 Sep 2009 10:26:22 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> > > > > On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > 
> > > > > > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > > > > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > > 
> > > > > > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > > > > 
> > > > > > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > > > > > 
> > > > > > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > > > > > 
> > > > > > > > > SYSCALL_DEFINE0(sync)
> > > > > > > > > {
> > > > > > > > > 	wakeup_pdflush(0);
> > > > > > > > > 	sync_filesystems(0);
> > > > > > > > > 	sync_filesystems(1);
> > > > > > > > > 	if (unlikely(laptop_mode))
> > > > > > > > > 		laptop_sync_completion();
> > > > > > > > > 	return 0;
> > > > > > > > > }
> > > > > > > > > 
> > > > > > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > > > > > so perhaps it no longer does.
> > > > > > > > 
> > > > > > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > > > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > > > > > A has dirty pages, it will always be served first.
> > > > > > > > 
> > > > > > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > > > > > will have to first exhaust A before going on to B and C.
> > > > > > > 
> > > > > > > But that works OK.  We fill the first device's queue, then it gets
> > > > > > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > > > > > queue.
> > > > > > 
> > > > > > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > > > > > much more opportunity than B. Computation resources for IO submission
> > > > > > are unbalanced for A, and there are pointless overheads in rechecking A.
> > > > > 
> > > > > That's unquantified handwaving.  One CPU can do a *lot* of IO.
> > > > 
> > > > Yes.. I had the impression that the writeback submission can be pretty slow.
> > > > It should be because of the congestion_wait. Now that it is removed,
> > > > things are going faster when queue is not full.
> > > 
> > > What?  The wait is short.  The design intent there is that we repoll
> > > all previously-congested queues well before they start to run empty.
> > 
> > When queue is not congested (in which case congestion_wait is not
> > necessary), the congestion_wait() degrades io submission speed to near
> > io completion speed.
> > 
> > > > > > > If a device has more than a queue's worth of dirty data then we'll
> > > > > > > probably leave some of that dirty memory un-queued, so there's some
> > > > > > > lack of concurrency in that situation.
> > > > > > 
> > > > > > Good insight.
> > > > > 
> > > > > It was wrong.  See the other email.
> > > > 
> > > > No your first insight is correct. Because the (unnecessary) teeny
> > > > sleeps is independent of the A=>B=>C traversing order. Only queue
> > > > congestion could help skip A.
> > > 
> > > The sleeps are completely necessary!  Otherwise we end up busywaiting.
> > > 
> > > After the sleep we repoll all queues.
> > 
> > I mean, it is not always necessary. Only when _all_ superblocks cannot
> > writeback their inodes (eg. all in congestion), we should wait.
> > 
> > Just before Jens' work, I had patch to convert
> > 
> > -                       if (wbc.encountered_congestion || wbc.more_io)
> > -                               congestion_wait(WRITE, HZ/10);
> > -                       else
> > -                               break;
> > 
> > to
> > 
> > +       if (wbc->encountered_congestion && wbc->nr_to_write == MAX_WRITEBACK_PAGES)
> > +               congestion_wait(WRITE, HZ/10);
> > 
> > Note that wbc->encountered_congestion only means "at least one bdi
> > encountered congestion". We may still make progress in other bdis
> > hence should not sleep.
> Hi,
> encountered_congestion only is checked when nr_to_write > 0, if some superblocks
> aren't congestions, nr_to_write should be 0, right?

Yeah, good spot! So the change only helps some corner cases.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  3:14                                 ` Wu Fengguang
@ 2009-09-23  3:25                                   ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  3:25 UTC (permalink / raw)
  To: Li, Shaohua
  Cc: Andrew Morton, Chris Mason, Peter Zijlstra, linux-kernel,
	richard, jens.axboe

On Wed, Sep 23, 2009 at 11:14:50AM +0800, Wu Fengguang wrote:
> On Wed, Sep 23, 2009 at 11:10:12AM +0800, Li, Shaohua wrote:
> > On Wed, Sep 23, 2009 at 10:49:58AM +0800, Wu, Fengguang wrote:
> > > On Wed, Sep 23, 2009 at 10:36:22AM +0800, Andrew Morton wrote:
> > > > On Wed, 23 Sep 2009 10:26:22 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > 
> > > > > On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> > > > > > On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > 
> > > > > > > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > > > > > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > > > 
> > > > > > > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > > > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > > > > > 
> > > > > > > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > > > > > > 
> > > > > > > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > > > > > > 
> > > > > > > > > > SYSCALL_DEFINE0(sync)
> > > > > > > > > > {
> > > > > > > > > > 	wakeup_pdflush(0);
> > > > > > > > > > 	sync_filesystems(0);
> > > > > > > > > > 	sync_filesystems(1);
> > > > > > > > > > 	if (unlikely(laptop_mode))
> > > > > > > > > > 		laptop_sync_completion();
> > > > > > > > > > 	return 0;
> > > > > > > > > > }
> > > > > > > > > > 
> > > > > > > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > > > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > > > > > > so perhaps it no longer does.
> > > > > > > > > 
> > > > > > > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > > > > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > > > > > > A has dirty pages, it will always be served first.
> > > > > > > > > 
> > > > > > > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > > > > > > will have to first exhaust A before going on to B and C.
> > > > > > > > 
> > > > > > > > But that works OK.  We fill the first device's queue, then it gets
> > > > > > > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > > > > > > queue.
> > > > > > > 
> > > > > > > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > > > > > > much more opportunity than B. Computation resources for IO submission
> > > > > > > are unbalanced for A, and there are pointless overheads in rechecking A.
> > > > > > 
> > > > > > That's unquantified handwaving.  One CPU can do a *lot* of IO.
> > > > > 
> > > > > Yes.. I had the impression that the writeback submission can be pretty slow.
> > > > > It should be because of the congestion_wait. Now that it is removed,
> > > > > things are going faster when queue is not full.
> > > > 
> > > > What?  The wait is short.  The design intent there is that we repoll
> > > > all previously-congested queues well before they start to run empty.
> > > 
> > > When queue is not congested (in which case congestion_wait is not
> > > necessary), the congestion_wait() degrades io submission speed to near
> > > io completion speed.
> > > 
> > > > > > > > If a device has more than a queue's worth of dirty data then we'll
> > > > > > > > probably leave some of that dirty memory un-queued, so there's some
> > > > > > > > lack of concurrency in that situation.
> > > > > > > 
> > > > > > > Good insight.
> > > > > > 
> > > > > > It was wrong.  See the other email.
> > > > > 
> > > > > No your first insight is correct. Because the (unnecessary) teeny
> > > > > sleeps is independent of the A=>B=>C traversing order. Only queue
> > > > > congestion could help skip A.
> > > > 
> > > > The sleeps are completely necessary!  Otherwise we end up busywaiting.
> > > > 
> > > > After the sleep we repoll all queues.
> > > 
> > > I mean, it is not always necessary. Only when _all_ superblocks cannot
> > > writeback their inodes (eg. all in congestion), we should wait.
> > > 
> > > Just before Jens' work, I had patch to convert
> > > 
> > > -                       if (wbc.encountered_congestion || wbc.more_io)
> > > -                               congestion_wait(WRITE, HZ/10);
> > > -                       else
> > > -                               break;
> > > 
> > > to
> > > 
> > > +       if (wbc->encountered_congestion && wbc->nr_to_write == MAX_WRITEBACK_PAGES)
> > > +               congestion_wait(WRITE, HZ/10);
> > > 
> > > Note that wbc->encountered_congestion only means "at least one bdi
> > > encountered congestion". We may still make progress in other bdis
> > > hence should not sleep.
> > Hi,
> > encountered_congestion only is checked when nr_to_write > 0, if some superblocks
> > aren't congestions, nr_to_write should be 0, right?
> 
> Yeah, good spot! So the change only helps some corner cases.

Then it remains a question why the io submission is so slow when
!congested.

For example, in the trace below each background_writeout() round
submits ~1024 pages = 4MB (see the n= steps) with timestamps ~10ms
apart, so the io submission speed is about

        4MB / 0.01s = 400MB/s

Workload is a plain copy on a 2GHz Intel Core 2 CPU.

[   71.487121] mm/page-writeback.c 761 background_writeout: comm=pdflush pid=343 n=-4096
[   71.489635] global dirty=65513 writeback=5925 nfs=1 flags=_M towrite=0 skipped=0
[   71.496019] redirty_tail +442: inode 79232
[   71.497432] mm/page-writeback.c 761 background_writeout: comm=pdflush pid=343 n=-5120
[   71.498890] global dirty=64490 writeback=6700 nfs=1 flags=_M towrite=0 skipped=0
[   71.506355] redirty_tail +442: inode 79232
[   71.508473] mm/page-writeback.c 761 background_writeout: comm=pdflush pid=343 n=-6144
[   71.510538] global dirty=62475 writeback=7599 nfs=1 flags=_M towrite=0 skipped=0
[   71.511910] redirty_tail +502: inode 3438
[   71.512846] redirty_tail +502: inode 1920

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  1:52       ` Shaohua Li
@ 2009-09-23  4:00         ` Wu Fengguang
  2009-09-25  6:14           ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  4:00 UTC (permalink / raw)
  To: Li, Shaohua
  Cc: linux-kernel, richard, a.p.zijlstra, jens.axboe, akpm,
	linux-fsdevel, Chris Mason

On Wed, Sep 23, 2009 at 09:52:55AM +0800, Li, Shaohua wrote:
> On Tue, 2009-09-22 at 21:39 +0800, Wu, Fengguang wrote:
> > On Tue, Sep 22, 2009 at 07:50:15PM +0800, Li, Shaohua wrote:
> > > On Tue, Sep 22, 2009 at 06:49:15PM +0800, Wu, Fengguang wrote:
> > > > Shaohua,
> > > > 
> > > > On Tue, Sep 22, 2009 at 01:49:13PM +0800, Li, Shaohua wrote:
> > > > > Hi,
> > > > > Commit d7831a0bdf06b9f722b947bb0c205ff7d77cebd8 causes disk io regression
> > > > > in my test.
> > > > > My system has 12 disks, each disk has two partitions. System runs fio sequence
> > > > > write on all partitions, each partion has 8 jobs.
> > > > > 2.6.31-rc1, fio gives 460m/s disk io
> > > > > 2.6.31-rc2, fio gives about 400m/s disk io. Revert the patch, speed back to
> > > > > 460m/s
> > > > > 
> > > > > Under latest git: fio gives 450m/s disk io; If reverting the patch, the speed
> > > > > is 484m/s.
> > > > > 
> > > > > With the patch, fio reports less io merge and more interrupts. My naive
> > > > > analysis is the patch makes balance_dirty_pages_ratelimited_nr() limits
> > > > > write chunk to 8 pages and then soon go to sleep in balance_dirty_pages(),
> > > > > because most time the bdi_nr_reclaimable < bdi_thresh, and so when write
> > > > > the pages out, the chunk is 8 pages long instead of 4M long. Without the patch,
> > > > > thread can write 8 pages and then move some pages to writeback, and then
> > > > > continue doing write. The patch seems to break this.
> > > > 
> > > > Do you have trace/numbers for above descriptions?
> > > No. Just guess, because there is less io merge. And watch each bdi's states,
> > > bdi_nr_reclaimable < bdi_thresh seems always true.
> > 
> > Ah OK.
> > 
> > > > > Unfortunatelly I can't figure out a fix for this issue, hopefully
> > > > > you have more ideas.
> > > > 
> > > > Attached is a very verbose writeback debug patch, hope it helps and
> > > > won't disturb the workload a lot :)
> > > Hmm, the log buf will get overflowed soon, there is > 400m/s io. I tried
> > > to produce this issue in a system with two disks, but fail. Anyway, I'll try
> > > it out tomorrow.
> > 
> > Thank you~  I'd recommend to use netconsole or serial line, and stop
> > local klogd because the write of log messages could add noises. 
> attached is a short log. I'll try to get a full log after finish latest
> git test.
> bdi_nr_reclaimable is always less than bdi_thresh in the log. because

Yes. Only background_writeout() is working, and the queue is congested.

> when bdi_nr_reclaimable + bdi_nr_writeback > bdi_thresh, background
> writeback is already started, so bdi_nr_writeback should be > 0.

Yes, when a process is throttled, bdi_nr_writeback > (bdi_thresh - bdi_nr_reclaimable).

For each background_writeout() loop (each takes about 0.1s), there are
~100 balance_dirty_pages() loops.  That is quite a lot of loop overhead.
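
The ~100 is about what I'd expect: if I recall the 2.6.31 code
correctly, once bdi->dirty_exceeded is set the per-CPU ratelimit drops
from ratelimit_pages to 8, so a dirtier re-enters balance_dirty_pages()
after every 8 dirtied pages - on the order of 1024 / 8 = 128 entries
per 4MB chunk.  Roughly (simplified, from memory):

	void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
						unsigned long nr_pages_dirtied)
	{
		unsigned long ratelimit = ratelimit_pages;
		unsigned long *p;

		if (mapping->backing_dev_info->dirty_exceeded)
			ratelimit = 8;	/* throttled: re-check every 8 pages */

		preempt_disable();
		p = &__get_cpu_var(bdp_ratelimits);
		*p += nr_pages_dirtied;
		if (unlikely(*p >= ratelimit)) {
			*p = 0;
			preempt_enable();
			balance_dirty_pages(mapping);	/* may write + sleep */
			return;
		}
		preempt_enable();
	}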

Most importantly, most background_writeout() loops only wrote several
pages: the large towrite values are the unused remainder of the
1024-page budget, e.g. towrite=1018 means only 6 pages were submitted.
That is extremely inefficient. I wonder why it cannot write more?

> global dirty=246756 writeback=43800 nfs=0 flags=C_ towrite=1018 skipped=0
> global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1024 skipped=0
> global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1022 skipped=0
> global dirty=247029 writeback=43527 nfs=0 flags=CM towrite=1020 skipped=0
> global dirty=247029 writeback=43527 nfs=0 flags=C_ towrite=1024 skipped=0
> global dirty=246665 writeback=43800 nfs=0 flags=C_ towrite=1022 skipped=0
> global dirty=246301 writeback=44164 nfs=0 flags=C_ towrite=646 skipped=0

Thanks,
Fengguang

Content-Description: msg2
> 0, bdi_thresh=23810, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27860, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696602
> bdi_nr_reclaimable=21040, bdi_thresh=24479, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24906, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27862, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=20480, bdi_thresh=23809, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24903, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24907, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145141, dirty_thresh=290283
> global dirty=246756 writeback=43800 nfs=0 flags=C_ towrite=1018 skipped=0
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27850, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21320, bdi_thresh=24944, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21320, bdi_thresh=24940, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27864, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145141, dirty_thresh=290283
> redirty_tail +435: inode 174623
> bdi_nr_reclaimable=21480, bdi_thresh=24894, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24900, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27862, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27855, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21320, bdi_thresh=24943, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24907, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=20480, bdi_thresh=23793, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24479, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27862, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21320, bdi_thresh=24942, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27860, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24904, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21320, bdi_thresh=24946, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24906, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=20480, bdi_thresh=23809, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24907, background_thresh=145141, dirty_thresh=290283
> redirty_tail +435: inode 4784181
> bdi_nr_reclaimable=21480, bdi_thresh=24902, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21320, bdi_thresh=24944, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27863, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24905, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24903, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24903, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145141, dirty_thresh=290283
> redirty_tail +435: inode 3474323
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24864, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27863, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=20480, bdi_thresh=23811, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21480, bdi_thresh=24864, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21040, bdi_thresh=24496, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=21320, bdi_thresh=24945, background_thresh=145141, dirty_thresh=290283
> requeue_io +527: inode 0
> bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=24160, bdi_thresh=27861, background_thresh=145141, dirty_thresh=290283
> requeue_io +527: inode 0
> bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145141, dirty_thresh=290283
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=21320, bdi_thresh=24906, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145150, dirty_thresh=290301
> redirty_tail +502: inode 12415
> bdi_nr_reclaimable=24160, bdi_thresh=27861, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=21320, bdi_thresh=24905, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145150, dirty_thresh=290301
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145159, dirty_thresh=290319
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145159, dirty_thresh=290319
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145159, dirty_thresh=290319
> bdi_nr_reclaimable=21640, bdi_thresh=24923, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24921, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24905, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23812, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18250, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24496, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145168, dirty_thresh=290337
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621688
> bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
> redirty_tail +435: inode 174623
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> redirty_tail +435: inode 4784181
> redirty_tail +435: inode 3474323
> requeue_io +527: inode 0
> requeue_io +527: inode 0
> redirty_tail +502: inode 12396
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621689
> bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23812, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16827, background_thresh=145168, dirty_thresh=290337
> global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24910, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23811, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24904, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24923, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24916, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23795, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24923, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24494, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16826, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24906, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24922, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23811, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24904, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16827, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24498, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24902, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24497, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24921, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23813, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18249, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696603
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24905, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24923, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24906, background_thresh=145168, dirty_thresh=290337
> global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18250, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24905, background_thresh=145168, dirty_thresh=290337
> requeue_io +527: inode 0
> bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> requeue_io +527: inode 0
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24495, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24919, background_thresh=145168, dirty_thresh=290337
> redirty_tail +502: inode 12533
> bdi_nr_reclaimable=21040, bdi_thresh=24496, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145168, dirty_thresh=290337
> redirty_tail +435: inode 174623
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> redirty_tail +435: inode 4784181
> redirty_tail +435: inode 3474323
> requeue_io +527: inode 0
> requeue_io +527: inode 0
> redirty_tail +502: inode 13474
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621691
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696603
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
> global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1024 skipped=0
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1022 skipped=0
> bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24926, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24929, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23810, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24904, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23811, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23794, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24926, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24932, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24904, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24920, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23794, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18248, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24903, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24926, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24900, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24930, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24930, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24924, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24931, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24478, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23792, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23801, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23809, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24918, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24924, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24930, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24928, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24931, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24866, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23808, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23791, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23800, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24941, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24477, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23807, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23790, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23799, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24866, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23806, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24478, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24939, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23808, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24936, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24938, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24862, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24936, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23799, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> requeue_io +527: inode 0
> bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23807, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23790, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24478, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> requeue_io +527: inode 0
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24939, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24866, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> redirty_tail +502: inode 13925
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23805, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23798, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24932, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24938, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24475, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23789, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24479, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24942, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23805, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24945, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23806, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145168, dirty_thresh=290337
> redirty_tail +435: inode 174623
> bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> redirty_tail +435: inode 4784181
> bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
> redirty_tail +442: inode 3474323
> bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24820, background_thresh=145168, dirty_thresh=290337
> requeue_io +527: inode 0
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145168, dirty_thresh=290337
> redirty_tail +502: inode 14004
> bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24934, background_thresh=145168, dirty_thresh=290337
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696607
> bdi_nr_reclaimable=21320, bdi_thresh=24780, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24782, background_thresh=145168, dirty_thresh=290337
> global dirty=247029 writeback=43527 nfs=0 flags=CM towrite=1020 skipped=0
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24783, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23796, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24472, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621691
> bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24477, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23787, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24781, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24474, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24946, background_thresh=145168, dirty_thresh=290337
> global dirty=247029 writeback=43527 nfs=0 flags=C_ towrite=1024 skipped=0
> bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24942, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24785, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> redirty_tail +435: inode 174623
> bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
> redirty_tail +435: inode 4784181
> bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24945, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21240, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> redirty_tail +442: inode 3474323
> bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=21640, bdi_thresh=24902, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24907, background_thresh=145177, dirty_thresh=290355
> requeue_io +527: inode 0
> bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24904, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21080, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
> redirty_tail +502: inode 14023
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24783, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24907, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145177, dirty_thresh=290355
> redirty_tail +435: inode 21
> bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
> redirty_tail +435: inode 174623
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145177, dirty_thresh=290355
> redirty_tail +435: inode 4784181
> bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> redirty_tail +435: inode 3474323
> requeue_io +527: inode 0
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24904, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24824, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24487, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> redirty_tail +502: inode 14042
> bdi_nr_reclaimable=21640, bdi_thresh=24905, background_thresh=145177, dirty_thresh=290355
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696985
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24860, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=20480, bdi_thresh=23801, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24854, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24471, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24822, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=20480, bdi_thresh=23794, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24820, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24487, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=20480, bdi_thresh=23785, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24820, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24866, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18238, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24473, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
> mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621693
> bdi_nr_reclaimable=21040, bdi_thresh=24482, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=20480, bdi_thresh=23800, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24862, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15240, bdi_thresh=16817, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145177, dirty_thresh=290355
> global dirty=246665 writeback=43800 nfs=0 flags=C_ towrite=1022 skipped=0
> bdi_nr_reclaimable=21000, bdi_thresh=24784, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21640, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18238, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21440, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21360, bdi_thresh=24862, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21360, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21320, bdi_thresh=24865, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
> global dirty=246301 writeback=44164 nfs=0 flags=C_ towrite=646 skipped=0
> bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=19720, bdi_thresh=23385, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21280, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
> redirty_tail +435: inode 174623
> bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
> bdi_nr_reclaimable=21000, bdi_thresh=24783, background_thresh=145177, dirty_thresh=290355
> redirty_tail +435: inode 4784181
> redirty_tail +435: inode 3474323
> requeue_io +527: inode 0
> redirty_tail +502: inode 14061
> bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145177, dirty_thresh=290355
> redirty_tail +442: inode 27
> redirty_tail +435: inode 174623
> redirty_tail +435: inode 4784181
> redirty_tail +435: inode 3474323
> requeue_io +527: inode 0
> redirty_tail +502: inode 14096


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-22 15:52           ` Chris Mason
  2009-09-23  0:22             ` Wu Fengguang
@ 2009-09-23  6:41             ` Shaohua Li
  1 sibling, 0 replies; 79+ messages in thread
From: Shaohua Li @ 2009-09-23  6:41 UTC (permalink / raw)
  To: Chris Mason, Peter Zijlstra, Wu Fengguang, linux-kernel, richard,
	jens.axboe, akpm

On Tue, Sep 22, 2009 at 11:52:59PM +0800, Chris Mason wrote:
> On Tue, Sep 22, 2009 at 10:32:14AM +0200, Peter Zijlstra wrote:
> > On Tue, 2009-09-22 at 16:24 +0800, Wu Fengguang wrote:
> > > On Tue, Sep 22, 2009 at 04:09:25PM +0800, Peter Zijlstra wrote:
> > > > On Tue, 2009-09-22 at 16:05 +0800, Wu Fengguang wrote:
> > > > > 
> > > > > I'm not sure how this patch stopped the "overshooting" behavior.
> > > > > Maybe it managed to not start the background pdflush, or the started
> > > > > pdflush thread exited because it found writeback is in progress by
> > > > > someone else?
> > > > > 
> > > > > -               if (bdi_nr_reclaimable) {
> > > > > +               if (bdi_nr_reclaimable > bdi_thresh) {
> > > > 
> > > > The idea is that we shouldn't move more pages from dirty -> writeback
> > > > when there's not actually that much dirty left.
> > > 
> > > IMHO this makes little sense given that pdflush will move all dirty
> > > pages anyway. pdflush should already be started to do background
> > > writeback before the process is throttled, and it is designed to sync
> > > all current dirty pages as quick as possible and as much as possible.
> > 
> > Not so, pdflush (or now the bdi writer thread thingies) should not
> > deplete all dirty pages but should stop writing once they are below the
> > background limit.
> > 
> > > > Now, I'm not sure about the > bdi_thresh part, I've suggested to maybe
> > > > use bdi_thresh/2 a few times, but it generally didn't seem to make much
> > > > of a difference.
> > > 
> > > One possible difference is, the process may end up waiting longer time
> > > in order to sync write_chunk pages and quit the throttle. This could
> > > hurt the responsiveness of the throttled process.
> > 
> > Well, that's all because this congestion_wait stuff is borken..
> > 
> 
> I'd suggest retesting with a new baseline against the code in Linus' git
> today.  Overall I think the change to make balance_dirty_pages() sleep
> instead of kick more IO out is a very good one.  It helps in most
> workloads here.
I tested today's git tree v2.6.31-7068-g43c1266; it looks like the regression
disappears. The disk IO is almost stable at about 480m/s with/without the patch.

Thanks,
Shaohua

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  1:59                       ` Andrew Morton
  2009-09-23  2:26                         ` Wu Fengguang
@ 2009-09-23  9:19                         ` Richard Kennedy
  2009-09-23  9:23                           ` Peter Zijlstra
  1 sibling, 1 reply; 79+ messages in thread
From: Richard Kennedy @ 2009-09-23  9:19 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Wu Fengguang, Chris Mason, Peter Zijlstra, Li, Shaohua,
	linux-kernel, jens.axboe

On Tue, 2009-09-22 at 18:59 -0700, Andrew Morton wrote:
> On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > 
> > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > 
> > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > 
> > > > > SYSCALL_DEFINE0(sync)
> > > > > {
> > > > > 	wakeup_pdflush(0);
> > > > > 	sync_filesystems(0);
> > > > > 	sync_filesystems(1);
> > > > > 	if (unlikely(laptop_mode))
> > > > > 		laptop_sync_completion();
> > > > > 	return 0;
> > > > > }
> > > > > 
> > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > so perhaps it no longer does.
> > > > 
> > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > A has dirty pages, it will always be served first.
> > > > 
> > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > will have to first exhaust A before going on to B and C.
> > > 
> > > But that works OK.  We fill the first device's queue, then it gets
> > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > queue.
> > 
> > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > much more opportunity than B. Computation resources for IO submission
> > are unbalanced for A, and there are pointless overheads in rechecking A.
> 
> That's unquantified handwaving.  One CPU can do a *lot* of IO.
> 
> > > If a device has more than a queue's worth of dirty data then we'll
> > > probably leave some of that dirty memory un-queued, so there's some
> > > lack of concurrency in that situation.
> > 
> > Good insight.
> 
> It was wrong.  See the other email.
> 
> > That possibly explains one major factor of the
> > performance gains of Jens' per-bdi writeback.
> 
> I've yet to see any believable and complete explanation for these
> gains.  I've asked about these things multiple times and nothing happened.
> 
> I suspect that what happened over time was that previously-working code
> got broken, then later people noticed the breakage but failed to
> analyse and fix it in favour of simply ripping everything out and
> starting again.
> 
> So for the want of analysing and fixing several possible regressions,
> we've tossed away some very sensitive core kernel code which had tens
> of millions of machine-years testing.  I find this incredibly rash.

FWIW I agree, I don't think the new per-bdi code has had enough testing
yet.

On my desktop setup I have not been able to measure any significant
change in performance of linear writes.

I am concerned that the background writeout no longer stops when it
reaches the background threshold, as balance_dirty_pages requests all
dirty pages to be written. No doubt this is good for large linear writes
but what about more random write workloads? 

regards
Richard


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  9:19                         ` Richard Kennedy
@ 2009-09-23  9:23                           ` Peter Zijlstra
  2009-09-23  9:37                             ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Peter Zijlstra @ 2009-09-23  9:23 UTC (permalink / raw)
  To: Richard Kennedy
  Cc: Andrew Morton, Wu Fengguang, Chris Mason, Li, Shaohua,
	linux-kernel, jens.axboe

On Wed, 2009-09-23 at 10:19 +0100, Richard Kennedy wrote:
> 
> I am concerned that the background writeout no longer stops when it
> reaches the background threshold, as balance_dirty_pages requests all
> dirty pages to be written. No doubt this is good for large linear writes
> but what about more random write workloads? 

I've not had time to look over the current code, but write-out not
stopping on reaching background threshold is a definite bug and needs to
get fixed.


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  9:23                           ` Peter Zijlstra
@ 2009-09-23  9:37                             ` Wu Fengguang
  2009-09-23 10:30                               ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23  9:37 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Richard Kennedy, Andrew Morton, Chris Mason, Li, Shaohua,
	linux-kernel, jens.axboe

On Wed, Sep 23, 2009 at 05:23:31PM +0800, Peter Zijlstra wrote:
> On Wed, 2009-09-23 at 10:19 +0100, Richard Kennedy wrote:
> > 
> > I am concerned that the background writeout no longer stops when it
> > reaches the background threshold, as balance_dirty_pages requests all
> > dirty pages to be written. No doubt this is good for large linear writes
> > but what about more random write workloads? 
> 
> I've not had time to look over the current code, but write-out not
> stopping on reaching background threshold is a definite bug and needs to
> get fixed.

Yes, the 2.6.31 code stops writeback when the background threshold is reached.
But the new behavior in latest git is to write back all pages.

The code only checks over_bground_thresh() for kupdate works:

                if (args->for_kupdate && args->nr_pages <= 0 &&
                    !over_bground_thresh())
                        break;

However the background work started by balance_dirty_pages() won't check
over_bground_thresh(). So it will move all dirty pages.

I think it's very weird to check over_bground_thresh() for kupdate
instead of background work. Jens must have intended it for the latter case.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  9:37                             ` Wu Fengguang
@ 2009-09-23 10:30                               ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-23 10:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Richard Kennedy, Andrew Morton, Chris Mason, Li, Shaohua,
	linux-kernel, jens.axboe, Jan Kara

On Wed, Sep 23, 2009 at 05:37:53PM +0800, Wu Fengguang wrote:
> On Wed, Sep 23, 2009 at 05:23:31PM +0800, Peter Zijlstra wrote:
> > On Wed, 2009-09-23 at 10:19 +0100, Richard Kennedy wrote:
> > > 
> > > I am concerned that the background writeout no longer stops when it
> > > reaches the background threshold, as balance_dirty_pages requests all
> > > dirty pages to be written. No doubt this is good for large linear writes
> > > but what about more random write workloads? 
> > 
> > I've not had time to look over the current code, but write-out not
> > stopping on reaching background threshold is a definite bug and needs to
> > get fixed.
> 
> Yes, the 2.6.31 code stops writeback when the background threshold is reached.
> But the new behavior in latest git is to write back all pages.
> 
> The code only checks over_bground_thresh() for kupdate works:
> 
>                 if (args->for_kupdate && args->nr_pages <= 0 &&
>                     !over_bground_thresh())
>                         break;
> 
> However the background work started by balance_dirty_pages() won't check
> over_bground_thresh(). So it will move all dirty pages.
> 
> I think it's very weird to check over_bground_thresh() for kupdate
> instead of background work. Jens must have intended it for the latter case.

Here is the patch to fix it. Tested to work OK. This is an RFC.

Thanks,
Fengguang
---
writeback: stop background writeback when below background threshold

Treat bdi_start_writeback(0) as a special request to do background write,
and stop such work when we are below the background dirty threshold.

Also simplify the (nr_pages <= 0) checks. Since we already pass in
nr_pages=LONG_MAX for WB_SYNC_ALL and background writes, we don't
need to worry about it being decreased to zero.

Reported-by: Richard Kennedy <richard@rsk.demon.co.uk>
CC: Jan Kara <jack@suse.cz>
CC: Jens Axboe <jens.axboe@oracle.com> 
CC: Peter Zijlstra <a.p.zijlstra@chello.nl> 
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c   |   28 +++++++++++++++++-----------
 mm/page-writeback.c |    6 +++---
 2 files changed, 20 insertions(+), 14 deletions(-)

--- linux.orig/fs/fs-writeback.c	2009-09-23 17:47:23.000000000 +0800
+++ linux/fs/fs-writeback.c	2009-09-23 18:13:36.000000000 +0800
@@ -41,8 +41,9 @@ struct wb_writeback_args {
 	long nr_pages;
 	struct super_block *sb;
 	enum writeback_sync_modes sync_mode;
-	int for_kupdate;
-	int range_cyclic;
+	int for_kupdate:1;
+	int range_cyclic:1;
+	int for_background:1;
 };
 
 /*
@@ -260,6 +261,15 @@ void bdi_start_writeback(struct backing_
 		.range_cyclic	= 1,
 	};
 
+	/*
+	 * We treat @nr_pages=0 as the special case to do background writeback,
+	 * ie. to sync pages until the background dirty threshold is reached.
+	 */
+	if (!nr_pages) {
+		args.nr_pages = LONG_MAX;
+		args.for_background = 1;
+	}
+
 	bdi_alloc_queue_work(bdi, &args);
 }
 
@@ -723,20 +733,16 @@ static long wb_writeback(struct bdi_writ
 
 	for (;;) {
 		/*
-		 * Don't flush anything for non-integrity writeback where
-		 * no nr_pages was given
+		 * Stop writeback when nr_pages has been consumed
 		 */
-		if (!args->for_kupdate && args->nr_pages <= 0 &&
-		     args->sync_mode == WB_SYNC_NONE)
+		if (args->nr_pages <= 0)
 			break;
 
 		/*
-		 * If no specific pages were given and this is just a
-		 * periodic background writeout and we are below the
-		 * background dirty threshold, don't do anything
+		 * For background writeout, stop when we are below the
+		 * background dirty threshold
 		 */
-		if (args->for_kupdate && args->nr_pages <= 0 &&
-		    !over_bground_thresh())
+		if (args->for_background && !over_bground_thresh())
 			break;
 
 		wbc.more_io = 0;
--- linux.orig/mm/page-writeback.c	2009-09-23 17:45:58.000000000 +0800
+++ linux/mm/page-writeback.c	2009-09-23 17:47:17.000000000 +0800
@@ -589,10 +589,10 @@ static void balance_dirty_pages(struct a
 	 * background_thresh, to keep the amount of dirty memory low.
 	 */
 	if ((laptop_mode && pages_written) ||
-	    (!laptop_mode && ((nr_writeback = global_page_state(NR_FILE_DIRTY)
-					  + global_page_state(NR_UNSTABLE_NFS))
+	    (!laptop_mode && ((global_page_state(NR_FILE_DIRTY)
+			       + global_page_state(NR_UNSTABLE_NFS))
 					  > background_thresh)))
-		bdi_start_writeback(bdi, nr_writeback);
+		bdi_start_writeback(bdi, 0);
 }
 
 void set_page_dirty_balance(struct page *page, int page_mkwrite)

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  2:36                           ` Andrew Morton
  2009-09-23  2:49                             ` Wu Fengguang
@ 2009-09-23 14:00                             ` Chris Mason
  2009-09-24  3:15                               ` Wu Fengguang
  1 sibling, 1 reply; 79+ messages in thread
From: Chris Mason @ 2009-09-23 14:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Wu Fengguang, Peter Zijlstra, Li, Shaohua, linux-kernel, richard,
	jens.axboe

On Tue, Sep 22, 2009 at 07:36:22PM -0700, Andrew Morton wrote:
> On Wed, 23 Sep 2009 10:26:22 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> > > On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > 
> > > > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > > 
> > > > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > > > 
> > > > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > > > 
> > > > > > > SYSCALL_DEFINE0(sync)
> > > > > > > {
> > > > > > > 	wakeup_pdflush(0);
> > > > > > > 	sync_filesystems(0);
> > > > > > > 	sync_filesystems(1);
> > > > > > > 	if (unlikely(laptop_mode))
> > > > > > > 		laptop_sync_completion();
> > > > > > > 	return 0;
> > > > > > > }
> > > > > > > 
> > > > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > > > so perhaps it no longer does.
> > > > > > 
> > > > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > > > A has dirty pages, it will always be served first.
> > > > > > 
> > > > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > > > will have to first exhaust A before going on to B and C.
> > > > > 
> > > > > But that works OK.  We fill the first device's queue, then it gets
> > > > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > > > queue.
> > > > 
> > > > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > > > much more opportunity than B. Computation resources for IO submission
> > > > are unbalanced for A, and there are pointless overheads in rechecking A.
> > > 
> > > That's unquantified handwaving.  One CPU can do a *lot* of IO.
> > 
> > Yes.. I had the impression that the writeback submission can be pretty slow.
> > It should be because of the congestion_wait. Now that it is removed,
> > things are going faster when queue is not full.
> 
> What?  The wait is short.  The design intent there is that we repoll
> all previously-congested queues well before they start to run empty.

The congestion code was the big reason I got behind Jens' patches.  When
working on btrfs I tried to tune the existing congestion based setup to
scale well.  What we had before is basically a poll interface hiding
behind a queue flag and a wait.  

The only place that actually honors the congestion flag is pdflush.
It's trivial to get pdflush backed up and make it sit down without
making any progress because once the queue congests, pdflush goes away.
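
This is roughly the loop pdflush runs for background writeback in
2.6.31 - a from-memory sketch of background_writeout() in
mm/page-writeback.c, so treat the details as approximate:

	for (;;) {
		if (global_page_state(NR_FILE_DIRTY) +
		    global_page_state(NR_UNSTABLE_NFS) < background_thresh)
			break;
		wbc.encountered_congestion = 0;
		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
		writeback_inodes(&wbc);
		if (wbc.nr_to_write > 0) {
			/* wrote less than requested */
			if (wbc.encountered_congestion || wbc.more_io)
				congestion_wait(BLK_RW_ASYNC, HZ/10);
			else
				break;
		}
	}

A congested queue doesn't make pdflush push harder, it makes it nap
in congestion_wait - and nothing guarantees the queue has drained by
the time it wakes up.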

Nothing stops other procs from keeping the queue congested forever.
This can only be fixed by making everyone wait for congestion, at which
point we might as well wait for requests.

Here are some graphs comparing 2.6.31 and 2.6.31 with Jens' latest code.
The workload is two procs doing streaming writes to 32GB files.  I've
used deadline and bumped nr_requests to 2048, so pdflush should be able
to do a significant amount of work between congestion cycles.

The hardware is 5 sata drives pushed into an LVM stripe set.

http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/xfs-streaming-compare.png
http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/btrfs-streaming-compare.png

In the mainline graphs, pdflush is actually doing the vast majority of
the IO thanks to Richard's fix:

http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/xfs-mainline-per-process.png

We can see there are two different pdflush procs hopping in and out
of the work.  This isn't a huge problem, except that we're also hopping
between files.

I don't think this is because anyone broke pdflush, I think this is
because very fast hardware goes from congested to uncongested 
very quickly, even when we bump nr_requests to 2048 like I did in the
graphs above.

The pdflush congestion backoffs skip the batching optimizations done by
the elevator.  pdflush could easily have waited in get_request_wait,
been given a nice fat batch of requests and then said oh no, the queue
is congested, I'll just sleep for a while without submitting any more
IO.

The congestion checks prevent any attempts from the filesystem to write
a whole extent (or a large portion of an extent) at a time.
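
The check in question sits in the per-page loop of write_cache_pages()
- again quoting 2.6.31 from memory, so treat it as a sketch:

		if (wbc->nonblocking && bdi_write_congested(bdi)) {
			wbc->encountered_congestion = 1;
			done = 1;
			break;
		}

Because it runs after every page, the moment the queue congests we
bail out of the middle of whatever file we were writing.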

The pdflush system tried to be async, but at the end of the day it
wasn't async enough to effectively drive the hardware.  Threads did get
tied up in FS locking, metadata reads and get_request_wait.  The end
result is that not enough time was spent keeping all the drives on the
box busy.

This isn't handwaving, my harddrive is littered with pdflush patches
trying to make it scale well, and I just couldn't work it out.

-chris


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23 14:00                             ` Chris Mason
@ 2009-09-24  3:15                               ` Wu Fengguang
  2009-09-24 12:10                                 ` Chris Mason
  2009-09-25  0:11                                 ` Dave Chinner
  0 siblings, 2 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-24  3:15 UTC (permalink / raw)
  To: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

Chris,

Thanks for this excellent write up (and the previous one to Peter).
I'm glad to learn about your experiences and rationals behind this
work. As always, comments followed.

On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> On Tue, Sep 22, 2009 at 07:36:22PM -0700, Andrew Morton wrote:
> > On Wed, 23 Sep 2009 10:26:22 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > 
> > > On Wed, Sep 23, 2009 at 09:59:41AM +0800, Andrew Morton wrote:
> > > > On Wed, 23 Sep 2009 09:45:00 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > 
> > > > > On Wed, Sep 23, 2009 at 09:28:32AM +0800, Andrew Morton wrote:
> > > > > > On Wed, 23 Sep 2009 09:17:58 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > 
> > > > > > > On Wed, Sep 23, 2009 at 08:54:52AM +0800, Andrew Morton wrote:
> > > > > > > > On Wed, 23 Sep 2009 08:22:20 +0800 Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > > > > > > 
> > > > > > > > > Jens' per-bdi writeback has another improvement. In 2.6.31, when
> > > > > > > > > superblocks A and B both have 100000 dirty pages, it will first
> > > > > > > > > exhaust A's 100000 dirty pages before going on to sync B's.
> > > > > > > > 
> > > > > > > > That would only be true if someone broke 2.6.31.  Did they?
> > > > > > > > 
> > > > > > > > SYSCALL_DEFINE0(sync)
> > > > > > > > {
> > > > > > > > 	wakeup_pdflush(0);
> > > > > > > > 	sync_filesystems(0);
> > > > > > > > 	sync_filesystems(1);
> > > > > > > > 	if (unlikely(laptop_mode))
> > > > > > > > 		laptop_sync_completion();
> > > > > > > > 	return 0;
> > > > > > > > }
> > > > > > > > 
> > > > > > > > the sync_filesystems(0) is supposed to non-blockingly start IO against
> > > > > > > > all devices.  It used to do that correctly.  But people mucked with it
> > > > > > > > so perhaps it no longer does.
> > > > > > > 
> > > > > > > I'm referring to writeback_inodes(). Each invocation of which (to sync
> > > > > > > 4MB) will do the same iteration over superblocks A => B => C ... So if
> > > > > > > A has dirty pages, it will always be served first.
> > > > > > > 
> > > > > > > So if wbc->bdi == NULL (which is true for kupdate/background sync), it
> > > > > > > will have to first exhaust A before going on to B and C.
> > > > > > 
> > > > > > But that works OK.  We fill the first device's queue, then it gets
> > > > > > congested and sync_sb_inodes() does nothing and we advance to the next
> > > > > > queue.
> > > > > 
> > > > > So in common cases "exhaust" is a bit exaggerated, but A does receive
> > > > > much more opportunity than B. Computation resources for IO submission
> > > > > are unbalanced for A, and there are pointless overheads in rechecking A.
> > > > 
> > > > That's unquantified handwaving.  One CPU can do a *lot* of IO.
> > > 
> > > Yes.. I had the impression that the writeback submission can be pretty slow.
> > > It should be because of the congestion_wait. Now that it is removed,
> > > things are going faster when queue is not full.
> > 
> > What?  The wait is short.  The design intent there is that we repoll
> > all previously-congested queues well before they start to run empty.
> 
> The congestion code was the big reason I got behind Jens' patches.  When
> working on btrfs I tried to tune the existing congestion based setup to
> scale well.  What we had before is basically a poll interface hiding
> behind a queue flag and a wait.  
 
So it's mainly about fast array writeback performance. 

> The only place that actually honors the congestion flag is pdflush.
> It's trivial to get pdflush backed up and make it sit down without
> making any progress because once the queue congests, pdflush goes away.

Right. I guess that's more or less intentional - to give lowest priority
to periodic/background writeback.

> Nothing stops other procs from keeping the queue congested forever.
> This can only be fixed by making everyone wait for congestion, at which
> point we might as well wait for requests.

Yes. That gives everyone a somewhat equal opportunity; this is a policy change
that may lead to interesting effects, as well as present a challenge to
get_request_wait(). That said, I'm not against the change to a wait queue
in general.
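
(For reference, blocking there means an exclusive sleep on the
per-queue request_list waitqueue - a from-memory sketch of
get_request_wait() in block/blk-core.c, with the unplug/locking
details omitted:

	rq = get_request(q, rw_flags, bio, GFP_NOIO);
	while (!rq) {
		DEFINE_WAIT(wait);

		prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
					  TASK_UNINTERRUPTIBLE);
		io_schedule();
		finish_wait(&rl->wait[is_sync], &wait);
		rq = get_request(q, rw_flags, bio, GFP_NOIO);
	}

Waiters are woken one at a time as requests complete, which is where
both the fairness and the extra latency would come from.)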

> Here are some graphs comparing 2.6.31 and 2.6.31 with Jens' latest code.
> The workload is two procs doing streaming writes to 32GB files.  I've
> used deadline and bumped nr_requests to 2048, so pdflush should be able
> to do a significant amount of work between congestion cycles.

The graphs show near 400MB/s throughput and about 4000-17000 IO/s.

Writeback traces show that my 2GHz laptop CPU can do IO submissions
up to 400MB/s. It takes about 0.01s to sync 4MB (one wb_kupdate =>
write_cache_pages traverse).

Given nr_requests=2048 and IOPS=10000, a congestion on-off cycle would
take (2048/16)/10000 = 0.0128s

The 0.0128s vs. 0.01s means that the CPU returns just in time to see a
still-congested queue that will soon become !congested. It then goes
back into congestion_wait, and is woken up by the IO completion events
when the queue goes !congested. The return to write_cache_pages will
again take some time. So the end result may be that the queue falls to
6/8 full, well below the congestion-off threshold of 13/16.
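
(The 1/16 gap and the 13/16 figure come from the block layer's
congestion hysteresis - sketching blk_queue_congestion_threshold()
from memory:

	q->nr_congestion_on  = q->nr_requests - q->nr_requests / 8 + 1;
	q->nr_congestion_off = q->nr_requests - q->nr_requests / 8
					      - q->nr_requests / 16 - 1;

So congestion sets in at ~14/16 of nr_requests and clears at ~13/16,
making the on/off gap nr_requests/16 = 128 requests here.)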

Without congestion_wait, you get a 100% full queue with get_request_wait.

However I don't think the queue fullness can explain the performance
gain. It's sequential IO. It will only hurt performance if the queue
sometimes risks starvation - which could happen when the CPU is 100%
utilized so that IO submission cannot keep up. The congestion_wait
polls do eat more CPU power, which might impact the response to
hard/soft interrupts.

> The hardware is 5 sata drives pushed into an LVM stripe set.
> 
> http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/xfs-streaming-compare.png
> http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/btrfs-streaming-compare.png
> 
> In the mainline graphs, pdflush is actually doing the vast majority of
> the IO thanks to Richard's fix:
> 
> http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/xfs-mainline-per-process.png
> 
> We can see there are two different pdflush procs hopping in and out
> of the work.

Yeah, that's expected. May eat some CPU cycles (race and locality issues).

> This isn't a huge problem, except that we're also hopping between
> files.

Yes, this is a problem. When congestion is encountered, the file may
get only a dozen pages synced (which is very inefficient) and then be
hit by redirty_tail (which may further delay this inode).

> I don't think this is because anyone broke pdflush, I think this is
> because very fast hardware goes from congested to uncongested 
> very quickly, even when we bump nr_requests to 2048 like I did in the
> graphs above.

What's typical CPU utilization during the test? It would be
interesting to do a comparison on %system numbers between the
poll/wait approaches.

> The pdflush congestion backoffs skip the batching optimizations done by
> the elevator.  pdflush could easily have waited in get_request_wait,
> been given a nice fat batch of requests and then said oh no, the queue
> is congested, I'll just sleep for a while without submitting any more
> IO.

I'd be surprised if the deadline batching is affected by the
interleaving of incoming requests. Unless there are many expired
requests, which could happen when nr_requests is too large for the
device - which is not the case here.

I noticed that XFS's IOPS almost doubled, while btrfs's IOPS and
throughput scale up by the same factor. The numbers show that the
average IO size for btrfs is near 64KB - is this your max_sectors_kb?
XFS's avg IO size is a smaller 24KB - does that mean many small
metadata IOs?

> The congestion checks prevent any attempts from the filesystem to write
> a whole extent (or a large portion of an extent) at a time.

Since writepage is called one by one for each page, will its
interleaving impact filesystem decisions? I.e. between these two
writepage sequences.

        A1, B1, A2, B2, A3, B3, A4, B4
        A1, A2, A3, A4, B1, B2, B3, B4

Where each An/Bn stands for one page of file A/B, and n is the page index.

> The pdflush system tried to be async, but at the end of the day it
> wasn't async enough to effectively drive the hardware.  Threads did get
> tied up in FS locking, metadata reads and get_request_wait.  The end
> result is that not enough time was spent keeping all the drives on the
> box busy.

Yes, too much processing overhead with congestion_wait may be enough
to kill the performance of fast arrays.

> This isn't handwaving, my harddrive is littered with pdflush patches
> trying to make it scale well, and I just couldn't work it out.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-24  3:15                               ` Wu Fengguang
@ 2009-09-24 12:10                                 ` Chris Mason
  2009-09-25  3:26                                   ` Wu Fengguang
  2009-09-25  0:11                                 ` Dave Chinner
  1 sibling, 1 reply; 79+ messages in thread
From: Chris Mason @ 2009-09-24 12:10 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe

On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:

[ why do the bdi-writeback work? ]

> > 
> > The congestion code was the big reason I got behind Jens' patches.  When
> > working on btrfs I tried to tune the existing congestion based setup to
> > scale well.  What we had before is basically a poll interface hiding
> > behind a queue flag and a wait.  
>  
> So it's mainly about fast array writeback performance. 

You can see the difference on single disks, at least in the two writer
case w/XFS.  But, each FS has its own tweaks in there, and the bigger
arrays show it better across all the filesystems.

> 
> > The only place that actually honors the congestion flag is pdflush.
> > It's trivial to get pdflush backed up and make it sit down without
> > making any progress because once the queue congests, pdflush goes away.
> 
> Right. I guess that's more or less intentional - to give lowest priority
> to periodic/background writeback.

The problem is that when we let pdflush back off, we do all of our IO
from balance_dirty_pages(), which makes much more seeky IO patterns for
the streaming writers.
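
That foreground path is each dirtying process doing its own small
chunk of writeback - sketching 2.6.31's balance_dirty_pages() from
memory, where write_chunk is typically on the order of a thousand
pages:

		struct writeback_control wbc = {
			.bdi		= bdi,
			.sync_mode	= WB_SYNC_NONE,
			.nr_to_write	= write_chunk,
			.range_cyclic	= 1,
		};

		if (bdi_nr_reclaimable > bdi_thresh)
			writeback_inodes(&wbc);

With N streaming writers that is N uncoordinated threads each pushing
small chunks of different files at the same disk, instead of one
thread streaming each file in turn.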

> 
> > Nothing stops other procs from keeping the queue congested forever.
> > This can only be fixed by making everyone wait for congestion, at which
> > point we might as well wait for requests.
> 
> Yes. That gives everyone a somewhat equal opportunity; this is a policy change
> that may lead to interesting effects, as well as present a challenge to
> get_request_wait(). That said, I'm not against the change to a wait queue
> in general.

I very much agree here, relying more on get_request_wait is going to
expose some latency differences.

> 
> > Here are some graphs comparing 2.6.31 and 2.6.31 with Jens' latest code.
> > The workload is two procs doing streaming writes to 32GB files.  I've
> > used deadline and bumped nr_requests to 2048, so pdflush should be able
> > to do a significant amount of work between congestion cycles.
> 
> The graphs show near 400MB/s throughput and about 4000-17000 IO/s.
> 
> Writeback traces show that my 2GHz laptop CPU can do IO submissions
> up to 400MB/s. It takes about 0.01s to sync 4MB (one wb_kupdate =>
> write_cache_pages traverse).
> 
> Given nr_requests=2048 and IOPS=10000, a congestion on-off cycle would
> take (2048/16)/10000 = 0.0128s
> 
> The 0.0128s vs. 0.01s means that the CPU returns just in time to see a
> still-congested queue that will soon become !congested. It then goes
> back into congestion_wait, and is woken up by the IO completion events
> when the queue goes !congested. The return to write_cache_pages will
> again take some time. So the end result may be that the queue falls to
> 6/8 full, well below the congestion-off threshold of 13/16.
> 
> Without congestion_wait, you get a 100% full queue with get_request_wait.
> 
> However I don't think the queue fullness can explain the performance
> gain. It's sequential IO. It will only hurt performance if the queue
> sometimes risks starvation - which could happen when the CPU is 100%
> utilized so that IO submission cannot keep up. The congestion_wait
> polls do eat more CPU power, which might impact the response to
> hard/soft interrupts.

I think you're right that queue fullness alone can't explain things,
especially with streaming writes where the requests tend to be very
large.  LVM devices are a bit strange because they go in and out of
congestion based on any of the devices in the stripe set, so
things are less predictable.
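
(Conceptually the stripe set's congested_fn just ORs its members
together - the stripe_set/member_bdi names below are made up for
illustration, but bdi_congested() is the real test:

	static int stripe_any_congested(struct stripe_set *s, int bdi_bits)
	{
		int i, r = 0;

		for (i = 0; i < s->nr_members; i++)
			r |= bdi_congested(s->member_bdi[i], bdi_bits);
		return r;
	}

So one busy member drive congests the whole logical device, and it
only clears once every member has drained.)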

The interesting difference between the XFS graph and the
btrfs graph is that btrfs has removed all congestion checks from its
write_cache_pages(), so btrfs is forcing pdflush to hang around even
when the queue is initially congested so that it can write a large
portion of an extent in each call.

This is why the btrfs IO graphs look the same for the two runs, the IO
submitted is basically the same.  The bdi thread is just submitting it
more often.

> 
> > The hardware is 5 sata drives pushed into an LVM stripe set.
> > 
> > http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/xfs-streaming-compare.png
> > http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/btrfs-streaming-compare.png
> > 
> > In the mainline graphs, pdflush is actually doing the vast majority of
> > the IO thanks to Richard's fix:
> > 
> > http://oss.oracle.com/~mason/seekwatcher/bdi-writeback/xfs-mainline-per-process.png
> > 
> > We can see there are two different pdflush procs hopping in and out
> > of the work.
> 
> Yeah, that's expected. May eat some CPU cycles (race and locality issues).
> 
> > This isn't a huge problem, except that we're also hopping between
> > files.
> 
> Yes, this is a problem. When congestion is encountered, the file may
> get only a dozen pages synced (which is very inefficient) and then be
> hit by redirty_tail (which may further delay this inode).
> 
> > I don't think this is because anyone broke pdflush, I think this is
> > because very fast hardware goes from congested to uncongested 
> > very quickly, even when we bump nr_requests to 2048 like I did in the
> > graphs above.
> 
> What's typical CPU utilization during the test? It would be
> interesting to do a comparison on %system numbers between the
> poll/wait approaches.

XFS averages about 20-30% CPU utilization.  Btrfs is much higher because
it is checksumming.

> 
> > The pdflush congestion backoffs skip the batching optimizations done by
> > the elevator.  pdflush could easily have waited in get_request_wait,
> > been given a nice fat batch of requests and then said oh no, the queue
> > is congested, I'll just sleep for a while without submitting any more
> > IO.
> 
> I'd be surprised if the deadline batching is affected by the
> interleaving of incoming requests. Unless there are many expired
> requests, which could happen when nr_requests is too large for the
> device - which is not the case here.
> 
> I noticed that XFS's IOPS almost doubled, while btrfs's IOPS and
> throughput scale up by the same factor. The numbers show that the
> average IO size for btrfs is near 64KB - is this your max_sectors_kb?
> XFS's avg IO size is a smaller 24KB - does that mean many small
> metadata IOs?

Since the traces were done on LVM, the IOPS come from the blktrace Q
events.  This means the IOPS graph basically reflects calls to
submit_bio and does not include any merging.

Btrfs does have an internal max of 64KB, but I'm not sure why xfs is
building smaller bios.  ext4 only builds 4k bios, and it is able to
perform just as well ;)

> 
> > The congestion checks prevent any attempts from the filesystem to write
> > a whole extent (or a large portion of an extent) at a time.
> 
> Since writepage is called one by one for each page, will its
> interleaving impact filesystem decisions? I.e. between these two
> writepage sequences.
> 
>         A1, B1, A2, B2, A3, B3, A4, B4
>         A1, A2, A3, A4, B1, B2, B3, B4
> 
> Where each An/Bn stands for one page of file A/B, and n is the page index.

For XFS this is the key question.  We're doing streaming writes, so the
delayed allocation code is responsible for allocating extents, and this
is triggered from writepage.  Your first example becomes:

         A1 [allocate extent A1-A50 ], submit A1
	 B1 [allocate extent B1-B50 ], submit B1 (seek)
	 A2, (seek back to A1's extent)
	 B2, (seek back to B1's extent)
	 ...

This is why the XFS graph for pdflush isn't a straight line.   When we
back off file A and switch to file B, we seek between extents created by
delalloc.

Thanks for spending time reading through all of this.  It's a ton of data
and your improvements are much appreciated!

-chris


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-24  3:15                               ` Wu Fengguang
  2009-09-24 12:10                                 ` Chris Mason
@ 2009-09-25  0:11                                 ` Dave Chinner
  2009-09-25  0:38                                   ` Chris Mason
  2009-09-25  3:19                                   ` Wu Fengguang
  1 sibling, 2 replies; 79+ messages in thread
From: Dave Chinner @ 2009-09-25  0:11 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > The only place that actually honors the congestion flag is pdflush.
> > It's trivial to get pdflush backed up and make it sit down without
> > making any progress because once the queue congests, pdflush goes away.
> 
> Right. I guess that's more or less intentional - to give lowest priority
> to periodic/background writeback.

IMO, this is the wrong design. Background writeback should
have higher CPU/scheduler priority than normal tasks. If there are
sufficient dirty pages in the system for background writeback to
be active, it should be running *now* to start as much IO as it can
without being held up by other, lower priority tasks.

Cleaning pages is important to keeping the system running smoothly.
Given that IO takes time to clean pages, it is therefore important
to issue as much as possible as quickly as possible without delays
before going back to sleep. Delaying issue of the IO or doing
sub-optimal issue simply reduces performance of the system because
it takes longer to clean the same number of dirty pages.

> > Nothing stops other procs from keeping the queue congested forever.
> > This can only be fixed by making everyone wait for congestion, at which
> > point we might as well wait for requests.
> 
> Yes. That gives everyone a somewhat equal opportunity; this is a policy change
> that may lead to interesting effects, as well as present a challenge to
> get_request_wait(). That said, I'm not against the change to a wait queue
> in general.

If you block all threads doing _writebehind caching_ (synchronous IO
is self-throttling) to the same BDI on the same queue as the bdi
flusher, then when congestion clears the higher priority background
flusher thread should run first and issue more IO.  This should
happen as a natural side-effect of our scheduling algorithms, and it
gives preference to efficient background writeback over inefficient
foreground writeback. Indeed, with this approach we can even avoid
foreground writeback altogether...
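
Here's a minimal pthreads sketch of that wakeup discipline (names and
numbers invented; it models only the "wake the flusher first" policy,
not the real get_request_wait()):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_REQUESTS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t flusher_wait = PTHREAD_COND_INITIALIZER;
static pthread_cond_t writer_wait = PTHREAD_COND_INITIALIZER;
static int in_flight;
static int flusher_sleeping;

static void get_request(int is_flusher)
{
	pthread_mutex_lock(&lock);
	while (in_flight >= NR_REQUESTS) {
		if (is_flusher) {
			flusher_sleeping = 1;
			pthread_cond_wait(&flusher_wait, &lock);
			flusher_sleeping = 0;
		} else {
			pthread_cond_wait(&writer_wait, &lock);
		}
	}
	in_flight++;			/* got a request slot */
	pthread_mutex_unlock(&lock);
}

static void complete_request(void)
{
	pthread_mutex_lock(&lock);
	in_flight--;
	if (flusher_sleeping)		/* flusher always gets the slot first */
		pthread_cond_signal(&flusher_wait);
	else
		pthread_cond_signal(&writer_wait);
	pthread_mutex_unlock(&lock);
}

static void *flusher_thread(void *arg)
{
	(void)arg;
	get_request(1);
	printf("flusher woken first, issues more background IO\n");
	return NULL;
}

static void *writer_thread(void *arg)
{
	(void)arg;
	get_request(0);
	printf("foreground writer woken afterwards\n");
	return NULL;
}

int main(void)
{
	pthread_t f, w;

	in_flight = NR_REQUESTS;	/* queue starts out congested */
	pthread_create(&f, NULL, flusher_thread, NULL);
	pthread_create(&w, NULL, writer_thread, NULL);
	sleep(1);			/* let both block on the full queue */
	complete_request();		/* one slot frees: flusher goes first */
	sleep(1);
	complete_request();		/* the next slot goes to the writer */
	pthread_join(f, NULL);
	pthread_join(w, NULL);
	return 0;
}

When congestion clears, the sleeping flusher is always signalled before
any foreground writer, which is exactly the preference described above.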

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-25  0:11                                 ` Dave Chinner
@ 2009-09-25  0:38                                   ` Chris Mason
  2009-09-25  5:04                                     ` Dave Chinner
  2009-09-25  3:19                                   ` Wu Fengguang
  1 sibling, 1 reply; 79+ messages in thread
From: Chris Mason @ 2009-09-25  0:38 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Wu Fengguang, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > The only place that actually honors the congestion flag is pdflush.
> > > It's trivial to get pdflush backed up and make it sit down without
> > > making any progress because once the queue congests, pdflush goes away.
> > 
> > Right. I guess that's more or less intentional - to give lowest priority
> > to periodic/background writeback.
> 
> IMO, this is the wrong design. Background writeback should
> have higher CPU/scheduler priority than normal tasks. If there is
> sufficient dirty pages in the system for background writeback to
> be active, it should be running *now* to start as much IO as it can
> without being held up by other, lower priority tasks.

I'd say that an fsync from mutt or vi should be done at a higher prio
than a background streaming writer.

-chris

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-25  0:11                                 ` Dave Chinner
  2009-09-25  0:38                                   ` Chris Mason
@ 2009-09-25  3:19                                   ` Wu Fengguang
  2009-09-26  1:47                                     ` Dave Chinner
  1 sibling, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-25  3:19 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Fri, Sep 25, 2009 at 08:11:17AM +0800, Dave Chinner wrote:
> On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > The only place that actually honors the congestion flag is pdflush.
> > > It's trivial to get pdflush backed up and make it sit down without
> > > making any progress because once the queue congests, pdflush goes away.
> > 
> > Right. I guess that's more or less intentional - to give lowest priority
> > to periodic/background writeback.
> 
> IMO, this is the wrong design. Background writeback should
> have higher CPU/scheduler priority than normal tasks. If there is
> sufficient dirty pages in the system for background writeback to
> be active, it should be running *now* to start as much IO as it can
> without being held up by other, lower priority tasks.
> 
> Cleaning pages is important to keeping the system running smoothly.
> Given that IO takes time to clean pages, it is therefore important
> to issue as much as possible as quickly as possible without delays
> before going back to sleep. Delaying issue of the IO or doing
> sub-optimal issue simply reduces performance of the system because
> it takes longer to clean the same number of dirty pages.
> 
> > > Nothing stops other procs from keeping the queue congested forever.
> > > This can only be fixed by making everyone wait for congestion, at which
> > > point we might as well wait for requests.
> > 
> > Yes. That gives everyone a somewhat equal opportunity; this is a policy change
> > that may lead to interesting effects, as well as present a challenge to
> > get_request_wait(). That said, I'm not against the change to a wait queue
> > in general.
> 
> If you block all threads doing _writebehind caching_ (synchronous IO
> is self-throttling) to the same BDI on the same queue as the bdi
> flusher, then when congestion clears the higher priority background
> flusher thread should run first and issue more IO.  This should
> happen as a natural side-effect of our scheduling algorithms, and it
> gives preference to efficient background writeback over inefficient
> foreground writeback. Indeed, with this approach we can even avoid
> foreground writeback altogether...

I don't see how balance_dirty_pages() writeout is less efficient than
pdflush writeout.

They all call the same routines to do the job.
balance_dirty_pages() sets nr_to_write=1536 at least for ext4 and xfs
(unless memory is tight; btrfs is 1540), which is in fact 50% bigger
than the 1024 pages used by pdflush. And it won't back off on congestion.
The s_io/b_io queues are shared, so a balance_dirty_pages() call will just
continue from where the last sync thread exited. So it does not make
much difference who initiates the IO. Did I miss something?
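
For reference, the chunk sizes above come from constants like these
(sketched from memory of 2.6.31's mm/page-writeback.c; ratelimit_pages
is actually computed at boot, so treat the values as approximate):

#include <stdio.h>

#define MAX_WRITEBACK_PAGES	1024	/* pdflush writes this per pass */

static long ratelimit_pages = 1024;	/* typical on a large-memory box */

/* balance_dirty_pages() writes this much per throttling pass */
static long sync_writeback_pages(void)
{
	return ratelimit_pages + ratelimit_pages / 2;	/* 1536 */
}

int main(void)
{
	printf("pdflush chunk:             %d pages\n", MAX_WRITEBACK_PAGES);
	printf("balance_dirty_pages chunk: %ld pages (50%% bigger)\n",
	       sync_writeback_pages());
	return 0;
}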

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-24 12:10                                 ` Chris Mason
@ 2009-09-25  3:26                                   ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-25  3:26 UTC (permalink / raw)
  To: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe
  Cc: Dave Chinner, hch

On Thu, Sep 24, 2009 at 08:10:34PM +0800, Chris Mason wrote:
> On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> 
> [ why do the bdi-writeback work? ]
> 
[snip]
> > > The congestion checks prevent any attempts from the filesystem to write
> > > a whole extent (or a large portion of an extent) at a time.
> > 
> > Since writepage is called once for each page, will its
> > interleaving impact filesystem decisions? I.e., between these two
> > writepage sequences.
> > 
> >         A1, B1, A2, B2, A3, B3, A4, B4
> >         A1, A2, A3, A4, B1, B2, B3, B4
> > 
> > Where each An/Bn stands for one page of file A/B, n is page index.
> 
> For XFS this is the key question.  We're doing streaming writes, so the
> delayed allocation code is responsible for allocating extents, and this
> is triggered from writepage.  Your first example becomes:
> 
>          A1 [allocate extent A1-A50 ], submit A1
> 	 B1 [allocate extent B1-B50 ], submit B1 (seek)
> 	 A2, (seek back to A1's extent)
> 	 B2, (seek back to B1's extent)
> 	 ...
> 
> This is why the XFS graph for pdflush isn't a straight line.  When we
> back off file A and switch to file B, we seek between extents created by
> delalloc.

Does that mean XFS writeback is somehow serialized, so that the elevator
cannot do request merges well?  I hope that's not true...

> Thanks for spending time reading through all of this.  It's a ton of data
> and your improvements are much appreciated!

Thank you :)

Regards,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-25  0:38                                   ` Chris Mason
@ 2009-09-25  5:04                                     ` Dave Chinner
  2009-09-25  6:45                                       ` Wu Fengguang
  2009-09-25 12:06                                       ` Chris Mason
  0 siblings, 2 replies; 79+ messages in thread
From: Dave Chinner @ 2009-09-25  5:04 UTC (permalink / raw)
  To: Chris Mason, Wu Fengguang, Andrew Morton, Peter Zijlstra, Li,
	Shaohua, linux-kernel, richard, jens.axboe

On Thu, Sep 24, 2009 at 08:38:20PM -0400, Chris Mason wrote:
> On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > The only place that actually honors the congestion flag is pdflush.
> > > > It's trivial to get pdflush backed up and make it sit down without
> > > > making any progress because once the queue congests, pdflush goes away.
> > > 
> > > Right. I guess that's more or less intentional - to give lowest priority
> > > to periodic/background writeback.
> > 
> > IMO, this is the wrong design. Background writeback should
> > have higher CPU/scheduler priority than normal tasks. If there is
> > sufficient dirty pages in the system for background writeback to
> > be active, it should be running *now* to start as much IO as it can
> > without being held up by other, lower priority tasks.
> 
> I'd say that an fsync from mutt or vi should be done at a higher prio
> than a background streaming writer.

I don't think you caught everything I said - synchronous IO is
unthrottled. Background writeback should dump async IO to the
elevator as fast as it can, then get the hell out of the way. If
you've got a UP system, then the fsync can't be issued at the same
time pdflush is running (same as right now), and if you've got an MP
system then fsync can run at the same time. On the premise that sync
IO is unthrottled, and given that elevators queue and issue sync IO
separately from async writes, fsync latency would be entirely derived
from the elevator queuing behaviour, not the CPU priority of
pdflush.

Look at it this way - it is the responsibility of pdflush to keep
the elevator full of background IO. It is the responsibility of
the elevator to ensure that background IO doesn't starve all other
types of IO. If pdflush doesn't run because it can't get CPU time,
then background IO does not get issued, and system performance
suffers as a result.
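
Here's a rough user-space model of that division of labour, loosely in
the spirit of the deadline elevator's writes_starved logic (the real
code differs; the numbers are illustrative only):

#include <stdio.h>

#define WRITES_STARVED 2	/* async may be deferred this many times */

struct queues {
	int sync_pending;	/* reads, fsync-driven writes */
	int async_pending;	/* background writeback */
	int starved;		/* consecutive async deferrals */
};

/* returns 1 when a sync request is dispatched, 0 for async */
static int dispatch(struct queues *q)
{
	if (q->sync_pending &&
	    (!q->async_pending || q->starved < WRITES_STARVED)) {
		q->sync_pending--;
		q->starved++;
		return 1;
	}
	q->async_pending--;
	q->starved = 0;
	return 0;
}

int main(void)
{
	struct queues q = { .sync_pending = 6, .async_pending = 6, .starved = 0 };

	while (q.sync_pending || q.async_pending)
		printf("%s\n", dispatch(&q) ? "sync" : "async");
	return 0;
}

Sync requests are dispatched preferentially, but async writeback is
guaranteed a slot after a bounded number of deferrals, so background IO
keeps flowing without starving fsync.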

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-23  4:00         ` Wu Fengguang
@ 2009-09-25  6:14           ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-25  6:14 UTC (permalink / raw)
  To: Li, Shaohua
  Cc: linux-kernel, richard, a.p.zijlstra, jens.axboe, akpm,
	linux-fsdevel, Chris Mason

On Wed, Sep 23, 2009 at 12:00:24PM +0800, Wu Fengguang wrote:
[snip]
> > attached is a short log. I'll try to get a full log after finish latest
> > git test.
> > bdi_nr_reclaimable is always less than bdi_thresh in the log. because
> 
> Yes. Only background_writeout() is working and the queue is congested.
> 
> > when bdi_nr_reclaimable + bdi_nr_writeback > bdi_thresh, background
> > writeback is already started, so bdi_nr_writeback should be > 0.
> 
> Yes, when a process is throttled, bdi_nr_writeback > (bdi_thresh - bdi_nr_reclaimable).
> 
> For each background_writeout() loop (each takes about 0.1s), there are
> ~100 balance_dirty_pages() loops.  The latter adds quite a lot of loop overhead.
> 
> Most importantly, most background_writeout() loops only wrote a few pages
> (note the large leftover towrite values in the log), which is extremely
> inefficient. I wonder why it cannot write more?

Ah, when the queue is congested, the gap between the congestion_on and
congestion_off thresholds is small. For nr_requests=128, that gap is
only about 128/16 = 8 requests, a pretty small number.
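
Concretely (sketched from memory of 2.6.31's
blk_queue_congestion_threshold(); treat the exact formula as
approximate):

#include <stdio.h>

int main(void)
{
	int nr_requests = 128;
	int on  = nr_requests - nr_requests / 8 + 1;			/* 113 */
	int off = nr_requests - nr_requests / 8 - nr_requests / 16 - 1;	/* 103 */

	printf("congested at   %d queued requests\n", on);
	printf("uncongested at %d queued requests\n", off);
	printf("hysteresis gap: %d requests (roughly nr_requests/16)\n",
	       on - off);
	return 0;
}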

Thanks,
Fengguang

> > global dirty=246756 writeback=43800 nfs=0 flags=C_ towrite=1018 skipped=0
> > global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> > global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> > global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> > global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1024 skipped=0
> > global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1022 skipped=0
> > global dirty=247029 writeback=43527 nfs=0 flags=CM towrite=1020 skipped=0
> > global dirty=247029 writeback=43527 nfs=0 flags=C_ towrite=1024 skipped=0
> > global dirty=246665 writeback=43800 nfs=0 flags=C_ towrite=1022 skipped=0
> > global dirty=246301 writeback=44164 nfs=0 flags=C_ towrite=646 skipped=0
> 
> Thanks,
> Fengguang
> 
> Content-Description: msg2
> > 0, bdi_thresh=23810, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=24160, bdi_thresh=27860, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696602
> > bdi_nr_reclaimable=21040, bdi_thresh=24479, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=21480, bdi_thresh=24906, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=24160, bdi_thresh=27862, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=15880, bdi_thresh=18247, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=20480, bdi_thresh=23809, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=21480, bdi_thresh=24903, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=21480, bdi_thresh=24907, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145141, dirty_thresh=290283
> > global dirty=246756 writeback=43800 nfs=0 flags=C_ towrite=1018 skipped=0
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=24160, bdi_thresh=27850, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=21320, bdi_thresh=24944, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=21320, bdi_thresh=24940, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=21040, bdi_thresh=24493, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=24160, bdi_thresh=27864, background_thresh=145141, dirty_thresh=290283
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145141, dirty_thresh=290283
> > redirty_tail +435: inode 174623
> > [snip ~40 similar bdi_nr_reclaimable lines]
> > redirty_tail +435: inode 4784181
> > redirty_tail +435: inode 3474323
> > requeue_io +527: inode 0
> > requeue_io +527: inode 0
> > redirty_tail +502: inode 12415
> > [snip similar bdi_nr_reclaimable lines]
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621688
> > global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> > redirty_tail +435: inode 174623
> > redirty_tail +435: inode 4784181
> > redirty_tail +435: inode 3474323
> > requeue_io +527: inode 0
> > requeue_io +527: inode 0
> > redirty_tail +502: inode 12396
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621689
> > [snip similar bdi_nr_reclaimable lines]
> > global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696603
> > [snip similar bdi_nr_reclaimable lines]
> > global dirty=246847 writeback=43800 nfs=0 flags=C_ towrite=1023 skipped=0
> > requeue_io +527: inode 0
> > requeue_io +527: inode 0
> > redirty_tail +502: inode 12533
> > redirty_tail +435: inode 174623
> > redirty_tail +435: inode 4784181
> > redirty_tail +435: inode 3474323
> > requeue_io +527: inode 0
> > requeue_io +527: inode 0
> > redirty_tail +502: inode 13474
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621691
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696603
> > [snip similar bdi_nr_reclaimable lines]
> > global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1024 skipped=0
> > global dirty=246847 writeback=43709 nfs=0 flags=C_ towrite=1022 skipped=0
> > [snip remaining bdi_nr_reclaimable lines; bdi_nr_reclaimable stays below bdi_thresh throughout]
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24160, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23791, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24160, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23800, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24160, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24941, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24477, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23807, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23790, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23799, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24866, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23806, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24478, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24939, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24492, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23808, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24936, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24938, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24862, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24936, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18246, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24927, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24863, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24933, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24864, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23799, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> > requeue_io +527: inode 0
> > bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24860, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23807, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23790, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24478, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> > requeue_io +527: inode 0
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24939, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24937, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24866, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> > redirty_tail +502: inode 13925
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24935, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23805, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23798, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24932, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24938, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24475, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23789, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24479, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24820, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24942, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23805, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24490, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24945, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23806, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24491, background_thresh=145168, dirty_thresh=290337
> > redirty_tail +435: inode 174623
> > bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18245, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145168, dirty_thresh=290337
> > redirty_tail +435: inode 4784181
> > bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24823, background_thresh=145168, dirty_thresh=290337
> > redirty_tail +442: inode 3474323
> > bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24820, background_thresh=145168, dirty_thresh=290337
> > requeue_io +527: inode 0
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145168, dirty_thresh=290337
> > redirty_tail +502: inode 14004
> > bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24934, background_thresh=145168, dirty_thresh=290337
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696607
> > bdi_nr_reclaimable=21320, bdi_thresh=24780, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24481, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24489, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24782, background_thresh=145168, dirty_thresh=290337
> > global dirty=247029 writeback=43527 nfs=0 flags=CM towrite=1020 skipped=0
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24783, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24940, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23796, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=19720, bdi_thresh=23425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23803, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24472, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145168, dirty_thresh=290337
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621691
> > bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24477, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23787, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26625, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24781, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24947, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24474, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24946, background_thresh=145168, dirty_thresh=290337
> > global dirty=247029 writeback=43527 nfs=0 flags=C_ towrite=1024 skipped=0
> > bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24942, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24785, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21320, bdi_thresh=24826, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145168, dirty_thresh=290337
> > redirty_tail +435: inode 174623
> > bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24943, background_thresh=145168, dirty_thresh=290337
> > redirty_tail +435: inode 4784181
> > bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24945, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24944, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15240, bdi_thresh=16821, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21240, bdi_thresh=24825, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=20480, bdi_thresh=23804, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145168, dirty_thresh=290337
> > redirty_tail +442: inode 3474323
> > bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=21640, bdi_thresh=24902, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145168, dirty_thresh=290337
> > bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24907, background_thresh=145177, dirty_thresh=290355
> > requeue_io +527: inode 0
> > bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18244, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24904, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21080, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +502: inode 14023
> > bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18243, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24783, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24907, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +435: inode 21
> > bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +435: inode 174623
> > bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +435: inode 4784181
> > bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +435: inode 3474323
> > requeue_io +527: inode 0
> > bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24904, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24824, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24487, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +502: inode 14042
> > bdi_nr_reclaimable=21640, bdi_thresh=24905, background_thresh=145177, dirty_thresh=290355
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=599 n=-6696985
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24486, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19034, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15240, bdi_thresh=16820, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24860, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26611, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=20480, bdi_thresh=23801, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24854, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19026, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24823, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26615, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27412, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24488, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24471, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26620, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24822, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27822, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=20480, bdi_thresh=23794, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24820, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24476, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24487, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=20480, bdi_thresh=23785, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27820, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27422, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26424, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27824, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24820, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27427, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27416, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24866, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19031, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24867, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27425, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18238, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24473, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27810, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24485, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
> > mm/page-writeback.c 762 background_writeout: comm=pdflush pid=10090 n=-6621693
> > bdi_nr_reclaimable=21040, bdi_thresh=24482, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24483, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26418, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=20480, bdi_thresh=23800, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27823, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24862, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15240, bdi_thresh=16817, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19027, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26425, background_thresh=145177, dirty_thresh=290355
> > global dirty=246665 writeback=43800 nfs=0 flags=C_ towrite=1022 skipped=0
> > bdi_nr_reclaimable=21000, bdi_thresh=24784, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26422, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26618, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24480, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21040, bdi_thresh=24484, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26621, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15240, bdi_thresh=16819, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19033, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21640, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27815, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27426, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=24200, bdi_thresh=27821, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26423, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27413, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26420, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18238, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26626, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27423, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=20480, bdi_thresh=23802, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26421, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21440, bdi_thresh=24864, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18239, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27421, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19030, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18241, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18240, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21360, bdi_thresh=24862, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21360, bdi_thresh=24903, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23920, bdi_thresh=27418, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=22840, bdi_thresh=26414, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21320, bdi_thresh=24865, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=23080, bdi_thresh=26624, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15640, bdi_thresh=19032, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=15880, bdi_thresh=18242, background_thresh=145177, dirty_thresh=290355
> > global dirty=246301 writeback=44164 nfs=0 flags=C_ towrite=646 skipped=0
> > bdi_nr_reclaimable=15240, bdi_thresh=16818, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=19720, bdi_thresh=23385, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24786, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21280, bdi_thresh=24863, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +435: inode 174623
> > bdi_nr_reclaimable=21000, bdi_thresh=24785, background_thresh=145177, dirty_thresh=290355
> > bdi_nr_reclaimable=21000, bdi_thresh=24783, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +435: inode 4784181
> > redirty_tail +435: inode 3474323
> > requeue_io +527: inode 0
> > redirty_tail +502: inode 14061
> > bdi_nr_reclaimable=23080, bdi_thresh=26623, background_thresh=145177, dirty_thresh=290355
> > redirty_tail +442: inode 27
> > redirty_tail +435: inode 174623
> > redirty_tail +435: inode 4784181
> > redirty_tail +435: inode 3474323
> > requeue_io +527: inode 0
> > redirty_tail +502: inode 14096
> 

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-25  5:04                                     ` Dave Chinner
@ 2009-09-25  6:45                                       ` Wu Fengguang
  2009-09-28  1:07                                         ` Dave Chinner
  2009-09-25 12:06                                       ` Chris Mason
  1 sibling, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-25  6:45 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Fri, Sep 25, 2009 at 01:04:13PM +0800, Dave Chinner wrote:
> On Thu, Sep 24, 2009 at 08:38:20PM -0400, Chris Mason wrote:
> > On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> > > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > > The only place that actually honors the congestion flag is pdflush.
> > > > > It's trivial to get pdflush backed up and make it sit down without
> > > > > making any progress because once the queue congests, pdflush goes away.
> > > > 
> > > > Right. I guess that's more or less intentional - to give lowest priority
> > > > to periodic/background writeback.
> > > 
> > > IMO, this is the wrong design. Background writeback should
> > > have higher CPU/scheduler priority than normal tasks. If there is
> > > sufficient dirty pages in the system for background writeback to
> > > be active, it should be running *now* to start as much IO as it can
> > > without being held up by other, lower priority tasks.
> > 
> > I'd say that an fsync from mutt or vi should be done at a higher prio
> > than a background streaming writer.
> 
> I don't think you caught everything I said - synchronous IO is
> un-throttled.

O_SYNC writes may be un-throttled in theory, however they seem to be
throttled in practice:

  generic_file_aio_write
    __generic_file_aio_write
      generic_file_buffered_write
        generic_perform_write
          balance_dirty_pages_ratelimited
    generic_write_sync

Do you mean some other code path?

> Background writeback should dump async IO to the elevator as fast as
> it can, then get the hell out of the way. If you've got a UP system,
> then the fsync can't be issued at the same time pdflush is running
> (same as right now), and if you've got a MP system then fsync can
> run at the same time.

I think you are right for system-wide sync.

System-wide sync seems to always wait for the queued bdi writeback
work items to finish, which should be fine in terms of efficiency,
except that sync could end up doing more work and even livelocking.

> On the premise that sync IO is unthrottled and given that elevators
> queue and issue sync IO separately to async writes, fsync latency
> would be entirely derived from the elevator queuing behaviour, not
> the CPU priority of pdflush.

It's not exactly CPU priority, but queue fullness priority.

fsync operations always use nonblocking=0, so in fact they _used to_
enjoy better priority than pdflush. The same goes for vmscan pageout,
which calls writepage directly. Neither will back off on a congested bdi.

So when an fsync or pageout comes along, it will always be served first.
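
For reference, this is roughly the back-off being discussed - a
paraphrase from memory of the 2.6.31-era check in write_cache_pages(),
not an exact quote:

	if (wbc->nonblocking && bdi_write_congested(bdi)) {
		wbc->encountered_congestion = 1;
		done = 1;
		break;
	}

fsync and pageout pass nonblocking=0, so they never take this exit and
keep pushing IO at an already congested queue.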

> Look at it this way - it is the responsibility of pdflush to keep
> the elevator full of background IO. It is the responsibility of
> the elevator to ensure that background IO doesn't starve all other
> types of IO.

Agreed.

> If pdflush doesn't run because it can't get CPU time,
> then background IO does not get issued, and system performance
> suffers as a result.

pdflush is able to keep the queue about 80% full, which should be
enough for efficient streaming IOs. Small random IOs may hurt a bit though.

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-25  5:04                                     ` Dave Chinner
  2009-09-25  6:45                                       ` Wu Fengguang
@ 2009-09-25 12:06                                       ` Chris Mason
  1 sibling, 0 replies; 79+ messages in thread
From: Chris Mason @ 2009-09-25 12:06 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Wu Fengguang, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Fri, Sep 25, 2009 at 03:04:13PM +1000, Dave Chinner wrote:
> On Thu, Sep 24, 2009 at 08:38:20PM -0400, Chris Mason wrote:
> > On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> > > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > > The only place that actually honors the congestion flag is pdflush.
> > > > > It's trivial to get pdflush backed up and make it sit down without
> > > > > making any progress because once the queue congests, pdflush goes away.
> > > > 
> > > > Right. I guess that's more or less intentional - to give lowest priority
> > > > to periodic/background writeback.
> > > 
> > > IMO, this is the wrong design. Background writeback should
> > > have higher CPU/scheduler priority than normal tasks. If there is
> > > sufficient dirty pages in the system for background writeback to
> > > be active, it should be running *now* to start as much IO as it can
> > > without being held up by other, lower priority tasks.
> > 
> > I'd say that an fsync from mutt or vi should be done at a higher prio
> > than a background streaming writer.
> 
> I don't think you caught everything I said - synchronous IO is
> un-throttled. Background writeback should dump async IO to the
> elevator as fast as it can, then get the hell out of the way. If
> you've got a UP system, then the fsync can't be issued at the same
> time pdflush is running (same as right now), and if you've got a MP
> system then fsync can run at the same time. On the premise that sync
> IO is unthrottled and given that elevators queue and issue sync IO
> separately to async writes, fsync latency would be entirely derived
> from the elevator queuing behaviour, not the CPU priority of
> pdflush.

I think we've agreed for a long time on this in general.  The congestion
backoff comment was originally about IO priorities (I thought ;) so I
was trying to keep talking around IO priority and not CPU/scheduler
time.  When we get things tuned to the point that process scheduling
matters, I'll be a very happy boy.

The big change from the new code is that we will fill the queue
with async IO.

I think this is good, and I think the congestion backoff didn't really
consistently keep available requests in the queue all the time in a lot
of workloads.  But, it's still a change, and so we need to keep an eye on
it as we look at performance reports during .32.

> 
> Look at it this way - it is the responsibility of pdflush to keep
> the elevator full of background IO. It is the responsibility of
> the elevator to ensure that background IO doesn't starve all other
> types of IO. If pdflush doesn't run because it can't get CPU time,
> then background IO does not get issued, and system performance
> suffers as a result.

Most of the time that pdflush didn't get to run in my benchmark, it was
because pdflush chose to give up the CPU, not because it was starved of it.

-chris


^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-25  3:19                                   ` Wu Fengguang
@ 2009-09-26  1:47                                     ` Dave Chinner
  2009-09-26  3:02                                         ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Dave Chinner @ 2009-09-26  1:47 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Fri, Sep 25, 2009 at 11:19:20AM +0800, Wu Fengguang wrote:
> On Fri, Sep 25, 2009 at 08:11:17AM +0800, Dave Chinner wrote:
> > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > The only place that actually honors the congestion flag is pdflush.
> > > > It's trivial to get pdflush backed up and make it sit down without
> > > > making any progress because once the queue congests, pdflush goes away.
> > > 
> > > Right. I guess that's more or less intentional - to give lowest priority
> > > to periodic/background writeback.
> > 
> > IMO, this is the wrong design. Background writeback should
> > have higher CPU/scheduler priority than normal tasks. If there is
> > sufficient dirty pages in the system for background writeback to
> > be active, it should be running *now* to start as much IO as it can
> > without being held up by other, lower priority tasks.
> > 
> > Cleaning pages is important to keeping the system running smoothly.
> > Given that IO takes time to clean pages, it is therefore important
> > to issue as much as possible as quickly as possible without delays
> > before going back to sleep. Delaying issue of the IO or doing
> > sub-optimal issue simply reduces performance of the system because
> > it takes longer to clean the same number of dirty pages.
> > 
> > > > Nothing stops other procs from keeping the queue congested forever.
> > > > This can only be fixed by making everyone wait for congestion, at which
> > > > point we might as well wait for requests.
> > > 
> > > Yes. That gives everyone somehow equal opportunity, this is a policy change
> > > that may lead to interesting effects, as well as present a challenge to
> > > get_request_wait(). That said, I'm not against the change to a wait queue
> > > in general.
> > 
> > If you block all threads doing _writebehind caching_ (synchronous IO
> > is self-throttling) to the same BDI on the same queue as the bdi
> > flusher then when congestion clears the higher priority background
> > flusher thread should run first and issue more IO.  This should
> > happen as a natural side-effect of our scheduling algorithms and it
> > gives preference to efficient background writeback over inefficient
> > foreground writeback. Indeed, with this approach we can even avoid
> > foreground writeback altogether...
> 
> I don't see how balance_dirty_pages() writeout is less efficient than
> pdflush writeout.
> 
> They all called the same routines to do the job.
> balance_dirty_pages() sets nr_to_write=1536 at least for ext4 and xfs
> (unless memory is tight; btrfs is 1540), which is in fact 50% bigger
> than the 1024 pages used by pdflush.

> Sure, but the problem now is that you are above the
bdi->dirty_exceeded threshold, foreground writeback tries to issue
1536 pages of IO every 8 pages that are dirtied. That means you'll
block just about every writing process in writeback at the same time
and they will all be blocked in congestion trying to write different
inodes....

> And it won't back off on congestion.

And that is, IMO, a major problem.

> The s_io/b_io queues are shared, so a balance_dirty_pages() will just
> continue from where the last sync thread exited. So it does not make
> much difference who initiates the IO. Did I miss something?

The current implementation uses the request queue to do that
blocking at IO submission time. This is based on the premise that if
we write a certain number of pages, we're guaranteed to have waited
long enough for that many pages to come clean. However, every other
thread doing writes and being throttled does the same thing.  This
leads to N IO submitters from at least N different inodes at the
same time. Which inode gets written when congestion clears is
anyone's guess - it's a thundering herd IIUC the congestion
implementation correctly.

The result is that we end up with N different sets of IO being
issued with potentially zero locality to each other, resulting in
much lower elevator sort/merge efficiency and hence we seek the disk
all over the place to service the different sets of IO.

OTOH, if there is only one submission thread, it doesn't jump
between inodes in the same way when congestion clears - it keeps
writing to the same inode, resulting in large related chunks of
sequential IOs being issued to the disk. This is more efficient than
the above foreground writeback because the elevator works better and
the disk seeks less.

As you can probably guess, I think foreground writeout is the wrong
architecture because of the behaviour it induces under heavy
multithreaded IO patterns. I agree that it works OK if we continue
tweaking it to fix problems.

However, my concern is that if it isn't constantly observed, tweaked
and maintained, performance goes backwards as other code changes.
i.e. there is a significant maintenance burden and looking at the
problems once every couple of years (last big maintenance rounds were
2.6.15/16, 2.6.23/24, now 2.6.31/32) isn't good enough to prevent
performance from sliding backwards from release to release.

----

The rest of this is an idea I've been kicking around for a while
which is derived from IO throttling work I've done during a
past life.  I haven't had time to research and prototype it to see
if it performs any better under really heavy load, but I'm going to
throw it out anyway so that everyone can see a little bit more about
what I'm thinking.

My fundamental premise is that throttling does not require IO to be
issued from the thread to be throttled. The essence of write
throttling is waiting for more pages to be cleaned in a period of
time than has been dirtied. i.e.  What we are really waiting on is
pages to be cleaned.

Based on this observation, if we have a flusher thread working in
the background, we don't need to submit more IO when we need to
throttle as all we need to do is wait for a certain number of pages
to transition to the clean state.

If we take a leaf from XFS's book by doing work at IO completion
rather than submission we can keep a count of the number of pages
cleaned on the bdi. This can be used to implement a FIFO-like
throttle. If we add a simple ticket system to the bdi, when a
process needs to be throttled it can submit a ticket with the number
of pages it has dirtied to the bdi, and the bdi can then decide what
the page cleaned count needs to reach before the process can be
woken.

i.e. Take the following ascii art showing the bdi flusher
thread running and issuing IO in the background:

bdi write thread:   +---X------Y---+-A-----ZB----+------C--------+
1st proc:		o............o
2nd proc:		       o............o
3rd proc:		                   o............o

When the 1st process comes in to be throttled, it samples the page
clean count and gets X. It submits a ticket to be woken at A (X +
some number of pages). If the flusher thread is not running, it gets
kicked.  Process 2 and 3 do the same at Y and Z to be woken at B and
C. At IO completion, the number of pages cleaned is counted and the
tickets that are now under the clean count are pulled from the queue
and the processes that own them are woken.

This avoids the thundering herd problem and applies throttling in
a deterministic, predictable fashion. And by relying on background
writeback, we only have one writeback path to optimise, not two
different paths that interchange unpredictably.

In essence, this mechanism replaces the complex path of IO
submission and congestion with a simple, deterministic counter and
queue system that probably doesn't even require any memory
allocation to implement. I think the simpler a throttle mechanism
is, the more likely it is to work effectively....

I know that words without code aren't going to convince anyone, but I
hope I've given you some food for thought about alternatives to what
we currently do. ;)
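
For concreteness, though, here is a minimal sketch of the ticket
scheme above. Every identifier in it (bdi_throttle_ticket,
bdi->written, bdi->throttle_list, bdi_kick_flusher()) is invented
purely for illustration and exists in no tree:

	struct bdi_throttle_ticket {
		struct list_head	list;
		unsigned long		wakeup;	/* wake when bdi->written reaches this */
		struct task_struct	*task;
	};

	/* throttled writer: queue a ticket in FIFO order and sleep */
	static void bdi_throttle(struct backing_dev_info *bdi,
				 unsigned long nr_dirtied)
	{
		struct bdi_throttle_ticket t = { .task = current };

		spin_lock(&bdi->throttle_lock);
		t.wakeup = bdi->written + nr_dirtied;
		list_add_tail(&t.list, &bdi->throttle_list);
		/* set the task state under the lock so the completion
		 * side cannot wake us before we are ready to sleep */
		set_current_state(TASK_UNINTERRUPTIBLE);
		spin_unlock(&bdi->throttle_lock);

		bdi_kick_flusher(bdi);
		schedule();
	}

	/* IO completion: credit cleaned pages, pop satisfied tickets */
	static void bdi_pages_cleaned(struct backing_dev_info *bdi, int nr)
	{
		struct bdi_throttle_ticket *t, *next;

		spin_lock(&bdi->throttle_lock);
		bdi->written += nr;
		list_for_each_entry_safe(t, next, &bdi->throttle_list, list) {
			if (t->wakeup > bdi->written)
				break;	/* FIFO: later tickets wake later */
			list_del(&t->list);
			wake_up_process(t->task);
		}
		spin_unlock(&bdi->throttle_lock);
	}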

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-26  1:47                                     ` Dave Chinner
@ 2009-09-26  3:02                                         ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-26  3:02 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe, Jan Kara, linux-fsdevel

On Sat, Sep 26, 2009 at 09:47:15AM +0800, Dave Chinner wrote:
> On Fri, Sep 25, 2009 at 11:19:20AM +0800, Wu Fengguang wrote:
> > On Fri, Sep 25, 2009 at 08:11:17AM +0800, Dave Chinner wrote:
> > > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > > The only place that actually honors the congestion flag is pdflush.
> > > > > It's trivial to get pdflush backed up and make it sit down without
> > > > > making any progress because once the queue congests, pdflush goes away.
> > > > 
> > > > Right. I guess that's more or less intentional - to give lowest priority
> > > > to periodic/background writeback.
> > > 
> > > IMO, this is the wrong design. Background writeback should
> > > have higher CPU/scheduler priority than normal tasks. If there is
> > > sufficient dirty pages in the system for background writeback to
> > > be active, it should be running *now* to start as much IO as it can
> > > without being held up by other, lower priority tasks.
> > > 
> > > Cleaning pages is important to keeping the system running smoothly.
> > > Given that IO takes time to clean pages, it is therefore important
> > > to issue as much as possible as quickly as possible without delays
> > > before going back to sleep. Delaying issue of the IO or doing
> > > sub-optimal issue simply reduces performance of the system because
> > > it takes longer to clean the same number of dirty pages.
> > > 
> > > > > Nothing stops other procs from keeping the queue congested forever.
> > > > > This can only be fixed by making everyone wait for congestion, at which
> > > > > point we might as well wait for requests.
> > > > 
> > > > Yes. That gives everyone somehow equal opportunity, this is a policy change
> > > > that may lead to interesting effects, as well as present a challenge to
> > > > get_request_wait(). That said, I'm not against the change to a wait queue
> > > > in general.
> > > 
> > > If you block all threads doing _writebehind caching_ (synchronous IO
> > > is self-throttling) to the same BDI on the same queue as the bdi
> > > flusher then when congestion clears the higher priority background
> > > flusher thread should run first and issue more IO.  This should
> > > happen as a natural side-effect of our scheduling algorithms and it
> > > gives preference to efficient background writeback over inefficient
> > > foreground writeback. Indeed, with this approach we can even avoid
> > > foreground writeback altogether...
> > 
> > I don't see how balance_dirty_pages() writeout is less efficient than
> > pdflush writeout.
> > 
> > They all called the same routines to do the job.
> > balance_dirty_pages() sets nr_to_write=1536 at least for ext4 and xfs
> > (unless memory is tight; btrfs is 1540), which is in fact 50% bigger
> > than the 1024 pages used by pdflush.
> 
> Sure, but the problem now is that you are above the
> bdi->dirty_exceeded threshold, foreground writeback tries to issue
> 1536 pages of IO every 8 pages that are dirtied. That means you'll

I'd suggest increasing that "8 pages". It may have been good when
ratelimit_pages was statically set to 32. Now that ratelimit_pages is
dynamically set to a much larger value at boot time, we'd better use
ratelimit_pages/4 for the dirty_exceeded case.
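
A minimal sketch of that suggestion, against the 2.6.31-era
balance_dirty_pages_ratelimited_nr() (the surrounding lines are
paraphrased from memory, not quoted from a tree):

	unsigned long ratelimit = ratelimit_pages;

	if (mapping->backing_dev_info->dirty_exceeded)
		ratelimit = ratelimit_pages / 4;	/* was: ratelimit = 8 */

so the dirty-exceeded write chunk scales with the boot-time value
instead of being pinned at 8 pages.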

> block just about every writing process in writeback at the same time
> and they will all be blocked in congestion trying to write different
> inodes....

Ah got it, balance_dirty_pages could lead to too much concurrency and
thus seeks!

> > And it won't back off on congestion.
> 
> And that is, IMO, a major problem.

Not necessarily, the above concurrency problem occurs even if it backs
off on congestion. But anyway we are switching to get_request_wait now.

> > The s_io/b_io queues are shared, so a balance_dirty_pages() will just
> > continue from where the last sync thread exited. So it does not make
> > much difference who initiates the IO. Did I miss something?
> 
> The current implementation uses the request queue to do that
> blocking at IO submission time. This is based on the premise that if
> we write a certain number of pages, we're guaranteed to have waited
> long enough for that many pages to come clean.

Right.

> However, every other
> thread doing writes and being throttled does the same thing.  This
> leads to N IO submitters from at least N different inodes at the
> same time. Which inode gets written when congestion clears is
> anyone's guess - it's a thundering herd IIUC the congestion
> implementation correctly.

Good insight, thanks for pointing this out!

> The result is that we end up with N different sets of IO being
> issued with potentially zero locality to each other, resulting in
> much lower elevator sort/merge efficiency and hence we seek the disk
> all over the place to service the different sets of IO.

Right. But note that its negative effects are not that common given
the current parameters MAX_WRITEBACK_PAGES=1024, max_sectors_kb=512
and nr_requests=128. Since we move on to the next inode anyway once
4MB of it has been enqueued, each inode occupies at most 4096KB/512KB
= 8 requests, so a request queue can hold IO from up to 128/8 = 16
inodes.

So the problem will turn up when there are >= 16 throttled processes.

When we increase MAX_WRITEBACK_PAGES to 128MB, even _one_ foreground
writeout will hurt.

> OTOH, if there is only one submission thread, it doesn't jump
> between inodes in the same way when congestion clears - it keeps
> writing to the same inode, resulting in large related chunks of
> sequential IOs being issued to the disk. This is more efficient than
> the above foreground writeback because the elevator works better and
> the disk seeks less.
> 
> As you can probably guess, I think foreground writeout is the wrong
> architecture because of the behaviour it induces under heavy
> multithreaded IO patterns. I agree that it works OK if we continue
> tweaking it to fix problems.

Agreed. 

> However, my concern is that if it isn't constantly observed, tweaked
> and maintained, performance goes backwards as other code changes.
> i.e. there is a significant maintenance burden and looking at the
> problems once every couple of years (last big maintenance rounds were
> 2.6.15/16, 2.6.23/24, now 2.6.31/32) isn't good enough to prevent
> performance from sliding backwards from release to release.

One problem is that the queues are very tightly coupled. Every change
of behavior leads to reevaluation of other factors.

> ----
> 
> The rest of this is an idea I've been kicking around for a while
> which is derived from IO throttling work I've done during a
> past life.  I haven't had time to research and prototype it to see
> if it performs any better under really heavy load, but I'm going to
> throw it out anyway so that everyone can see a little bit more about
> what I'm thinking.
> 
> My fundamental premise is that throttling does not require IO to be
> issued from the thread to be throttled. The essence of write
> throttling is waiting for more pages to be cleaned in a period of
> time than has been dirtied. i.e.  What we are really waiting on is
> pages to be cleaned.

Yes, Peter's __bdi_writeout_inc() is a good watch point.

> Based on this observation, if we have a flusher thread working in
> the background, we don't need to submit more IO when we need to
> throttle as all we need to do is wait for a certain number of pages
> to transition to the clean state.

It's great that you and Mason (and others) are advocating the same idea :)
Jan and I proposed some possible solutions recently:

        http://lkml.org/lkml/2009/9/14/126

> If we take a leaf from XFS's book by doing work at IO completion
> rather than submission we can keep a count of the number of pages
> cleaned on the bdi. This can be used to implement a FIFO-like
> throttle. If we add a simple ticket system to the bdi, when a
> process needs to be throttled it can submit a ticket with the number
> of pages it has dirtied to the bdi, and the bdi can then decide what
> the page cleaned count needs to reach before the process can be
> woken.

Exactly.

> i.e. Take the following ascii art showing the bdi flusher
> thread running and issuing IO in the background:
> 
> bdi write thread:   +---X------Y---+-A-----ZB----+------C--------+
> 1st proc:		o............o
> 2nd proc:		       o............o
> 3rd proc:		                   o............o
> 
> When the 1st process comes in to be throttled, it samples the page
> clean count and gets X. It submits a ticket to be woken at A (X +
> some number of pages). If the flusher thread is not running, it gets
> kicked.  Process 2 and 3 do the same at Y and Z to be woken at B and
> C. At IO completion, the number of pages cleaned is counted and the
> tickets that are now under the clean count are pulled from the queue
> and the processes that own them are woken.
> 
> This avoids the thundering herd problem and applies throttling in
> a deterministic, predictable fashion. And by relying on background
> writeback, we only have one writeback path to optimise, not two
> different paths that interchange unpredictably.
> 
> In essence, this mechanism replaces the complex path of IO
> submission and congestion with a simple, deterministic counter and
> queue system that probably doesn't even require any memory
> allocation to implement. I think the simpler a throttle mechanism
> is, the more likely it is to work effectively....
> 
> I know that words without code aren't going to convince anyone, but I
> hope I've given you some food for thought about alternatives to what
> we currently do. ;)

Not at all, it's clear enough - your idea is very similar to Jan's
proposal. And I'd like to present some ascii art for another scheme:


                                          \                /
                  one bdi sync thread      \............../
               =========================>   \............/
                 working on dirty pages      \........../
                                              \......../
                                               \....../
                                                \..../
                                                 \../

  throttled    |  |    |  |    |  |     |  |     |  |     wakeup when enough
============>  |  |    |  |    |  |     |  |     |::| ==> pages are put to io
  task list    +--+    +--+    +--+     +--+     +--+     on behalf of this task
              task 5  task 4  task 3   task 2   task 1

               [.] dirty page  [:] writeback page

One benefit of this scheme is that, when necessary, the task list
could be converted to some kind of tree to implement priorities, and
thus an IO controller for buffered writes :)

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-25  6:45                                       ` Wu Fengguang
@ 2009-09-28  1:07                                         ` Dave Chinner
  2009-09-28  7:15                                           ` Wu Fengguang
  2009-09-28 14:25                                           ` Chris Mason
  0 siblings, 2 replies; 79+ messages in thread
From: Dave Chinner @ 2009-09-28  1:07 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Fri, Sep 25, 2009 at 02:45:03PM +0800, Wu Fengguang wrote:
> On Fri, Sep 25, 2009 at 01:04:13PM +0800, Dave Chinner wrote:
> > On Thu, Sep 24, 2009 at 08:38:20PM -0400, Chris Mason wrote:
> > > On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> > > > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > > > The only place that actually honors the congestion flag is pdflush.
> > > > > > It's trivial to get pdflush backed up and make it sit down without
> > > > > > making any progress because once the queue congests, pdflush goes away.
> > > > > 
> > > > > Right. I guess that's more or less intentional - to give lowest priority
> > > > > to periodic/background writeback.
> > > > 
> > > > IMO, this is the wrong design. Background writeback should
> > > > have higher CPU/scheduler priority than normal tasks. If there is
> > > > sufficient dirty pages in the system for background writeback to
> > > > be active, it should be running *now* to start as much IO as it can
> > > > without being held up by other, lower priority tasks.
> > > 
> > > I'd say that an fsync from mutt or vi should be done at a higher prio
> > > than a background streaming writer.
> > 
> > I don't think you caught everything I said - synchronous IO is
> > un-throttled.
> 
> O_SYNC writes may be un-throttled in theory, however it seems to be
> throttled in practice:
> 
>   generic_file_aio_write
>     __generic_file_aio_write
>       generic_file_buffered_write
>         generic_perform_write
>           balance_dirty_pages_ratelimited
>     generic_write_sync
> 
> Do you mean some other code path?

In the context of the setup I was talking about, I meant that sync
IO _should_ be unthrottled because it is self-throttling by its
very nature. The current code makes no differentiation between the
two.

> > Background writeback should dump async IO to the elevator as fast as
> > it can, then get the hell out of the way. If you've got a UP system,
> > then the fsync can't be issued at the same time pdflush is running
> > (same as right now), and if you've got a MP system then fsync can
> > run at the same time.
> 
> I think you are right for system wide sync.
> 
> System wide sync seems to always wait for the queued bdi writeback
> works to finish, which should be fine in terms of efficiency, except
> that sync could end up do more works and even live lock.
> 
> > On the premise that sync IO is unthrottled and given that elevators
> > > queue and issue sync IO separately to async writes, fsync latency
> > would be entirely derived from the elevator queuing behaviour, not
> > the CPU priority of pdflush.
> 
> It's not exactly CPU priority, but queue fullness priority.

That's exactly what I implied. The elevator manages the
queue fullness and decides when to block background or
foreground writes. The problem is, the elevator can't make a sane
scheduling decision because it can't tell the difference between
async and sync IO, because we don't propagate that information to
the block layer from the VFS.

We have all the smarts in the block layer interface to distinguish
between sync and async IO and the elevators do smart stuff with this
information. But by throwing away that information at the VFS level,
we hamstring the elevator scheduler because it never sees any
"synchronous" write IO for data writes. Hence any synchronous data
write gets stuck in the same queue with all the background stuff
and doesn't get priority.
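
As an illustration of the kind of propagation being asked for (a
sketch, not anybody's patch - the helper name is invented, though
WRITE and WRITE_SYNC are real block layer flags in this era):

	/* let ->writepage callers pick a block layer write flag from
	 * the writeback mode the VFS already knows about */
	static inline int wbc_to_write_flag(struct writeback_control *wbc)
	{
		return wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE;
	}

With something like this plumbed through, an fsync-driven writeout
would reach the elevator marked as sync and could be scheduled ahead
of the background stream.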

Hence right now if you issue an fsync or pageout, it's a crap shoot
as to whether the elevator will schedule it first or last behind
other IO. The fact that they then ignore congestion is relying on a
side effect to stop background writeback and allow the fsync to
monopolise the elevator. It is not predictable and hence IO patterns
under load will change all the time regardless of whether the system
is in a steady state or not.

IMO there are architectural failings from top to bottom in the
writeback stack - while people are interested in fixing stuff, I
figured that they should be pointed out to give y'all something to
think about...

> fsync operations always use nonblocking=0, so in fact they _used to_
> enjoy better priority than pdflush. The same goes for vmscan pageout,
> which calls writepage directly. Neither will back off on a congested bdi.
> 
> So when an fsync or pageout comes along, it will always be served first.

pageout is so horribly inefficient from an IO perspective it is not
funny. It is one of the reasons Linux sucks so much when under
memory pressure. It basically causes the system to do random 4k
writeback of dirty pages (and lumpy reclaim can make it
synchronous!). 

pageout needs an enema, and preferably it should defer to background
writeback to clean pages. background writeback will clean pages
much, much faster than the random crap that pageout spews at the
disk right now.

Given that I can basically lock up my 2.6.30-based laptop for 10-15
minutes at a time with the disk running flat out in low memory
situations simply by starting to copy a large file(*), I think that
the way we currently handle dirty page writeback needs a bit of a
rethink.

(*) I had this happen 4-5 times last week moving VM images around on
my laptop, and it involved the Linux VM switching between pageout
and swapping to make more memory available while the copy was
hammering the same drive with dirty pages from foreground writeback.
It made for extremely fragmented files when the machine finally
recovered because of the non-sequential writeback patterns on the
single file being copied.  You can't tell me that this is sane,
desirable behaviour, and this is the sort of problem that I want
sorted out. I don't believe it can be fixed by maintaining the
number of uncoordinated, competing writeback mechanisms we currently
have.

> Small random IOs may hurt a bit though.

They *always* hurt, and under load, that appears to be the common IO
pattern that Linux is generating....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: regression in page writeback
  2009-09-28  1:07                                         ` Dave Chinner
@ 2009-09-28  7:15                                           ` Wu Fengguang
  2009-09-28 13:08                                             ` Christoph Hellwig
  2009-09-29  0:15                                             ` Wu Fengguang
  2009-09-28 14:25                                           ` Chris Mason
  1 sibling, 2 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-28  7:15 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Mon, Sep 28, 2009 at 09:07:00AM +0800, Dave Chinner wrote:
> On Fri, Sep 25, 2009 at 02:45:03PM +0800, Wu Fengguang wrote:
> > On Fri, Sep 25, 2009 at 01:04:13PM +0800, Dave Chinner wrote:
> > > On Thu, Sep 24, 2009 at 08:38:20PM -0400, Chris Mason wrote:
> > > > On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> > > > > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > > > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > > > > The only place that actually honors the congestion flag is pdflush.
> > > > > > > It's trivial to get pdflush backed up and make it sit down without
> > > > > > > making any progress because once the queue congests, pdflush goes away.
> > > > > > 
> > > > > > Right. I guess that's more or less intentional - to give lowest priority
> > > > > > to periodic/background writeback.
> > > > > 
> > > > > IMO, this is the wrong design. Background writeback should
> > > > > have higher CPU/scheduler priority than normal tasks. If there is
> > > > > sufficient dirty pages in the system for background writeback to
> > > > > be active, it should be running *now* to start as much IO as it can
> > > > > without being held up by other, lower priority tasks.
> > > > 
> > > > I'd say that an fsync from mutt or vi should be done at a higher prio
> > > > than a background streaming writer.
> > > 
> > > I don't think you caught everything I said - synchronous IO is
> > > un-throttled.
> > 
> > O_SYNC writes may be un-throttled in theory, however it seems to be
> > throttled in practice:
> > 
> >   generic_file_aio_write
> >     __generic_file_aio_write
> >       generic_file_buffered_write
> >         generic_perform_write
> >           balance_dirty_pages_ratelimited
> >     generic_write_sync
> > 
> > Do you mean some other code path?
> 
> In the context of the setup I was talking about, I meant is that sync
> IO _should_ be unthrottled because it is self-throttling by it's
> very nature. The current code makes no differentiation between the
> two.

Yes, O_SYNC writers are double-throttled now.

> > > Background writeback should dump async IO to the elevator as fast as
> > > it can, then get the hell out of the way. If you've got a UP system,
> > > then the fsync can't be issued at the same time pdflush is running
> > > (same as right now), and if you've got a MP system then fsync can
> > > run at the same time.
> > 
> > I think you are right for system wide sync.
> > 
> > System wide sync seems to always wait for the queued bdi writeback
> > works to finish, which should be fine in terms of efficiency, except
> > that sync could end up do more works and even live lock.
> > 
> > > On the premise that sync IO is unthrottled and given that elevators
> > > queue and issue sync IO separately to async writes, fsync latency
> > > would be entirely derived from the elevator queuing behaviour, not
> > > the CPU priority of pdflush.
> > 
> > It's not exactly CPU priority, but queue fullness priority.
> 
> That's exactly what I implied. The elevator manages the
> queue fullness and when it decides when to block background or
> foreground writes. The problem is, the elevator can't make a sane
> scheduling decision because it can't tell the difference between
> async and sync IO because we don't propagate that information to
> THE Block layer from the VFS.
> 
> We have all the smarts in the block layer interface to distinguish
> between sync and async IO and the elevators do smart stuff with this
> information. But by throwing away that information at the VFS level,
> we hamstring the elevator scheduler because it never sees any
> "synchronous" write IO for data writes. Hence any synchronous data
> write gets stuck in the same queue with all the background stuff
> and doesn't get priority.

Yes this is a problem. We may also need to add priority awareness to
get_request_wait() to get a complete solution.

> Hence right now if you issue an fsync or pageout, it's a crap shoot
> as to whether the elevator will schedule it first or last behind
> other IO. The fact that they then ignore congestion is relying on a
> side effect to stop background writeback and allow the fsync to
> monopolise the elevator. It is not predictable and hence IO patterns
> under load will change all the time regardless of whether the system
> is in a steady state or not.
>
> IMO there are architectural failings from top to bottom in the
> writeback stack - while people are interested in fixing stuff, I
> figured that they should be pointed out to give y'all something to
> think about...

Thanks, your information helps a lot.

> > fsync operations always use nonblocking=0, so in fact they _used to_
> > enjoy better priority than pdflush. The same goes for vmscan pageout,
> > which calls writepage directly. Neither will back off on a congested bdi.
> > 
> > So when an fsync or pageout comes along, it will always be served first.
> 
> pageout is so horribly inefficient from an IO perspective it is not
> funny. It is one of the reasons Linux sucks so much when under
> memory pressure. It basically causes the system to do random 4k
> writeback of dirty pages (and lumpy reclaim can make it
> synchronous!). 
> 
> pageout needs an enema, and preferably it should defer to background
> writeback to clean pages. background writeback will clean pages
> much, much faster than the random crap that pageout spews at the
> disk right now.
> 
> Given that I can basically lock up my 2.6.30-based laptop for 10-15
> minutes at a time with the disk running flat out in low memory
> situations simply by starting to copy a large file(*), I think that
> the way we currently handle dirty page writeback needs a bit of a
> rethink.
> 
> (*) I had this happen 4-5 times last week moving VM images around on
> my laptop, and it involved the Linux VM switching between pageout
> and swapping to make more memory available while the copy was was
> hammering the same drive with dirty pages from foreground writeback.
> It made for extremely fragmented files when the machine finally
> recovered because of the non-sequential writeback patterns on the
> single file being copied.  You can't tell me that this is sane,
> desirable behaviour, and this is the sort of problem that I want
> sorted out. I don't beleive it can be fixed by maintaining the
> number of uncoordinated, competing writeback mechanisms we currently
> have.

I imagined some lumpy pageout policy would help, but didn't realize
it was such a severe problem that could happen in daily desktop workloads.

Below is a quick patch. Any comments?

> > Small random IOs may hurt a bit though.
> 
> They *always* hurt, and under load, that appears to be the common IO
> pattern that Linux is generating....

Thanks,
Fengguang
---

vmscan: lumpy pageout

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 mm/vmscan.c |   72 +++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 63 insertions(+), 9 deletions(-)

--- linux.orig/mm/vmscan.c	2009-09-28 11:45:48.000000000 +0800
+++ linux/mm/vmscan.c	2009-09-28 14:45:19.000000000 +0800
@@ -344,6 +344,64 @@ typedef enum {
 	PAGE_CLEAN,
 } pageout_t;
 
+#define LUMPY_PAGEOUT_PAGES	(512 * 1024 / PAGE_CACHE_SIZE)
+
+static pageout_t try_lumpy_pageout(struct page *page,
+				   struct address_space *mapping,
+				   struct writeback_control *wbc)
+{
+	struct page *pages[PAGEVEC_SIZE];
+	pgoff_t start;
+	int total;
+	int count;
+	int i;
+	int err;
+	int res = 0;
+
+	page_cache_get(page);
+	pages[0] = page;
+	i = 0;
+	count = 1;
+	start = page->index + 1;
+
+	for (total = LUMPY_PAGEOUT_PAGES; total > 0; total--) {
+		if (i >= count) {
+			i = 0;
+			count = find_get_pages(mapping, start,
+					       min(total, PAGEVEC_SIZE), pages);
+			if (!count)
+				break;
+
+			/* continuous? */
+			if (start + count - 1 != pages[count - 1]->index)
+				break;
+
+			start += count;
+		}
+
+		page = pages[i];
+		if (!PageDirty(page))
+			break;
+		if (!PageActive(page))
+			SetPageReclaim(page);
+		err = mapping->a_ops->writepage(page, wbc);
+		if (err < 0)
+			handle_write_error(mapping, page, err);
+		if (err == AOP_WRITEPAGE_ACTIVATE) {
+			ClearPageReclaim(page);
+			res = PAGE_ACTIVATE;
+			break;
+		}
+		page_cache_release(page);
+		i++;
+	}
+
+	for (; i < count; i++)
+		page_cache_release(pages[i]);
+
+	return res;
+}
+
 /*
  * pageout is called by shrink_page_list() for each dirty page.
  * Calls ->writepage().
@@ -392,21 +450,17 @@ static pageout_t pageout(struct page *pa
 		int res;
 		struct writeback_control wbc = {
 			.sync_mode = WB_SYNC_NONE,
-			.nr_to_write = SWAP_CLUSTER_MAX,
+			.nr_to_write = LUMPY_PAGEOUT_PAGES,
 			.range_start = 0,
 			.range_end = LLONG_MAX,
 			.nonblocking = 1,
 			.for_reclaim = 1,
 		};
 
-		SetPageReclaim(page);
-		res = mapping->a_ops->writepage(page, &wbc);
-		if (res < 0)
-			handle_write_error(mapping, page, res);
-		if (res == AOP_WRITEPAGE_ACTIVATE) {
-			ClearPageReclaim(page);
-			return PAGE_ACTIVATE;
-		}
+
+		res = try_lumpy_pageout(page, mapping, &wbc);
+		if (res == PAGE_ACTIVATE)
+			return res;
 
 		/*
 		 * Wait on writeback if requested to. This happens when


* Re: regression in page writeback
  2009-09-28  7:15                                           ` Wu Fengguang
@ 2009-09-28 13:08                                             ` Christoph Hellwig
  2009-09-28 14:07                                               ` Theodore Tso
  2009-09-29  2:32                                               ` Wu Fengguang
  2009-09-29  0:15                                             ` Wu Fengguang
  1 sibling, 2 replies; 79+ messages in thread
From: Christoph Hellwig @ 2009-09-28 13:08 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Dave Chinner, Chris Mason, Andrew Morton, Peter Zijlstra, Li,
	Shaohua, linux-kernel, richard, jens.axboe

On Mon, Sep 28, 2009 at 03:15:07PM +0800, Wu Fengguang wrote:
> +		if (!PageActive(page))
> +			SetPageReclaim(page);
> +		err = mapping->a_ops->writepage(page, wbc);
> +		if (err < 0)
> +			handle_write_error(mapping, page, res);
> +		if (err == AOP_WRITEPAGE_ACTIVATE) {
> +			ClearPageReclaim(page);
> +			res = PAGE_ACTIVATE;
> +			break;
> +		}

This should help a bit for XFS as it historically does multi-page
writeouts from ->writepages (and apparently btrfs that added some
write-around recently?) but not those brave filesystems only
implementing the multi-page writeout from writepages as designed.

But really, the best would be to leave the writeout to the flusher
threads and just reclaim the clean pages from the VM.
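
Very roughly, the reclaim side could then look something like this --
an untested sketch only, where bdi_start_writeback() stands in for the
per-bdi flusher kick (exact signature illustrative here) and
LUMPY_PAGEOUT_PAGES is from the patch above:

	/*
	 * Don't issue 4k IO from reclaim context: tag the page so it is
	 * rotated to the inactive tail once clean, ask the flusher to
	 * write out a big chunk, and let reclaim pick it up next pass.
	 */
	if (PageDirty(page)) {
		SetPageReclaim(page);
		bdi_start_writeback(mapping->backing_dev_info,
				    LUMPY_PAGEOUT_PAGES);
		goto keep_locked;
	}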


* Re: regression in page writeback
  2009-09-28 13:08                                             ` Christoph Hellwig
@ 2009-09-28 14:07                                               ` Theodore Tso
  2009-09-30  5:26                                                 ` Wu Fengguang
  2009-09-29  2:32                                               ` Wu Fengguang
  1 sibling, 1 reply; 79+ messages in thread
From: Theodore Tso @ 2009-09-28 14:07 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Wu Fengguang, Dave Chinner, Chris Mason, Andrew Morton,
	Peter Zijlstra, Li, Shaohua, linux-kernel, richard, jens.axboe

On Mon, Sep 28, 2009 at 09:08:04AM -0400, Christoph Hellwig wrote:
> This should help a bit for XFS as it historically does multi-page
> writeouts from ->writepages (and apprently btrfs that added some
> write-around recently?) but not those brave filesystems only
> implementing the multi-page writeout from writepages as designed.

Here's the hack which I'm currently working on to work around the
writeback code limiting writebacks to 1024 pages.  I'm assuming this
is going to be a short-term hack, assuming the writeback code gets more
intelligent, but I thought I would throw this into the mix....

	     	   	     	   	      - Ted

ext4: Adjust ext4_da_writepages() to write out larger contiguous chunks

Work around problems in the writeback code to force out writebacks in
larger chunks than just 4mb, which is too small.  This also works
around limitations in the ext4 block allocator, which can't allocate
more than 2048 blocks at a time.  So we need to defeat the round-robin
characteristics of the writeback code and try to write out as many
blocks of one inode as possible before allowing the writeback code to
move on to another inode.  We add a new per-filesystem tunable,
max_contig_writeback_mb, which caps this to a default of 128mb per
inode.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
---
 fs/ext4/ext4.h              |    1 +
 fs/ext4/inode.c             |  100 +++++++++++++++++++++++++++++++++++++-----
 fs/ext4/super.c             |    3 +
 include/trace/events/ext4.h |   14 ++++--
 4 files changed, 102 insertions(+), 16 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index e227eea..9f99427 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -942,6 +942,7 @@ struct ext4_sb_info {
 	unsigned int s_mb_stats;
 	unsigned int s_mb_order2_reqs;
 	unsigned int s_mb_group_prealloc;
+	unsigned int s_max_contig_writeback_mb;
 	/* where last allocation was done - for stream allocation */
 	unsigned long s_mb_last_group;
 	unsigned long s_mb_last_start;
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 5fb72a9..9e0acb7 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1145,6 +1145,60 @@ static int check_block_validity(struct inode *inode, const char *msg,
 }
 
 /*
+ * Return the number of dirty pages in the given inode starting at
+ * page frame idx.
+ */
+static pgoff_t ext4_num_dirty_pages(struct inode *inode, pgoff_t idx)
+{
+	struct address_space *mapping = inode->i_mapping;
+	pgoff_t	index;
+	struct pagevec pvec;
+	pgoff_t num = 0;
+	int i, nr_pages, done = 0;
+
+	pagevec_init(&pvec, 0);
+
+	while (!done) {
+		index = idx;
+		nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
+					      PAGECACHE_TAG_DIRTY,
+					      (pgoff_t)PAGEVEC_SIZE);
+		if (nr_pages == 0)
+			break;
+		for (i = 0; i < nr_pages; i++) {
+			struct page *page = pvec.pages[i];
+			struct buffer_head *bh, *head;
+
+			lock_page(page);
+			if (unlikely(page->mapping != mapping) ||
+			    !PageDirty(page) ||
+			    PageWriteback(page) ||
+			    page->index != idx) {
+				done = 1;
+				unlock_page(page);
+				break;
+			}
+			head = page_buffers(page);
+			bh = head;
+			do {
+				if (!buffer_delay(bh) &&
+				    !buffer_unwritten(bh)) {
+					done = 1;
+					break;
+				}
+			} while ((bh = bh->b_this_page) != head);
+			unlock_page(page);
+			if (done)
+				break;
+			idx++;
+			num++;
+		}
+		pagevec_release(&pvec);
+	}
+	return num;
+}
+
+/*
  * The ext4_get_blocks() function tries to look up the requested blocks,
  * and returns if the blocks are already mapped.
  *
@@ -2744,7 +2798,8 @@ static int ext4_da_writepages(struct address_space *mapping,
 	int pages_written = 0;
 	long pages_skipped;
 	int range_cyclic, cycled = 1, io_done = 0;
-	int needed_blocks, ret = 0, nr_to_writebump = 0;
+	int needed_blocks, ret = 0;
+	long desired_nr_to_write, nr_to_writebump = 0;
 	loff_t range_start = wbc->range_start;
 	struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);
 
@@ -2771,16 +2826,6 @@ static int ext4_da_writepages(struct address_space *mapping,
 	if (unlikely(sbi->s_mount_flags & EXT4_MF_FS_ABORTED))
 		return -EROFS;
 
-	/*
-	 * Make sure nr_to_write is >= sbi->s_mb_stream_request
-	 * This make sure small files blocks are allocated in
-	 * single attempt. This ensure that small files
-	 * get less fragmented.
-	 */
-	if (wbc->nr_to_write < sbi->s_mb_stream_request) {
-		nr_to_writebump = sbi->s_mb_stream_request - wbc->nr_to_write;
-		wbc->nr_to_write = sbi->s_mb_stream_request;
-	}
 	if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
 		range_whole = 1;
 
@@ -2795,6 +2840,36 @@ static int ext4_da_writepages(struct address_space *mapping,
 	} else
 		index = wbc->range_start >> PAGE_CACHE_SHIFT;
 
+	/*
+	 * This works around two forms of stupidity.  The first is in
+	 * the writeback code, which caps the maximum number of pages
+	 * written to be 1024 pages.  This is wrong on multiple
+	 * levels; different architectues have a different page size,
+	 * which changes the maximum amount of data which gets
+	 * written.  Secondly, 4 megabytes is way too small.  XFS
+	 * forces this value to be 16 megabytes by multiplying
+	 * nr_to_write parameter by four, and then relies on its
+	 * allocator to allocate larger extents to make them
+	 * contiguous.  Unfortunately this brings us to the second
+	 * stupidity, which is that ext4's mballoc code only allocates
+	 * at most 2048 blocks.  So we force contiguous writes up to
+	 * the number of dirty blocks in the inode, or
+	 * sbi->max_contig_writeback_mb whichever is smaller.
+	 */
+	if (!range_cyclic && range_whole)
+		desired_nr_to_write = wbc->nr_to_write * 8;
+	else
+		desired_nr_to_write = ext4_num_dirty_pages(inode, index);
+	if (desired_nr_to_write > (sbi->s_max_contig_writeback_mb << 
+				   (20 - PAGE_CACHE_SHIFT)))
+		desired_nr_to_write = (sbi->s_max_contig_writeback_mb << 
+				       (20 - PAGE_CACHE_SHIFT));
+
+	if (wbc->nr_to_write < desired_nr_to_write) {
+		nr_to_writebump = desired_nr_to_write - wbc->nr_to_write;
+		wbc->nr_to_write = desired_nr_to_write;
+	}
+
 	mpd.wbc = wbc;
 	mpd.inode = mapping->host;
 
@@ -2914,7 +2989,8 @@ retry:
 out_writepages:
 	if (!no_nrwrite_index_update)
 		wbc->no_nrwrite_index_update = 0;
-	wbc->nr_to_write -= nr_to_writebump;
+	if (wbc->nr_to_write > nr_to_writebump)
+		wbc->nr_to_write -= nr_to_writebump;
 	wbc->range_start = range_start;
 	trace_ext4_da_writepages_result(inode, wbc, ret, pages_written);
 	return ret;
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index df539ba..9d04bd9 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -2197,6 +2197,7 @@ EXT4_RW_ATTR_SBI_UI(mb_min_to_scan, s_mb_min_to_scan);
 EXT4_RW_ATTR_SBI_UI(mb_order2_req, s_mb_order2_reqs);
 EXT4_RW_ATTR_SBI_UI(mb_stream_req, s_mb_stream_request);
 EXT4_RW_ATTR_SBI_UI(mb_group_prealloc, s_mb_group_prealloc);
+EXT4_RW_ATTR_SBI_UI(max_contig_writeback, s_max_contig_writeback_mb);
 
 static struct attribute *ext4_attrs[] = {
 	ATTR_LIST(delayed_allocation_blocks),
@@ -2210,6 +2211,7 @@ static struct attribute *ext4_attrs[] = {
 	ATTR_LIST(mb_order2_req),
 	ATTR_LIST(mb_stream_req),
 	ATTR_LIST(mb_group_prealloc),
+	ATTR_LIST(max_contig_writeback),
 	NULL,
 };
 
@@ -2679,6 +2681,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	}
 
 	sbi->s_stripe = ext4_get_stripe_size(sbi);
+	sbi->s_max_contig_writeback_mb = 128;
 
 	/*
 	 * set up enough so that it can read an inode
diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
index c1bd8f1..7c6bbb7 100644
--- a/include/trace/events/ext4.h
+++ b/include/trace/events/ext4.h
@@ -236,6 +236,7 @@ TRACE_EVENT(ext4_da_writepages,
 		__field(	char,	for_kupdate		)
 		__field(	char,	for_reclaim		)
 		__field(	char,	range_cyclic		)
+		__field(       pgoff_t,	writeback_index		)
 	),
 
 	TP_fast_assign(
@@ -249,15 +250,17 @@ TRACE_EVENT(ext4_da_writepages,
 		__entry->for_kupdate	= wbc->for_kupdate;
 		__entry->for_reclaim	= wbc->for_reclaim;
 		__entry->range_cyclic	= wbc->range_cyclic;
+		__entry->writeback_index = inode->i_mapping->writeback_index;
 	),
 
-	TP_printk("dev %s ino %lu nr_to_write %ld pages_skipped %ld range_start %llu range_end %llu nonblocking %d for_kupdate %d for_reclaim %d range_cyclic %d",
+	TP_printk("dev %s ino %lu nr_to_write %ld pages_skipped %ld range_start %llu range_end %llu nonblocking %d for_kupdate %d for_reclaim %d range_cyclic %d writeback_index %lu",
 		  jbd2_dev_to_name(__entry->dev),
 		  (unsigned long) __entry->ino, __entry->nr_to_write,
 		  __entry->pages_skipped, __entry->range_start,
 		  __entry->range_end, __entry->nonblocking,
 		  __entry->for_kupdate, __entry->for_reclaim,
-		  __entry->range_cyclic)
+		  __entry->range_cyclic,
+		  (unsigned long) __entry->writeback_index)
 );
 
 TRACE_EVENT(ext4_da_write_pages,
@@ -309,6 +312,7 @@ TRACE_EVENT(ext4_da_writepages_result,
 		__field(	char,	encountered_congestion	)
 		__field(	char,	more_io			)	
 		__field(	char,	no_nrwrite_index_update )
+		__field(       pgoff_t,	writeback_index		)
 	),
 
 	TP_fast_assign(
@@ -320,14 +324,16 @@ TRACE_EVENT(ext4_da_writepages_result,
 		__entry->encountered_congestion	= wbc->encountered_congestion;
 		__entry->more_io	= wbc->more_io;
 		__entry->no_nrwrite_index_update = wbc->no_nrwrite_index_update;
+		__entry->writeback_index = inode->i_mapping->writeback_index;
 	),
 
-	TP_printk("dev %s ino %lu ret %d pages_written %d pages_skipped %ld congestion %d more_io %d no_nrwrite_index_update %d",
+	TP_printk("dev %s ino %lu ret %d pages_written %d pages_skipped %ld congestion %d more_io %d no_nrwrite_index_update %d writeback_index %lu",
 		  jbd2_dev_to_name(__entry->dev),
 		  (unsigned long) __entry->ino, __entry->ret,
 		  __entry->pages_written, __entry->pages_skipped,
 		  __entry->encountered_congestion, __entry->more_io,
-		  __entry->no_nrwrite_index_update)
+		  __entry->no_nrwrite_index_update,
+		  (unsigned long) __entry->writeback_index)
 );
 
 TRACE_EVENT(ext4_da_write_begin,


* Re: regression in page writeback
  2009-09-28  1:07                                         ` Dave Chinner
  2009-09-28  7:15                                           ` Wu Fengguang
@ 2009-09-28 14:25                                           ` Chris Mason
  2009-09-29 23:39                                             ` Dave Chinner
  1 sibling, 1 reply; 79+ messages in thread
From: Chris Mason @ 2009-09-28 14:25 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Wu Fengguang, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Mon, Sep 28, 2009 at 11:07:00AM +1000, Dave Chinner wrote:
> On Fri, Sep 25, 2009 at 02:45:03PM +0800, Wu Fengguang wrote:
> > On Fri, Sep 25, 2009 at 01:04:13PM +0800, Dave Chinner wrote:
> > > On Thu, Sep 24, 2009 at 08:38:20PM -0400, Chris Mason wrote:
> > > > On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> > > > > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > > > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > > > > The only place that actually honors the congestion flag is pdflush.
> > > > > > > It's trivial to get pdflush backed up and make it sit down without
> > > > > > > making any progress because once the queue congests, pdflush goes away.
> > > > > > 
> > > > > > Right. I guess that's more or less intentional - to give lowest priority
> > > > > > to periodic/background writeback.
> > > > > 
> > > > > IMO, this is the wrong design. Background writeback should
> > > > > have higher CPU/scheduler priority than normal tasks. If there is
> > > > > sufficient dirty pages in the system for background writeback to
> > > > > be active, it should be running *now* to start as much IO as it can
> > > > > without being held up by other, lower priority tasks.
> > > > 
> > > > I'd say that an fsync from mutt or vi should be done at a higher prio
> > > > than a background streaming writer.
> > > 
> > > I don't think you caught everything I said - synchronous IO is
> > > un-throttled.
> > 
> > O_SYNC writes may be un-throttled in theory, however it seems to be
> > throttled in practice:
> > 
> >   generic_file_aio_write
> >     __generic_file_aio_write
> >       generic_file_buffered_write
> >         generic_perform_write
> >           balance_dirty_pages_ratelimited
> >     generic_write_sync
> > 
> > Do you mean some other code path?
> 
> In the context of the setup I was talking about, I meant is that sync
> IO _should_ be unthrottled because it is self-throttling by it's
> very nature. The current code makes no differentiation between the
> two.

This isn't entirely true anymore.  WB_SYNC_ALL is turned into a sync
bio, which is sent down with higher priority.  There may be a few spots
that still need to be changed for it, but it is much better than it was.

re: pageout() being the worst way to do IO, definitely agree.

-chris


* Re: regression in page writeback
  2009-09-28  7:15                                           ` Wu Fengguang
  2009-09-28 13:08                                             ` Christoph Hellwig
@ 2009-09-29  0:15                                             ` Wu Fengguang
  1 sibling, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-29  0:15 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Mon, Sep 28, 2009 at 03:15:07PM +0800, Wu Fengguang wrote:
> On Mon, Sep 28, 2009 at 09:07:00AM +0800, Dave Chinner wrote:
> > 
> > pageout is so horribly inefficient from an IO perspective it is not
> > funny. It is one of the reasons Linux sucks so much when under
> > memory pressure. It basically causes the system to do random 4k
> > writeback of dirty pages (and lumpy reclaim can make it
> > synchronous!). 
> > 
> > pageout needs an enema, and preferably it should defer to background
> > writeback to clean pages. background writeback will clean pages
> > much, much faster than the random crap that pageout spews at the
> > disk right now.
> > 
> > Given that I can basically lock up my 2.6.30-based laptop for 10-15
> > minutes at a time with the disk running flat out in low memory
> > situations simply by starting to copy a large file(*), I think that
> > the way we currently handle dirty page writeback needs a bit of a
> > rethink.
> > 
> > (*) I had this happen 4-5 times last week moving VM images around on
> > my laptop, and it involved the Linux VM switching between pageout
> > and swapping to make more memory available while the copy was was
> > hammering the same drive with dirty pages from foreground writeback.
> > It made for extremely fragmented files when the machine finally
> > recovered because of the non-sequential writeback patterns on the
> > single file being copied.  You can't tell me that this is sane,
> > desirable behaviour, and this is the sort of problem that I want
> > sorted out. I don't believe it can be fixed by maintaining the
> > number of uncoordinated, competing writeback mechanisms we currently
> > have.
> 
> I imagined some lumpy pageout policy would help, but didn't realize
> it was such a severe problem, one that can happen in daily desktop workloads..
> 
> Below is a quick patch. Any comments?

Wow, it's much easier to reuse write_cache_pages for lumpy pageout :)

---
 mm/page-writeback.c |   36 ++++++++++++++++++++++++------------
 mm/shmem.c          |    1 +
 mm/vmscan.c         |    6 ++++++
 3 files changed, 31 insertions(+), 12 deletions(-)

--- linux.orig/mm/vmscan.c	2009-09-29 07:21:51.000000000 +0800
+++ linux/mm/vmscan.c	2009-09-29 07:46:59.000000000 +0800
@@ -344,6 +344,8 @@ typedef enum {
 	PAGE_CLEAN,
 } pageout_t;
 
+#define LUMPY_PAGEOUT_PAGES	(512 * 1024 / PAGE_CACHE_SIZE)
+
 /*
  * pageout is called by shrink_page_list() for each dirty page.
  * Calls ->writepage().
@@ -408,6 +410,10 @@ static pageout_t pageout(struct page *pa
 			return PAGE_ACTIVATE;
 		}
 
+		wbc.range_start = (page->index + 1) << PAGE_CACHE_SHIFT;
+		wbc.nr_to_write = LUMPY_PAGEOUT_PAGES - 1;
+		generic_writepages(mapping, &wbc);
+
 		/*
 		 * Wait on writeback if requested to. This happens when
 		 * direct reclaiming a large contiguous area and the
--- linux.orig/mm/page-writeback.c	2009-09-29 07:33:13.000000000 +0800
+++ linux/mm/page-writeback.c	2009-09-29 08:10:39.000000000 +0800
@@ -799,6 +799,12 @@ retry:
 		if (nr_pages == 0)
 			break;
 
+		if (wbc->for_reclaim && done_index + nr_pages - 1 !=
+					pvec.pages[nr_pages - 1]->index) {
+			pagevec_release(&pvec);
+			break;
+		}
+
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
@@ -852,24 +858,30 @@ continue_unlock:
 			if (!clear_page_dirty_for_io(page))
 				goto continue_unlock;
 
+			/*
+			 * active and unevictable pages will be checked at
+			 * rotate time
+			 */
+			if (wbc->for_reclaim)
+				SetPageReclaim(page);
+
 			ret = (*writepage)(page, wbc, data);
 			if (unlikely(ret)) {
 				if (ret == AOP_WRITEPAGE_ACTIVATE) {
 					unlock_page(page);
 					ret = 0;
-				} else {
-					/*
-					 * done_index is set past this page,
-					 * so media errors will not choke
-					 * background writeout for the entire
-					 * file. This has consequences for
-					 * range_cyclic semantics (ie. it may
-					 * not be suitable for data integrity
-					 * writeout).
-					 */
-					done = 1;
-					break;
 				}
+				/*
+				 * done_index is set past this page,
+				 * so media errors will not choke
+				 * background writeout for the entire
+				 * file. This has consequences for
+				 * range_cyclic semantics (ie. it may
+				 * not be suitable for data integrity
+				 * writeout).
+				 */
+				done = 1;
+				break;
  			}
 
 			if (nr_to_write > 0) {
--- linux.orig/mm/shmem.c	2009-09-29 08:07:22.000000000 +0800
+++ linux/mm/shmem.c	2009-09-29 08:08:02.000000000 +0800
@@ -1103,6 +1103,7 @@ unlock:
 	 */
 	swapcache_free(swap, NULL);
 redirty:
+	wbc->pages_skipped++;
 	set_page_dirty(page);
 	if (wbc->for_reclaim)
 		return AOP_WRITEPAGE_ACTIVATE;	/* Return with page locked */


* Re: regression in page writeback
  2009-09-28 13:08                                             ` Christoph Hellwig
  2009-09-28 14:07                                               ` Theodore Tso
@ 2009-09-29  2:32                                               ` Wu Fengguang
  2009-09-29 14:00                                                 ` Chris Mason
  2009-09-29 14:21                                                 ` Christoph Hellwig
  1 sibling, 2 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-29  2:32 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Dave Chinner, Chris Mason, Andrew Morton, Peter Zijlstra, Li,
	Shaohua, linux-kernel, richard, jens.axboe

On Mon, Sep 28, 2009 at 09:08:04PM +0800, Christoph Hellwig wrote:
> On Mon, Sep 28, 2009 at 03:15:07PM +0800, Wu Fengguang wrote:
> > +		if (!PageActive(page))
> > +			SetPageReclaim(page);
> > +		err = mapping->a_ops->writepage(page, wbc);
> > +		if (err < 0)
> > +			handle_write_error(mapping, page, res);
> > +		if (err == AOP_WRITEPAGE_ACTIVATE) {
> > +			ClearPageReclaim(page);
> > +			res = PAGE_ACTIVATE;
> > +			break;
> > +		}
> 
> This should help a bit for XFS as it historically does multi-page
> writeouts from ->writepages (and apparently btrfs that added some

->writepage ?

> write-around recently?) but not those brave filesystems only
> implementing the multi-page writeout from writepages as designed.

Thanks.  Just tried write_cache_pages(), looks simple. Need to further
convert all aops->writepages to support lumpy pageout :)

> But really, the best would be to leave the writeout to the flusher
> threads and just reclaim the clean pages from the VM.

Yup, that's much larger behavior change, and could be pursued as a
long term goal.

Thanks,
Fengguang


* Re: regression in page writeback
  2009-09-29  2:32                                               ` Wu Fengguang
@ 2009-09-29 14:00                                                 ` Chris Mason
  2009-09-29 14:21                                                 ` Christoph Hellwig
  1 sibling, 0 replies; 79+ messages in thread
From: Chris Mason @ 2009-09-29 14:00 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Christoph Hellwig, Dave Chinner, Andrew Morton, Peter Zijlstra,
	Li, Shaohua, linux-kernel, richard, jens.axboe

On Tue, Sep 29, 2009 at 10:32:27AM +0800, Wu Fengguang wrote:
> On Mon, Sep 28, 2009 at 09:08:04PM +0800, Christoph Hellwig wrote:
> > On Mon, Sep 28, 2009 at 03:15:07PM +0800, Wu Fengguang wrote:
> > > +		if (!PageActive(page))
> > > +			SetPageReclaim(page);
> > > +		err = mapping->a_ops->writepage(page, wbc);
> > > +		if (err < 0)
> > > +			handle_write_error(mapping, page, res);
> > > +		if (err == AOP_WRITEPAGE_ACTIVATE) {
> > > +			ClearPageReclaim(page);
> > > +			res = PAGE_ACTIVATE;
> > > +			break;
> > > +		}
> > 
> > This should help a bit for XFS as it historically does multi-page
> > writeouts from ->writepages (and apparently btrfs that added some
> 
> ->writepage ?
> 
> > write-around recently?) but not those brave filesystems only
> > implementing the multi-page writeout from writepages as designed.
> 
> Thanks.  Just tried write_cache_pages(), looks simple. Need to further
> convert all aops->writepages to support lumpy pageout :)
> 
> > But really, the best would be to leave the writeout to the flusher
> > threads and just reclaim the clean pages from the VM.
> 
> Yup, that's much larger behavior change, and could be pursued as a
> long term goal.

I don't think we can just change kswapd to wait on flusher thread
progress because the flusher thread can happily spend forever writing
pages that can't actually be freed.  Lumpy pageout is a better middle
ground I think.

-chris



* Re: regression in page writeback
  2009-09-29  2:32                                               ` Wu Fengguang
  2009-09-29 14:00                                                 ` Chris Mason
@ 2009-09-29 14:21                                                 ` Christoph Hellwig
  1 sibling, 0 replies; 79+ messages in thread
From: Christoph Hellwig @ 2009-09-29 14:21 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Christoph Hellwig, Dave Chinner, Chris Mason, Andrew Morton,
	Peter Zijlstra, Li, Shaohua, linux-kernel, richard, jens.axboe

On Tue, Sep 29, 2009 at 10:32:27AM +0800, Wu Fengguang wrote:
> > This should help a bit for XFS as it historically does multi-page
> > writeouts from ->writepages (and apparently btrfs that added some
> 
> ->writepage ?

Yes.

> > write-around recently?) but not those brave filesystems only
> > implementing the multi-page writeout from writepages as designed.
> 
> Thanks.  Just tried write_cache_pages(), looks simple. Need to further
> convert all aops->writepages to support lumpy pageout :)

Yeah.  Most ->writepages instances are just copies of write_cache_pages
with local hacks.  If you have any good ideas to consolidate that, it
would be great.  Also, XFS and btrfs do cluster additional pages even
from ->writepage; I wonder how well that interacts with reclaim.
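
For reference, the intended pattern is just the generic walker plus a
per-fs callback -- sketch only, with made-up foo_* names:

	static int foo_writepage_cb(struct page *page,
				    struct writeback_control *wbc, void *data)
	{
		struct address_space *mapping = data;

		/* per-fs clustering/allocation decisions would go here */
		return mapping->a_ops->writepage(page, wbc);
	}

	/* a ->writepages that reuses write_cache_pages() instead of
	 * carrying a private copy of the page-walking loop */
	static int foo_writepages(struct address_space *mapping,
				  struct writeback_control *wbc)
	{
		return write_cache_pages(mapping, wbc, foo_writepage_cb,
					 mapping);
	}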



* Re: regression in page writeback
  2009-09-28 14:25                                           ` Chris Mason
@ 2009-09-29 23:39                                             ` Dave Chinner
  2009-09-30  1:30                                               ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Dave Chinner @ 2009-09-29 23:39 UTC (permalink / raw)
  To: Chris Mason, Wu Fengguang, Andrew Morton, Peter Zijlstra, Li,
	Shaohua, linux-kernel, richard, jens.axboe

On Mon, Sep 28, 2009 at 10:25:24AM -0400, Chris Mason wrote:
> On Mon, Sep 28, 2009 at 11:07:00AM +1000, Dave Chinner wrote:
> > In the context of the setup I was talking about, I meant is that sync
> > IO _should_ be unthrottled because it is self-throttling by it's
> > very nature. The current code makes no differentiation between the
> > two.
> 
> This isn't entirely true anymore.  WB_SYNC_ALL is turned into a sync
> bio, which is sent down with higher priority.  There may be a few spots
> that still need to be changed for it, but it is much better than it was.

Oh, I didn't realise that had changed - when did WRITE_SYNC_PLUG get
introduced? FWIW, I notice that __block_write_full_page(), gfs2, btrfs
and jbd use WRITE_SYNC_PLUG to implement this, but it appears that
filesystems that have their own writeback code (e.g. XFS) have not
been converted (gfs2 and btrfs being the exceptions).
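
The pattern those call sites use is essentially (paraphrased from
2.6.30's __block_write_full_page(), not a verbatim quote):

	int write_op = (wbc->sync_mode == WB_SYNC_ALL) ?
				WRITE_SYNC_PLUG : WRITE;
	...
	submit_bh(write_op, bh);	/* sync writes get priority */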

Oh well, something else that needs tweaking in XFS...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: regression in page writeback
  2009-09-29 23:39                                             ` Dave Chinner
@ 2009-09-30  1:30                                               ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-30  1:30 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Wed, Sep 30, 2009 at 07:39:36AM +0800, Dave Chinner wrote:
> On Mon, Sep 28, 2009 at 10:25:24AM -0400, Chris Mason wrote:
> > On Mon, Sep 28, 2009 at 11:07:00AM +1000, Dave Chinner wrote:
> > > In the context of the setup I was talking about, I meant is that sync
> > > IO _should_ be unthrottled because it is self-throttling by it's
> > > very nature. The current code makes no differentiation between the
> > > two.
> > 
> > This isn't entirely true anymore.  WB_SYNC_ALL is turned into a sync
> > bio, which is sent down with higher priority.  There may be a few spots
> > that still need to be changed for it, but it is much better than it was.
> 
> Oh, I didn't realise that had changed - when did WRITE_SYNC_PLUG get
> introduced?

About 5 months ago, when Linus complained about something similar :)

        http://lkml.org/lkml/2009/4/6/114

Thanks,
Fengguang

> FWIW, I notice that __block_write_full_page(), gfs2, btrfs
> and jbd use WRITE_SYNC_PLUG to implement this, but it appears that
> filesystems that have their own writeback code (e.g. XFS) have not
> been converted (gfs2 and btrfs being the exceptions).
> 
> Oh well, something else that needs tweaking in XFS...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com


* Re: regression in page writeback
  2009-09-28 14:07                                               ` Theodore Tso
@ 2009-09-30  5:26                                                 ` Wu Fengguang
  2009-09-30  5:32                                                   ` Wu Fengguang
  2009-09-30 14:11                                                   ` Theodore Tso
  0 siblings, 2 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-09-30  5:26 UTC (permalink / raw)
  To: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe

Hi Ted,

On Mon, Sep 28, 2009 at 10:07:57PM +0800, Theodore Ts'o wrote:
> On Mon, Sep 28, 2009 at 09:08:04AM -0400, Christoph Hellwig wrote:
> > This should help a bit for XFS as it historically does multi-page
> > writeouts from ->writepages (and apparently btrfs that added some
> > write-around recently?) but not those brave filesystems only
> > implementing the multi-page writeout from writepages as designed.
> 
> Here's the hack which I'm currently working on to work around the
> writeback code limiting writebacks to 1024 pages.  I'm assuming this
> is going to be a short-term hack, assuming the writeback code gets more
> intelligent, but I thought I would throw this into the mix....

> ext4: Adjust ext4_da_writepages() to write out larger contiguous chunks
> 
> Work around problems in the writeback code to force out writebacks in
> larger chunks than just 4mb, which is too small.  This also works
> around limitations in the ext4 block allocator, which can't allocate
> more than 2048 blocks at a time.  So we need to defeat the round-robin
> characteristics of the writeback code and try to write out as many
> blocks of one inode as possible before allowing the writeback code to
> move on to another inode.  We add a new per-filesystem tunable,
> max_contig_writeback_mb, which caps this to a default of 128mb per
> inode.

It's good to increase MAX_WRITEBACK_PAGES; however, I'm afraid
max_contig_writeback_mb may be a burden in the future: either it is not
necessary, or a per-bdi counterpart must be introduced for all
filesystems.

And it would be preferable to handle slow devices automatically with the
increased chunk size, instead of adding another parameter.

I scratched up a patch to demo the ideas collected in recent discussions.
Can you check if it serves your needs? Thanks.

> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
> ---
>  fs/ext4/ext4.h              |    1 +
>  fs/ext4/inode.c             |  100 +++++++++++++++++++++++++++++++++++++-----
>  fs/ext4/super.c             |    3 +
>  include/trace/events/ext4.h |   14 ++++--
>  4 files changed, 102 insertions(+), 16 deletions(-)
> 
> diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
> index e227eea..9f99427 100644
> --- a/fs/ext4/ext4.h
> +++ b/fs/ext4/ext4.h
> @@ -942,6 +942,7 @@ struct ext4_sb_info {
>  	unsigned int s_mb_stats;
>  	unsigned int s_mb_order2_reqs;
>  	unsigned int s_mb_group_prealloc;
> +	unsigned int s_max_contig_writeback_mb;
>  	/* where last allocation was done - for stream allocation */
>  	unsigned long s_mb_last_group;
>  	unsigned long s_mb_last_start;
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 5fb72a9..9e0acb7 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -1145,6 +1145,60 @@ static int check_block_validity(struct inode *inode, const char *msg,
>  }
>  
>  /*
> + * Return the number of dirty pages in the given inode starting at
> + * page frame idx.
> + */
> +static pgoff_t ext4_num_dirty_pages(struct inode *inode, pgoff_t idx)
> +{

ext4_num_dirty_pages() may be improved to take a "max_pages" parameter
to avoid unnecessary work.
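
Something like this on top of your patch, perhaps (illustrative only;
max_pages would come from whatever chunk size the caller is about to
use):

-static pgoff_t ext4_num_dirty_pages(struct inode *inode, pgoff_t idx)
+static pgoff_t ext4_num_dirty_pages(struct inode *inode, pgoff_t idx,
+				    pgoff_t max_pages)
 ...
 			idx++;
 			num++;
+			if (num >= max_pages) {
+				done = 1;
+				break;
+			}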

> +	struct address_space *mapping = inode->i_mapping;
> +	pgoff_t	index;
> +	struct pagevec pvec;
> +	pgoff_t num = 0;
> +	int i, nr_pages, done = 0;
> +
> +	pagevec_init(&pvec, 0);
> +
> +	while (!done) {
> +		index = idx;
> +		nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
> +					      PAGECACHE_TAG_DIRTY,
> +					      (pgoff_t)PAGEVEC_SIZE);
> +		if (nr_pages == 0)
> +			break;
> +		for (i = 0; i < nr_pages; i++) {
> +			struct page *page = pvec.pages[i];
> +			struct buffer_head *bh, *head;
> +
> +			lock_page(page);
> +			if (unlikely(page->mapping != mapping) ||
> +			    !PageDirty(page) ||
> +			    PageWriteback(page) ||
> +			    page->index != idx) {
> +				done = 1;
> +				unlock_page(page);
> +				break;
> +			}
> +			head = page_buffers(page);
> +			bh = head;
> +			do {
> +				if (!buffer_delay(bh) &&
> +				    !buffer_unwritten(bh)) {
> +					done = 1;
> +					break;
> +				}
> +			} while ((bh = bh->b_this_page) != head);
> +			unlock_page(page);

I guess a rough estimation will suffice, hehe.
There are no guarantees anyway.

> +			if (done)
> +				break;
> +			idx++;
> +			num++;
> +		}
> +		pagevec_release(&pvec);
> +	}
> +	return num;
> +}
> +
> +/*
>   * The ext4_get_blocks() function tries to look up the requested blocks,
>   * and returns if the blocks are already mapped.
>   *
> @@ -2744,7 +2798,8 @@ static int ext4_da_writepages(struct address_space *mapping,
>  	int pages_written = 0;
>  	long pages_skipped;
>  	int range_cyclic, cycled = 1, io_done = 0;
> -	int needed_blocks, ret = 0, nr_to_writebump = 0;
> +	int needed_blocks, ret = 0;
> +	long desired_nr_to_write, nr_to_writebump = 0;
>  	loff_t range_start = wbc->range_start;
>  	struct ext4_sb_info *sbi = EXT4_SB(mapping->host->i_sb);
>  
> @@ -2771,16 +2826,6 @@ static int ext4_da_writepages(struct address_space *mapping,
>  	if (unlikely(sbi->s_mount_flags & EXT4_MF_FS_ABORTED))
>  		return -EROFS;
>  
> -	/*
> -	 * Make sure nr_to_write is >= sbi->s_mb_stream_request
> -	 * This make sure small files blocks are allocated in
> -	 * single attempt. This ensure that small files
> -	 * get less fragmented.
> -	 */
> -	if (wbc->nr_to_write < sbi->s_mb_stream_request) {
> -		nr_to_writebump = sbi->s_mb_stream_request - wbc->nr_to_write;
> -		wbc->nr_to_write = sbi->s_mb_stream_request;
> -	}
>  	if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
>  		range_whole = 1;
>  
> @@ -2795,6 +2840,36 @@ static int ext4_da_writepages(struct address_space *mapping,
>  	} else
>  		index = wbc->range_start >> PAGE_CACHE_SHIFT;
>  
> +	/*
> +	 * This works around two forms of stupidity.  The first is in
> +	 * the writeback code, which caps the maximum number of pages
> +	 * written to be 1024 pages.  This is wrong on multiple
> +	 * levels; different architectures have a different page size,
> +	 * which changes the maximum amount of data which gets

Good point.

> +	 * written.  Secondly, 4 megabytes is way too small.  XFS
> +	 * forces this value to be 16 megabytes by multiplying
> +	 * nr_to_write parameter by four, and then relies on its
> +	 * allocator to allocate larger extents to make them
> +	 * contiguous.  Unfortunately this brings us to the second

Hopefully we can make these hacks unnecessary :)

> +	 * stupidity, which is that ext4's mballoc code only allocates
> +	 * at most 2048 blocks.  So we force contiguous writes up to
> +	 * the number of dirty blocks in the inode, or
> +	 * sbi->max_contig_writeback_mb whichever is smaller.
> +	 */
> +	if (!range_cyclic && range_whole)
> +		desired_nr_to_write = wbc->nr_to_write * 8;
> +	else
> +		desired_nr_to_write = ext4_num_dirty_pages(inode, index);
> +	if (desired_nr_to_write > (sbi->s_max_contig_writeback_mb << 
> +				   (20 - PAGE_CACHE_SHIFT)))
> +		desired_nr_to_write = (sbi->s_max_contig_writeback_mb << 
> +				       (20 - PAGE_CACHE_SHIFT));
> +
> +	if (wbc->nr_to_write < desired_nr_to_write) {
> +		nr_to_writebump = desired_nr_to_write - wbc->nr_to_write;
> +		wbc->nr_to_write = desired_nr_to_write;
> +	}
> +
>  	mpd.wbc = wbc;
>  	mpd.inode = mapping->host;
>  
> @@ -2914,7 +2989,8 @@ retry:
>  out_writepages:
>  	if (!no_nrwrite_index_update)
>  		wbc->no_nrwrite_index_update = 0;
> -	wbc->nr_to_write -= nr_to_writebump;
> +	if (wbc->nr_to_write > nr_to_writebump)
> +		wbc->nr_to_write -= nr_to_writebump;
>  	wbc->range_start = range_start;
>  	trace_ext4_da_writepages_result(inode, wbc, ret, pages_written);
>  	return ret;

Thanks,
Fengguang
---
writeback: bump up writeback chunk size to 128MB

Adjust the writeback call stack to support larger writeback chunk size.

- make wbc.nr_to_write a per-file parameter
- init wbc.nr_to_write with MAX_WRITEBACK_PAGES=128MB
  (proposed by Ted)
- add wbc.nr_segments to limit seeks inside sparsely dirtied file
  (proposed by Chris)
- add wbc.timeout which will be used to control IO submission time
  either per-file or globally.
  
The wbc.nr_segments is now determined purely by logical page index
distance: if two pages are more than 1MB apart, they start a new segment.

Filesystems could do this better with real extent knowledges.
One possible scheme is to record the previous page index in
wbc.writeback_index, and let ->writepage compare if the current and
previous pages lie in the same extent, and decrease wbc.nr_segments
accordingly. Care should be taken to avoid double decreases in writepage
and write_cache_pages.

The wbc.timeout (when used per-file) is mainly a safeguard against slow
devices, which may take too long to sync 128MB of data.

The wbc.timeout (when used globally) could be useful when we decide to
do two sync scans on dirty pages and dirty metadata. XFS could say:
please return to sync dirty metadata after 10s. Would need another
b_io_metadata queue, but that's possible.

This work depends on the balance_dirty_pages() wait queue patch.

CC: Theodore Ts'o <tytso@mit.edu>
CC: Chris Mason <chris.mason@oracle.com>
CC: Dave Chinner <david@fromorbit.com> 
CC: Christoph Hellwig <hch@infradead.org>
CC: Jan Kara <jack@suse.cz> 
CC: Peter Zijlstra <a.p.zijlstra@chello.nl> 
CC: Jens Axboe <jens.axboe@oracle.com> 
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c         |   60 ++++++++++++++++++++++--------------
 fs/jbd2/commit.c          |    1 
 include/linux/writeback.h |   15 +++++++--
 mm/backing-dev.c          |    2 -
 mm/filemap.c              |    1 
 mm/page-writeback.c       |   13 +++++++
 6 files changed, 67 insertions(+), 25 deletions(-)

--- linux.orig/fs/fs-writeback.c	2009-09-30 12:17:15.000000000 +0800
+++ linux/fs/fs-writeback.c	2009-09-30 12:17:17.000000000 +0800
@@ -31,6 +31,15 @@
 #define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
 /*
+ * The maximum number of pages to writeout in a single bdi flush/kupdate
+ * operation.  We do this so we don't hold I_SYNC against an inode for
+ * enormous amounts of time, which would block a userspace task which has
+ * been forced to throttle against that inode.  Also, the code reevaluates
+ * the dirty each time it has written this many pages.
+ */
+#define MAX_WRITEBACK_PAGES     (128 * (20 - PAGE_CACHE_SHIFT))
+
+/*
  * We don't actually have pdflush, but this one is exported though /proc...
  */
 int nr_pdflush_threads;
@@ -540,6 +549,14 @@ writeback_single_inode(struct inode *ino
 
 	spin_unlock(&inode_lock);
 
+	if (wbc->for_kupdate || wbc->for_background) {
+		wbc->nr_segments = 1;	/* TODO: test blk_queue_nonrot() */
+		wbc->timeout = HZ;
+	} else {
+		wbc->nr_segments = LONG_MAX;
+		wbc->timeout = 0;
+	}
+
 	ret = do_writepages(mapping, wbc);
 
 	/* Don't write the inode if only I_DIRTY_PAGES was set */
@@ -564,7 +581,9 @@ writeback_single_inode(struct inode *ino
 			 * sometimes bales out without doing anything.
 			 */
 			inode->i_state |= I_DIRTY_PAGES;
-			if (wbc->nr_to_write <= 0) {
+			if (wbc->nr_to_write <= 0 ||
+			    wbc->nr_segments <= 0 ||
+			    wbc->timeout < 0) {
 				/*
 				 * slice used up: queue for next turn
 				 */
@@ -659,12 +678,17 @@ pinned:
 	return 0;
 }
 
-static void writeback_inodes_wb(struct bdi_writeback *wb,
+static long writeback_inodes_wb(struct bdi_writeback *wb,
 				struct writeback_control *wbc)
 {
 	struct super_block *sb = wbc->sb, *pin_sb = NULL;
 	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
 	const unsigned long start = jiffies;	/* livelock avoidance */
+	unsigned long stop_time = 0;
+	long wrote = 0;
+
+	if (wbc->timeout)
+		stop_time = (start + wbc->timeout) | 1;
 
 	spin_lock(&inode_lock);
 
@@ -721,7 +745,9 @@ static void writeback_inodes_wb(struct b
 		BUG_ON(inode->i_state & (I_FREEING | I_CLEAR));
 		__iget(inode);
 		pages_skipped = wbc->pages_skipped;
+		wbc->nr_to_write = MAX_WRITEBACK_PAGES;
 		writeback_single_inode(inode, wbc);
+		wrote += MAX_WRITEBACK_PAGES - wbc->nr_to_write;
 		if (wbc->pages_skipped != pages_skipped) {
 			/*
 			 * writeback is not making progress due to locked
@@ -739,12 +765,15 @@ static void writeback_inodes_wb(struct b
 		}
 		if (!list_empty(&wb->b_more_io))
 			wbc->more_io = 1;
+		if (stop_time && time_after(jiffies, stop_time))
+			break;
 	}
 
 	unpin_sb_for_writeback(&pin_sb);
 
 	spin_unlock(&inode_lock);
 	/* Leave any unwritten inodes on b_io */
+	return wrote;
 }
 
 void writeback_inodes_wbc(struct writeback_control *wbc)
@@ -754,15 +783,6 @@ void writeback_inodes_wbc(struct writeba
 	writeback_inodes_wb(&bdi->wb, wbc);
 }
 
-/*
- * The maximum number of pages to writeout in a single bdi flush/kupdate
- * operation.  We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode.  Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES     1024
-
 static inline bool over_bground_thresh(void)
 {
 	unsigned long background_thresh, dirty_thresh;
@@ -797,10 +817,12 @@ static long wb_writeback(struct bdi_writ
 		.sync_mode		= args->sync_mode,
 		.older_than_this	= NULL,
 		.for_kupdate		= args->for_kupdate,
+		.for_background		= args->for_background,
 		.range_cyclic		= args->range_cyclic,
 	};
 	unsigned long oldest_jif;
 	long wrote = 0;
+	long nr;
 	struct inode *inode;
 
 	if (wbc.for_kupdate) {
@@ -834,26 +856,20 @@ static long wb_writeback(struct bdi_writ
 
 		wbc.more_io = 0;
 		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
 		wbc.pages_skipped = 0;
-		writeback_inodes_wb(wb, &wbc);
-		args->nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		nr = writeback_inodes_wb(wb, &wbc);
+		args->nr_pages -= nr;
+		wrote += nr;
 
 		/*
-		 * If we consumed everything, see if we have more
-		 */
-		if (wbc.nr_to_write <= 0)
-			continue;
-		/*
-		 * Didn't write everything and we don't have more IO, bail
+		 * Bail if no more IO
 		 */
 		if (!wbc.more_io)
 			break;
 		/*
 		 * Did we write something? Try for more
 		 */
-		if (wbc.nr_to_write < MAX_WRITEBACK_PAGES)
+		if (nr)
 			continue;
 		/*
 		 * Nothing written. Wait for some inode to
--- linux.orig/include/linux/writeback.h	2009-09-30 12:13:00.000000000 +0800
+++ linux/include/linux/writeback.h	2009-09-30 12:17:17.000000000 +0800
@@ -32,10 +32,15 @@ struct writeback_control {
 	struct super_block *sb;		/* if !NULL, only write inodes from
 					   this super_block */
 	enum writeback_sync_modes sync_mode;
+	int timeout;
 	unsigned long *older_than_this;	/* If !NULL, only write back inodes
 					   older than this */
-	long nr_to_write;		/* Write this many pages, and decrement
-					   this for each page written */
+	long nr_to_write;		/* Max pages to write per file, and
+					   decrement this for each page written
+					 */
+	long nr_segments;		/* Max page segments to write per file,
+					   this is also a count down value
+					 */
 	long pages_skipped;		/* Pages which were not written */
 
 	/*
@@ -49,6 +54,7 @@ struct writeback_control {
 	unsigned nonblocking:1;		/* Don't get stuck on request queues */
 	unsigned encountered_congestion:1; /* An output: a queue is full */
 	unsigned for_kupdate:1;		/* A kupdate writeback */
+	unsigned for_background:1;	/* A background writeback */
 	unsigned for_reclaim:1;		/* Invoked from the page allocator */
 	unsigned range_cyclic:1;	/* range_start is cyclic */
 	unsigned stop_on_wrap:1;	/* stop when write index is to wrap */
@@ -65,6 +71,11 @@ struct writeback_control {
 };
 
 /*
+ * if two page ranges are more than 1MB apart, they are taken as two segments.
+ */
+#define WB_SEGMENT_DIST		(1024 >> (PAGE_CACHE_SHIFT - 10))
+
+/*
  * fs/fs-writeback.c
  */	
 struct bdi_writeback;
--- linux.orig/mm/filemap.c	2009-09-30 12:13:00.000000000 +0800
+++ linux/mm/filemap.c	2009-09-30 12:17:17.000000000 +0800
@@ -216,6 +216,7 @@ int __filemap_fdatawrite_range(struct ad
 	struct writeback_control wbc = {
 		.sync_mode = sync_mode,
 		.nr_to_write = LONG_MAX,
+		.nr_segments = LONG_MAX,
 		.range_start = start,
 		.range_end = end,
 	};
--- linux.orig/mm/page-writeback.c	2009-09-30 12:17:15.000000000 +0800
+++ linux/mm/page-writeback.c	2009-09-30 12:17:17.000000000 +0800
@@ -765,6 +765,7 @@ int write_cache_pages(struct address_spa
 	int cycled;
 	int range_whole = 0;
 	long nr_to_write = wbc->nr_to_write;
+	unsigned long start_time = jiffies;
 
 	pagevec_init(&pvec, 0);
 	if (wbc->range_cyclic) {
@@ -818,6 +819,12 @@ retry:
 				break;
 			}
 
+			if (done_index + WB_SEGMENT_DIST > page->index &&
+			    --wbc->nr_segments <= 0) {
+				done = 1;
+				break;
+			}
+
 			done_index = page->index + 1;
 
 			lock_page(page);
@@ -899,6 +906,12 @@ continue_unlock:
 		}
 		pagevec_release(&pvec);
 		cond_resched();
+		if (wbc->timeout &&
+		    time_after(jiffies, start_time + wbc->timeout)) {
+			wbc->timeout = -1;
+			done = 1;
+			break;
+		}
 	}
 	if (wbc->stop_on_wrap)
 		done_index = 0;
--- linux.orig/fs/jbd2/commit.c	2009-09-30 12:13:00.000000000 +0800
+++ linux/fs/jbd2/commit.c	2009-09-30 12:17:17.000000000 +0800
@@ -219,6 +219,7 @@ static int journal_submit_inode_data_buf
 	struct writeback_control wbc = {
 		.sync_mode =  WB_SYNC_ALL,
 		.nr_to_write = mapping->nrpages * 2,
+		.nr_segments = LONG_MAX,
 		.range_start = 0,
 		.range_end = i_size_read(mapping->host),
 	};
--- linux.orig/mm/backing-dev.c	2009-09-30 12:17:26.000000000 +0800
+++ linux/mm/backing-dev.c	2009-09-30 12:17:38.000000000 +0800
@@ -336,7 +336,7 @@ static void bdi_flush_io(struct backing_
 		.sync_mode		= WB_SYNC_NONE,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
-		.nr_to_write		= 1024,
+		.timeout		= HZ,
 	};
 
 	writeback_inodes_wbc(&wbc);


* Re: regression in page writeback
  2009-09-30  5:26                                                 ` Wu Fengguang
@ 2009-09-30  5:32                                                   ` Wu Fengguang
  2009-10-01 22:17                                                     ` Jan Kara
  2009-09-30 14:11                                                   ` Theodore Tso
  1 sibling, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-09-30  5:32 UTC (permalink / raw)
  To: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe, Jan Kara

On Wed, Sep 30, 2009 at 01:26:57PM +0800, Wu Fengguang wrote:

> +#define MAX_WRITEBACK_PAGES     (128 * (20 - PAGE_CACHE_SHIFT))

Sorry for the silly mistake!

---
writeback: bump up writeback chunk size to 128MB

Adjust the writeback call stack to support larger writeback chunk size.

- make wbc.nr_to_write a per-file parameter
- init wbc.nr_to_write with MAX_WRITEBACK_PAGES=128MB
  (proposed by Ted)
- add wbc.nr_segments to limit seeks inside sparsely dirtied file
  (proposed by Chris)
- add wbc.timeout which will be used to control IO submission time
  either per-file or globally.
  
The wbc.nr_segments is now determined purely by logical page index
distance: if two pages are more than 1MB apart, they start a new segment.

Filesystems could do this better with real extent knowledges.
One possible scheme is to record the previous page index in
wbc.writeback_index, and let ->writepage compare if the current and
previous pages lie in the same extent, and decrease wbc.nr_segments
accordingly. Care should be taken to avoid double decreases in writepage
and write_cache_pages.

The wbc.timeout (when used per-file) is mainly a safeguard against slow
devices, which may take too long to sync 128MB of data.

The wbc.timeout (when used globally) could be useful when we decide to
do two sync scans on dirty pages and dirty metadata. XFS could say:
please return to sync dirty metadata after 10s. Would need another
b_io_metadata queue, but that's possible.

This work depends on the balance_dirty_pages() wait queue patch.

CC: Theodore Ts'o <tytso@mit.edu>
CC: Chris Mason <chris.mason@oracle.com>
CC: Dave Chinner <david@fromorbit.com> 
CC: Christoph Hellwig <hch@infradead.org>
CC: Jan Kara <jack@suse.cz> 
CC: Peter Zijlstra <a.p.zijlstra@chello.nl> 
CC: Jens Axboe <jens.axboe@oracle.com> 
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c         |   60 ++++++++++++++++++++++--------------
 fs/jbd2/commit.c          |    1 
 include/linux/writeback.h |   15 +++++++--
 mm/backing-dev.c          |    2 -
 mm/filemap.c              |    1 
 mm/page-writeback.c       |   13 +++++++
 6 files changed, 67 insertions(+), 25 deletions(-)

--- linux.orig/fs/fs-writeback.c	2009-09-30 12:17:15.000000000 +0800
+++ linux/fs/fs-writeback.c	2009-09-30 13:29:24.000000000 +0800
@@ -31,6 +31,15 @@
 #define inode_to_bdi(inode)	((inode)->i_mapping->backing_dev_info)
 
 /*
+ * The maximum number of pages to writeout in a single bdi flush/kupdate
+ * operation.  We do this so we don't hold I_SYNC against an inode for
+ * enormous amounts of time, which would block a userspace task which has
+ * been forced to throttle against that inode.  Also, the code reevaluates
+ * the dirty each time it has written this many pages.
+ */
+#define MAX_WRITEBACK_PAGES     (128 << (20 - PAGE_CACHE_SHIFT))
+
+/*
  * We don't actually have pdflush, but this one is exported though /proc...
  */
 int nr_pdflush_threads;
@@ -540,6 +549,14 @@ writeback_single_inode(struct inode *ino
 
 	spin_unlock(&inode_lock);
 
+	if (wbc->for_kupdate || wbc->for_background) {
+		wbc->nr_segments = 1;	/* TODO: test blk_queue_nonrot() */
+		wbc->timeout = HZ;
+	} else {
+		wbc->nr_segments = LONG_MAX;
+		wbc->timeout = 0;
+	}
+
 	ret = do_writepages(mapping, wbc);
 
 	/* Don't write the inode if only I_DIRTY_PAGES was set */
@@ -564,7 +581,9 @@ writeback_single_inode(struct inode *ino
 			 * sometimes bales out without doing anything.
 			 */
 			inode->i_state |= I_DIRTY_PAGES;
-			if (wbc->nr_to_write <= 0) {
+			if (wbc->nr_to_write <= 0 ||
+			    wbc->nr_segments <= 0 ||
+			    wbc->timeout < 0) {
 				/*
 				 * slice used up: queue for next turn
 				 */
@@ -659,12 +678,17 @@ pinned:
 	return 0;
 }
 
-static void writeback_inodes_wb(struct bdi_writeback *wb,
+static long writeback_inodes_wb(struct bdi_writeback *wb,
 				struct writeback_control *wbc)
 {
 	struct super_block *sb = wbc->sb, *pin_sb = NULL;
 	const int is_blkdev_sb = sb_is_blkdev_sb(sb);
 	const unsigned long start = jiffies;	/* livelock avoidance */
+	unsigned long stop_time = 0;
+	long wrote = 0;
+
+	if (wbc->timeout)
+		stop_time = (start + wbc->timeout) | 1;
 
 	spin_lock(&inode_lock);
 
@@ -721,7 +745,9 @@ static void writeback_inodes_wb(struct b
 		BUG_ON(inode->i_state & (I_FREEING | I_CLEAR));
 		__iget(inode);
 		pages_skipped = wbc->pages_skipped;
+		wbc->nr_to_write = MAX_WRITEBACK_PAGES;
 		writeback_single_inode(inode, wbc);
+		wrote += MAX_WRITEBACK_PAGES - wbc->nr_to_write;
 		if (wbc->pages_skipped != pages_skipped) {
 			/*
 			 * writeback is not making progress due to locked
@@ -739,12 +765,15 @@ static void writeback_inodes_wb(struct b
 		}
 		if (!list_empty(&wb->b_more_io))
 			wbc->more_io = 1;
+		if (stop_time && time_after(jiffies, stop_time))
+			break;
 	}
 
 	unpin_sb_for_writeback(&pin_sb);
 
 	spin_unlock(&inode_lock);
 	/* Leave any unwritten inodes on b_io */
+	return wrote;
 }
 
 void writeback_inodes_wbc(struct writeback_control *wbc)
@@ -754,15 +783,6 @@ void writeback_inodes_wbc(struct writeba
 	writeback_inodes_wb(&bdi->wb, wbc);
 }
 
-/*
- * The maximum number of pages to writeout in a single bdi flush/kupdate
- * operation.  We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode.  Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES     1024
-
 static inline bool over_bground_thresh(void)
 {
 	unsigned long background_thresh, dirty_thresh;
@@ -797,10 +817,12 @@ static long wb_writeback(struct bdi_writ
 		.sync_mode		= args->sync_mode,
 		.older_than_this	= NULL,
 		.for_kupdate		= args->for_kupdate,
+		.for_background		= args->for_background,
 		.range_cyclic		= args->range_cyclic,
 	};
 	unsigned long oldest_jif;
 	long wrote = 0;
+	long nr;
 	struct inode *inode;
 
 	if (wbc.for_kupdate) {
@@ -834,26 +856,20 @@ static long wb_writeback(struct bdi_writ
 
 		wbc.more_io = 0;
 		wbc.encountered_congestion = 0;
-		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
 		wbc.pages_skipped = 0;
-		writeback_inodes_wb(wb, &wbc);
-		args->nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
-		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
+		nr = writeback_inodes_wb(wb, &wbc);
+		args->nr_pages -= nr;
+		wrote += nr;
 
 		/*
-		 * If we consumed everything, see if we have more
-		 */
-		if (wbc.nr_to_write <= 0)
-			continue;
-		/*
-		 * Didn't write everything and we don't have more IO, bail
+		 * Bail if no more IO
 		 */
 		if (!wbc.more_io)
 			break;
 		/*
 		 * Did we write something? Try for more
 		 */
-		if (wbc.nr_to_write < MAX_WRITEBACK_PAGES)
+		if (nr)
 			continue;
 		/*
 		 * Nothing written. Wait for some inode to
--- linux.orig/include/linux/writeback.h	2009-09-30 12:13:00.000000000 +0800
+++ linux/include/linux/writeback.h	2009-09-30 12:17:17.000000000 +0800
@@ -32,10 +32,15 @@ struct writeback_control {
 	struct super_block *sb;		/* if !NULL, only write inodes from
 					   this super_block */
 	enum writeback_sync_modes sync_mode;
+	int timeout;
 	unsigned long *older_than_this;	/* If !NULL, only write back inodes
 					   older than this */
-	long nr_to_write;		/* Write this many pages, and decrement
-					   this for each page written */
+	long nr_to_write;		/* Max pages to write per file, and
+					   decrement this for each page written
+					 */
+	long nr_segments;		/* Max page segments to write per file,
+					   this is also a count down value
+					 */
 	long pages_skipped;		/* Pages which were not written */
 
 	/*
@@ -49,6 +54,7 @@ struct writeback_control {
 	unsigned nonblocking:1;		/* Don't get stuck on request queues */
 	unsigned encountered_congestion:1; /* An output: a queue is full */
 	unsigned for_kupdate:1;		/* A kupdate writeback */
+	unsigned for_background:1;	/* A background writeback */
 	unsigned for_reclaim:1;		/* Invoked from the page allocator */
 	unsigned range_cyclic:1;	/* range_start is cyclic */
 	unsigned stop_on_wrap:1;	/* stop when write index is to wrap */
@@ -65,6 +71,11 @@ struct writeback_control {
 };
 
 /*
+ * if two page ranges are more than 1MB apart, they are taken as two segments.
+ */
+#define WB_SEGMENT_DIST		(1024 >> (PAGE_CACHE_SHIFT - 10))
+
+/*
  * fs/fs-writeback.c
  */	
 struct bdi_writeback;
--- linux.orig/mm/filemap.c	2009-09-30 12:13:00.000000000 +0800
+++ linux/mm/filemap.c	2009-09-30 12:17:17.000000000 +0800
@@ -216,6 +216,7 @@ int __filemap_fdatawrite_range(struct ad
 	struct writeback_control wbc = {
 		.sync_mode = sync_mode,
 		.nr_to_write = LONG_MAX,
+		.nr_segments = LONG_MAX,
 		.range_start = start,
 		.range_end = end,
 	};
--- linux.orig/mm/page-writeback.c	2009-09-30 12:17:15.000000000 +0800
+++ linux/mm/page-writeback.c	2009-09-30 12:17:17.000000000 +0800
@@ -765,6 +765,7 @@ int write_cache_pages(struct address_spa
 	int cycled;
 	int range_whole = 0;
 	long nr_to_write = wbc->nr_to_write;
+	unsigned long start_time = jiffies;
 
 	pagevec_init(&pvec, 0);
 	if (wbc->range_cyclic) {
@@ -818,6 +819,12 @@ retry:
 				break;
 			}
 
+			if (done_index + WB_SEGMENT_DIST < page->index &&
+			    --wbc->nr_segments <= 0) {
+				done = 1;
+				break;
+			}
+
 			done_index = page->index + 1;
 
 			lock_page(page);
@@ -899,6 +906,12 @@ continue_unlock:
 		}
 		pagevec_release(&pvec);
 		cond_resched();
+		if (wbc->timeout &&
+		    time_after(jiffies, start_time + wbc->timeout)) {
+			wbc->timeout = -1;
+			done = 1;
+			break;
+		}
 	}
 	if (wbc->stop_on_wrap)
 		done_index = 0;
--- linux.orig/fs/jbd2/commit.c	2009-09-30 12:13:00.000000000 +0800
+++ linux/fs/jbd2/commit.c	2009-09-30 12:17:17.000000000 +0800
@@ -219,6 +219,7 @@ static int journal_submit_inode_data_buf
 	struct writeback_control wbc = {
 		.sync_mode =  WB_SYNC_ALL,
 		.nr_to_write = mapping->nrpages * 2,
+		.nr_segments = LONG_MAX,
 		.range_start = 0,
 		.range_end = i_size_read(mapping->host),
 	};
--- linux.orig/mm/backing-dev.c	2009-09-30 12:17:26.000000000 +0800
+++ linux/mm/backing-dev.c	2009-09-30 12:17:38.000000000 +0800
@@ -336,7 +336,7 @@ static void bdi_flush_io(struct backing_
 		.sync_mode		= WB_SYNC_NONE,
 		.older_than_this	= NULL,
 		.range_cyclic		= 1,
-		.nr_to_write		= 1024,
+		.timeout		= HZ,
 	};
 
 	writeback_inodes_wbc(&wbc);


* Re: regression in page writeback
  2009-09-30  5:26                                                 ` Wu Fengguang
  2009-09-30  5:32                                                   ` Wu Fengguang
@ 2009-09-30 14:11                                                   ` Theodore Tso
  2009-10-01 15:14                                                     ` Wu Fengguang
  1 sibling, 1 reply; 79+ messages in thread
From: Theodore Tso @ 2009-09-30 14:11 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Christoph Hellwig, Dave Chinner, Chris Mason, Andrew Morton,
	Peter Zijlstra, Li, Shaohua, linux-kernel, richard, jens.axboe

On Wed, Sep 30, 2009 at 01:26:57PM +0800, Wu Fengguang wrote:
> It's good to increase MAX_WRITEBACK_PAGES, however I'm afraid
> max_contig_writeback_mb may be a burden in future: either it is not
> necessary, or a per-bdi counterpart must be introduced for all
> filesystems.

The per-filesystem tunable was just a short-term hack; the reason why
I did it that way was it was clear that a global tunable wouldn't fly,
and rightly so --- what might be suitable for a slow USB stick might
be very different than a super-fast RAID array, and someone might very
well have both on the same system.

> And it's preferred to automatically handle slow devices well with the
> increased chunk size, instead of adding another parameter.

Agreed; long-term what we probably need is something which is
automatically tunable.  My thinking was that we should tune the
initial nr_to_write parameter based on how many blocks could be
written in some time interval, which is tunable.  So if we decide that
1 second is a suitable time period to be writing out one inode's dirty
pages, then for a fast server-class SATA disk, we might want to set
nr_to_write to be around 128MB worth of pages.  For a laptop SATA
disk, it might be around 64MB, and for a really slow USB stick, it
might be more like 16MB.  For a super-fast enterprise RAID array, 128MB
might be too small!

If we get timing and/or congestion information from the block layer,
it wouldn't be hard to figure out the optimal number of pages that
should be sent down to the filesystem, and to tune this automatically.
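
To make that concrete, here is a rough sketch of the kind of helper I
have in mind (purely illustrative; avg_write_bandwidth is a
hypothetical per-bdi field fed from I/O completion timings, not
something that exists today):

	/*
	 * Size nr_to_write so that one inode gets roughly
	 * interval_secs seconds worth of writeback on this device.
	 */
	static long bdi_nr_to_write(struct backing_dev_info *bdi,
				    int interval_secs)
	{
		/* avg_write_bandwidth is in bytes/sec */
		long pages = (bdi->avg_write_bandwidth >> PAGE_SHIFT) *
				interval_secs;

		/* clamp to something sane: 4MB .. 1GB worth of pages */
		return clamp_t(long, pages,
			       4L << (20 - PAGE_SHIFT),
			       1L << (30 - PAGE_SHIFT));
	}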

> I scratched up a patch to demo the ideas collected in recent discussions.
> Can you check if it serves your needs? Thanks.

Sure, I'll definitely play with it, thanks.

> The wbc.timeout (when used per-file) is mainly a safeguard against slow
> devices, which may take too long to sync 128MB of data.

Maybe I'm missing something, but I don't think the wbc.timeout
approach is sufficient.  Consider the scenario of someone who is
ripping a DVD disc to an 8 gig USB stick.  The USB stick will be very
slow, but since the file is contiguous the filesystem will very
happily try to push it out there 128MB at a time, and wbc.timeout
value isn't really going to help since a single call to writepages
could easily cause 128MB worth of data to be streamed out to the USB
stick.

This is why the MAX_WRITEBACK_PAGES really needs to be tuned on a
per-bdi basis; either manually, via a sysfs tunable, or automatically,
by auto-tuning based on how fast the storage device is or by some kind
of congestion-based approach.  This is certainly the best long-term
solution; my concern was that it might take a long-time for us to get
the auto-tunable just right, so in the meantime I added a
per-mounted-filesystem tunable and put the hack in the filesystem
layer.  I would like nothing better than to rip it out, once we have a
long-term solution.

Regards,

							- Ted



* Re: regression in page writeback
  2009-09-30 14:11                                                   ` Theodore Tso
@ 2009-10-01 15:14                                                     ` Wu Fengguang
  2009-10-01 21:54                                                       ` Theodore Tso
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-10-01 15:14 UTC (permalink / raw)
  To: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe

Ted,

On Wed, Sep 30, 2009 at 10:11:58PM +0800, Theodore Ts'o wrote:
> On Wed, Sep 30, 2009 at 01:26:57PM +0800, Wu Fengguang wrote:
> > It's good to increase MAX_WRITEBACK_PAGES, however I'm afraid
> > max_contig_writeback_mb may be a burden in future: either it is not
> > necessary, or a per-bdi counterpart must be introduced for all
> > filesystems.
> 
> The per-filesystem tunable was just a short-term hack; the reason why
> I did it that way was it was clear that a global tunable wouldn't fly,
> and rightly so --- what might be suitable for a slow USB stick might
> be very different than a super-fast RAID array, and someone might very
> well have both on the same system.

Ah Yes.

> > And it's preferred to automatically handle slow devices well with the
> > increased chunk size, instead of adding another parameter.
> 
> Agreed; long-term what we probably need is something which is
> automatically tunable.  My thinking was that we should tune the
> initial nr_to_write parameter based on how many blocks could be
> written in some time interval, which is tunable.  So if we decide that
> 1 second is a suitable time period to be writing out one inode's dirty
> pages, then for a fast server-class SATA disk, we might want to set
> nr_to_write to be around 128MB worth of pages.  For a laptop SATA
> disk, it might be around 64MB, and for a really slow USB stick, it
> might be more like 16MB.  For a super-fast enterprise RAID array, 128MB
> might be too small!

Yes, 128MB may be too small :)

> If we get timing and/or congestion information from the block layer,
> it wouldn't be hard to figure out the optimal number of pages that
> should be sent down to the filesystem, and to tune this automatically.

Sure, it's possible.

> > I scratched up a patch to demo the ideas collected in recent discussions.
> > Can you check if it serves your needs? Thanks.
> 
> Sure, I'll definitely play with it, thanks.

Thanks :)

> > The wbc.timeout (when used per-file) is mainly a safeguard against slow
> > devices, which may take too long to sync 128MB of data.
> 
> Maybe I'm missing something, but I don't think the wbc.timeout
> approach is sufficient.  Consider the scenario of someone who is
> ripping a DVD disc to an 8 gig USB stick.  The USB stick will be very
> slow, but since the file is contiguous the filesystem will very
> happily try to push it out there 128MB at a time, and wbc.timeout
> value isn't really going to help since a single call to writepages
> could easily cause 128MB worth of data to be streamed out to the USB
> stick.

Yes and no. Yes if the queue was empty for the slow device. No if the
queue was full, in which case IO submission speed = IO completion speed
for previously queued requests.

So wbc.timeout will be accurate for IO submission time, and mostly
accurate for IO completion time. The transient queue fill up phase
shall not be a big problem?

> This is why the MAX_WRITEBACK_PAGES really needs to be tuned on a
> per-bdi basis; either manually, via a sysfs tunable, or automatically,
> by auto-tuning based on how fast the storage device is or by some kind
> of congestion-based approach.  This is certainly the best long-term
> solution; my concern was that it might take a long-time for us to get
> the auto-tunable just right, so in the meantime I added a
> per-mounted-filesystem tunable and put the hack in the filesystem
> layer.  I would like nothing better than to rip it out, once we have a
> long-term solution.

Thanks,
Fengguang


* Re: regression in page writeback
  2009-10-01 15:14                                                     ` Wu Fengguang
@ 2009-10-01 21:54                                                       ` Theodore Tso
  2009-10-02  2:55                                                         ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Theodore Tso @ 2009-10-01 21:54 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Christoph Hellwig, Dave Chinner, Chris Mason, Andrew Morton,
	Peter Zijlstra, Li, Shaohua, linux-kernel, richard, jens.axboe

On Thu, Oct 01, 2009 at 11:14:29PM +0800, Wu Fengguang wrote:
> Yes and no. Yes if the queue was empty for the slow device. No if the
> queue was full, in which case IO submission speed = IO completion speed
> for previously queued requests.
> 
> So wbc.timeout will be accurate for IO submission time, and mostly
> accurate for IO completion time. The transient queue fill up phase
> shall not be a big problem?

So the problem is if we have a mixed workload where there are lots of
large contiguous writes, and lots of small writes which are fsync'ed()
--- for example, consider the workload of copying lots of big DVD
images combined with the infamous firefox-we-must-write-out-300-megs-of-
small-random-writes-and-then-fsync-them-on-every-single-url-click-so-
that-every-last-visited-page-is-preserved-for-history-bar-autocompletion
workload.    The big writes, if they are contiguous, could take 1-2 seconds
on a very slow, ancient laptop disk, and that will hold up any kind of 
small synchronous activities --- such as either a disk read or a firefox-
triggered fsync().

That's why the IO completion time matters; it causes latency problems
for slow disks and mixed large and small write workloads.  It was the
original reason for the 1024 MAX_WRITEBACK_PAGES, which might have
made sense 10 years ago back when disks were a lot slower.  One of the
advantages of an auto-tuning algorithm, beyond auto-adjusting for
different types of hardware, is that we don't need to worry about
arbitrary and magic caps becoming obsolete due to technological
changes.  :-)

							- Ted



* Re: regression in page writeback
  2009-09-30  5:32                                                   ` Wu Fengguang
@ 2009-10-01 22:17                                                     ` Jan Kara
  2009-10-02  3:27                                                       ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Jan Kara @ 2009-10-01 22:17 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe, Jan Kara

On Wed 30-09-09 13:32:23, Wu Fengguang wrote:
> writeback: bump up writeback chunk size to 128MB
> 
> Adjust the writeback call stack to support larger writeback chunk size.
> 
> - make wbc.nr_to_write a per-file parameter
> - init wbc.nr_to_write with MAX_WRITEBACK_PAGES=128MB
>   (proposed by Ted)
> - add wbc.nr_segments to limit seeks inside sparsely dirtied file
>   (proposed by Chris)
> - add wbc.timeout which will be used to control IO submission time
>   either per-file or globally.
>   
> The wbc.nr_segments is now determined purely by logical page index
> distance: if two pages are 1MB apart, it makes a new segment.
> 
> Filesystems could do this better with real extent knowledge.
> One possible scheme is to record the previous page index in
> wbc.writeback_index, and let ->writepage compare if the current and
> previous pages lie in the same extent, and decrease wbc.nr_segments
> accordingly. Care should be taken to avoid double decreases in writepage
> and write_cache_pages.
> 
> The wbc.timeout (when used per-file) is mainly a safeguard against slow
> devices, which may take too long to sync 128MB of data.
> 
> The wbc.timeout (when used globally) could be useful when we decide to
> do two sync scans on dirty pages and dirty metadata. XFS could say:
> please return to sync dirty metadata after 10s. Would need another
> b_io_metadata queue, but that's possible.
> 
> This work depends on the balance_dirty_pages() wait queue patch.
  I don't know, I think it gets too complicated... I'd either use the
segments idea or the timeout idea but not both (unless you can find real
world tests in which both help). Also, once we assure fairness via
timeout, maybe nr_to_write isn't needed anymore? WB_SYNC_ALL writeback
doesn't use nr_to_write. WB_SYNC_NONE writeback either sets it to some
large value (like LONG_MAX) or to the number of dirty pages (to effectively write
back as much as possible) or to MAX_WRITEBACK_PAGES to assure fairness
in kupdate style writeback. There are a few exceptions in btrfs but I
believe nr_to_write isn't really needed there either...

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


* Re: regression in page writeback
  2009-10-01 21:54                                                       ` Theodore Tso
@ 2009-10-02  2:55                                                         ` Wu Fengguang
  2009-10-02  8:19                                                           ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-10-02  2:55 UTC (permalink / raw)
  To: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe

On Fri, Oct 02, 2009 at 05:54:38AM +0800, Theodore Ts'o wrote:
> On Thu, Oct 01, 2009 at 11:14:29PM +0800, Wu Fengguang wrote:
> > Yes and no. Yes if the queue was empty for the slow device. No if the
> > queue was full, in which case IO submission speed = IO completion speed
> > for previously queued requests.
> > 
> > So wbc.timeout will be accurate for IO submission time, and mostly
> > accurate for IO completion time. The transient queue fill up phase
> > shall not be a big problem?
> 
> So the problem is if we have a mixed workload where there are lots of
> large contiguous writes, and lots of small writes which are fsync'ed()
> --- for example, consider the workload of copying lots of big DVD
> images combined with the infamous firefox-we-must-write-out-300-megs-of-
> small-random-writes-and-then-fsync-them-on-every-single-url-click-so-
> that-every-last-visited-page-is-preserved-for-history-bar-autocompletion
> workload.    The big writes, if they are contiguous, could take 1-2 seconds
> on a very slow, ancient laptop disk, and that will hold up any kind of 
> small synchronous activities --- such as either a disk read or a firefox-
> triggered fsync().

Yes, that's a problem. The SYNC/ASYNC elevator queues can help here.

In IO submission paths, fsync writes will not be blocked by non-sync
writes because __filemap_fdatawrite_range() starts foreground sync
for the inode. Without the congestion backoff, it will now have to
compete for the request queue with bdi-flush. Should not be a big problem though.
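
(For reference, the fsync data path enters writeback directly in the
caller's context; e.g. in mm/filemap.c:

	int filemap_fdatawrite(struct address_space *mapping)
	{
		return __filemap_fdatawrite_range(mapping, 0, LLONG_MAX,
						  WB_SYNC_ALL);
	}

so it does not have to wait for the flusher thread to reach the inode.)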

There's still the problem of IO submission time != IO completion time,
due to fluctuations of randomness and more. However that's a general
and unavoidable problem.  Both the wbc.timeout scheme and the
"wbc.nr_to_write based on estimated throughput" scheme are based on
_past_ requests and it's simply impossible to have a 100% accurate
scheme. In principle, wbc.timeout will only be inferior at IO startup
time. In the steady state of 100% full queue, it is actually estimating
the IO throughput implicitly :)

> That's why the IO completion time matters; it causes latency problems
> for slow disks and mixed large and small write workloads.  It was the
> original reason for the 1024 MAX_WRITEBACK_PAGES, which might have
> made sense 10 years ago back when disks were a lot slower.  One of the
> advantages of an auto-tuning algorithm, beyond auto-adjusting for
> different types of hardware, is that we don't need to worry about
> > arbitrary and magic caps becoming obsolete due to technological
> changes.  :-)

Yeah, I'm a big fan of auto-tuning :)

Thanks,
Fengguang


* Re: regression in page writeback
  2009-10-01 22:17                                                     ` Jan Kara
@ 2009-10-02  3:27                                                       ` Wu Fengguang
  2009-10-06 12:55                                                         ` Jan Kara
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-10-02  3:27 UTC (permalink / raw)
  To: Jan Kara
  Cc: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe

On Fri, Oct 02, 2009 at 06:17:39AM +0800, Jan Kara wrote:
> On Wed 30-09-09 13:32:23, Wu Fengguang wrote:
> > writeback: bump up writeback chunk size to 128MB
> > 
> > Adjust the writeback call stack to support larger writeback chunk size.
> > 
> > - make wbc.nr_to_write a per-file parameter
> > - init wbc.nr_to_write with MAX_WRITEBACK_PAGES=128MB
> >   (proposed by Ted)
> > - add wbc.nr_segments to limit seeks inside sparsely dirtied file
> >   (proposed by Chris)
> > - add wbc.timeout which will be used to control IO submission time
> >   either per-file or globally.
> >   
> > The wbc.nr_segments is now determined purely by logical page index
> > distance: if two pages are 1MB apart, it makes a new segment.
> > 
> > Filesystems could do this better with real extent knowledge.
> > One possible scheme is to record the previous page index in
> > wbc.writeback_index, and let ->writepage compare if the current and
> > previous pages lie in the same extent, and decrease wbc.nr_segments
> > accordingly. Care should be taken to avoid double decreases in writepage
> > and write_cache_pages.
> > 
> > The wbc.timeout (when used per-file) is mainly a safeguard against slow
> > devices, which may take too long to sync 128MB of data.
> > 
> > The wbc.timeout (when used globally) could be useful when we decide to
> > do two sync scans on dirty pages and dirty metadata. XFS could say:
> > please return to sync dirty metadata after 10s. Would need another
> > b_io_metadata queue, but that's possible.
> > 
> > This work depends on the balance_dirty_pages() wait queue patch.
>   I don't know, I think it gets too complicated... I'd either use the
> segments idea or the timeout idea but not both (unless you can find real
> world tests in which both help).

Maybe complicated, but nr_segments and timeout each has its own target
application.  nr_segments serves two major purposes:
- fairness between two large files, one is continuously dirtied,
  another is sparsely dirtied. Given the same amount of dirty pages,
  it could take vastly different amounts of time to sync them to the _same_
  device. The nr_segments check helps to favor continuous data.
- avoid seeks/fragmentation. To give each file a fair chance of
  writeback, we have to abort a file when some nr_to_write or timeout
  is reached. However, neither is a good abort condition.
  The best is for the filesystem to abort earlier at seek boundaries,
  and treat nr_to_write/timeout as large enough bottom lines.
timeout is mainly a safeguard in case nr_to_write is too large for
slow devices. It is not necessary if nr_to_write is auto-computed,
however timeout in itself serves as a simple throughput adapting
scheme.
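
As a worked example of the segment accounting: with 4KB pages
(PAGE_CACHE_SHIFT = 12), WB_SEGMENT_DIST = 1024 >> 2 = 256 pages,
i.e. 1MB. A file with dirty ranges at page indexes 0-100 and
1000-1100 costs two segments, while a file dirtied at every 300th
page consumes one segment per dirty page and gets aborted after
nr_segments pages.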

> Also, once we assure fairness via
> timeout, maybe nr_to_write isn't needed anymore? WB_SYNC_ALL writeback
> doesn't use nr_to_write. WB_SYNC_NONE writeback either sets it to some
> large value (like LONG_MAX) or to the number of dirty pages (to effectively write
> back as much as possible) or to MAX_WRITEBACK_PAGES to assure fairness
> in kupdate style writeback. There are a few exceptions in btrfs but I
> believe nr_to_write isn't really needed there either...

Totally agreed. I'd rather remove the top-level nr_page/nr_to_write
parameters. They are simply redundant. The meaningful ones are the
background threshold, dirty expiration or global timeout, depending on
the mission of the writeback work.

Thanks,
Fengguang


* Re: regression in page writeback
  2009-10-02  2:55                                                         ` Wu Fengguang
@ 2009-10-02  8:19                                                           ` Wu Fengguang
  2009-10-02 17:26                                                             ` Theodore Tso
  0 siblings, 1 reply; 79+ messages in thread
From: Wu Fengguang @ 2009-10-02  8:19 UTC (permalink / raw)
  To: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe

On Fri, Oct 02, 2009 at 10:55:02AM +0800, Wu Fengguang wrote:
> On Fri, Oct 02, 2009 at 05:54:38AM +0800, Theodore Ts'o wrote:
> > On Thu, Oct 01, 2009 at 11:14:29PM +0800, Wu Fengguang wrote:
> > > Yes and no. Yes if the queue was empty for the slow device. No if the
> > > queue was full, in which case IO submission speed = IO completion speed
> > > for previously queued requests.
> > > 
> > > So wbc.timeout will be accurate for IO submission time, and mostly
> > > accurate for IO completion time. The transient queue fill up phase
> > > shall not be a big problem?
> > 
> > So the problem is if we have a mixed workload where there are lots of
> > large contiguous writes, and lots of small writes which are fsync'ed()
> > --- for example, consider the workload of copying lots of big DVD
> > images combined with the infamous firefox-we-must-write-out-300-megs-of-
> > small-random-writes-and-then-fsync-them-on-every-single-url-click-so-
> > that-every-last-visited-page-is-preserved-for-history-bar-autocompletion
> > workload.    The big writes, if they are contiguous, could take 1-2 seconds
> > on a very slow, ancient laptop disk, and that will hold up any kind of 
> > small synchronous activities --- such as either a disk read or a firefox-
> > triggered fsync().
> 
> Yes, that's a problem. The SYNC/ASYNC elevator queues can help here.
> 
> In IO submission paths, fsync writes will not be blocked by non-sync
> writes because __filemap_fdatawrite_range() starts foreground sync
> for the inode.

> Without the congestion backoff, it will now have to
> compete for the request queue with bdi-flush. Should not be a big problem though.

I'd like to correct this: get_request_wait() uses one queue for SYNC
rw and another for ASYNC rw. So fsync won't compete for the request
queue with background flush. That's perfect: when an fsync comes, CFQ
will give it a green channel, and hold back background flushes somewhat.

> There's still the problem of IO submission time != IO completion time,
> due to fluctuations of randomness and more. However that's a general
> and unavoidable problem.  Both the wbc.timeout scheme and the
> "wbc.nr_to_write based on estimated throughput" scheme are based on
> _past_ requests and it's simply impossible to have a 100% accurate
> scheme. In principle, wbc.timeout will only be inferior at IO startup
> time. In the steady state of 100% full queue, it is actually estimating
> the IO throughput implicitly :)

Another difference between wbc.timeout and adaptive wbc.nr_to_write
is that when many _read_ requests or fsyncs come, these SYNC rw
requests will significantly lower the ASYNC writeback throughput, if
it's not completely stalled. So with timeout, the inode will be
aborted with few pages written; with nr_to_write, the inode will be
written a good number of pages, at the cost of taking up a long time.

IMHO the nr_to_write behavior seems more efficient. What do you think?

Thanks,
Fengguang

> > That's why the IO completion time matters; it causes latency problems
> > for slow disks and mixed large and small write workloads.  It was the
> > original reason for the 1024 MAX_WRITEBACK_PAGES, which might have
> > made sense 10 years ago back when disks were a lot slower.  One of the
> > advantages of an auto-tuning algorithm, beyond auto-adjusting for
> > different types of hardware, is that we don't need to worry about
> > arbitrary and magic caps becoming obsolete due to technological
> > changes.  :-)
> 
> Yeah, I'm a big fan of auto-tuning :)
> 
> Thanks,
> Fengguang


* Re: regression in page writeback
  2009-10-02  8:19                                                           ` Wu Fengguang
@ 2009-10-02 17:26                                                             ` Theodore Tso
  2009-10-03  6:10                                                               ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Theodore Tso @ 2009-10-02 17:26 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Christoph Hellwig, Dave Chinner, Chris Mason, Andrew Morton,
	Peter Zijlstra, Li, Shaohua, linux-kernel, richard, jens.axboe

On Fri, Oct 02, 2009 at 04:19:53PM +0800, Wu Fengguang wrote:
> > > The big writes, if they are contiguous, could take 1-2 seconds
> > > on a very slow, ancient laptop disk, and that will hold up any kind of 
> > > small synchronous activities --- such as either a disk read or a firefox-
> > > triggered fsync().
> > 
> > Yes, that's a problem. The SYNC/ASYNC elevator queues can help here.

The SYNC/ASYNC queues will partially help, up to the largest I/O that
can be issued as a single chunk, times the queue depth for those disks
that support NCQ.

> > There's still the problem of IO submission time != IO completion time,
> > due to fluctuations of randomness and more. However that's a general
> > and unavoidable problem.  Both the wbc.timeout scheme and the
> > "wbc.nr_to_write based on estimated throughput" scheme are based on
> > _past_ requests and it's simply impossible to have a 100% accurate
> > scheme. In principle, wbc.timeout will only be inferior at IO startup
> > time. In the steady state of 100% full queue, it is actually estimating
> > the IO throughput implicitly :)
> 
> Another difference between wbc.timeout and adaptive wbc.nr_to_write
> is that when many _read_ requests or fsyncs come, these SYNC rw
> requests will significantly lower the ASYNC writeback throughput, if
> it's not completely stalled. So with timeout, the inode will be
> aborted with few pages written; with nr_to_write, the inode will be
> written a good number of pages, at the cost of taking up a long time.
> 
> IMHO the nr_to_write behavior seems more efficient. What do you think?

I agree, adaptively changing nr_to_write seems like the right thing to
do.  For bonus points, we could also monitor how often synchronous I/O
operations are happening, and allow nr_to_write to go up by some amount if
there aren't many synchronous operations happening at the moment.  So
that might be another opportunity to do auto-tuning, although this
might be a heuristic that might need to be configurable for certain
specialized workloads.  For many other workloads, it should be
possible to detect regular patterns of reads and/or synchronous writes,
and if so, use a lower nr_to_write versus if there aren't many
synchronous I/O operations happening on that particular block device.
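
A sketch of the shape such a heuristic could take (bdi_max_writeback(),
recent_sync_ios and SYNC_IO_BUSY_THRESH are all made-up names here,
purely to illustrate):

	/* scale the writeback chunk down when the device is seeing
	 * synchronous traffic, so reads and fsyncs get in quickly */
	long nr = bdi_max_writeback(bdi);	/* bandwidth-based estimate */

	if (bdi->recent_sync_ios > SYNC_IO_BUSY_THRESH)
		nr /= 2;
	wbc.nr_to_write = nr;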

	    		   	     	     - Ted


* Re: regression in page writeback
  2009-10-02 17:26                                                             ` Theodore Tso
@ 2009-10-03  6:10                                                               ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-10-03  6:10 UTC (permalink / raw)
  To: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe

On Sat, Oct 03, 2009 at 01:26:20AM +0800, Theodore Ts'o wrote:
> On Fri, Oct 02, 2009 at 04:19:53PM +0800, Wu Fengguang wrote:
> > > > The big writes, if they are contiguous, could take 1-2 seconds
> > > > on a very slow, ancient laptop disk, and that will hold up any kind of 
> > > > small synchronous activities --- such as either a disk read or a firefox-
> > > > triggered fsync().
> > > 
> > > Yes, that's a problem. The SYNC/ASYNC elevator queues can help here.
> 
> The SYNC/ASYNC queues will partially help, up to the largest I/O that
> can be issued as a single chunk, times the queue depth for those disks
> that support NCQ.
> 
> > > There's still the problem of IO submission time != IO completion time,
> > > due to fluctuations of randomness and more. However that's a general
> > > and unavoidable problem.  Both the wbc.timeout scheme and the
> > > "wbc.nr_to_write based on estimated throughput" scheme are based on
> > > _past_ requests and it's simply impossible to have a 100% accurate
> > > scheme. In principle, wbc.timeout will only be inferior at IO startup
> > > time. In the steady state of 100% full queue, it is actually estimating
> > > the IO throughput implicitly :)
> > 
> > Another difference between wbc.timeout and adaptive wbc.nr_to_write
> > is that when many _read_ requests or fsyncs come, these SYNC rw
> > requests will significantly lower the ASYNC writeback throughput, if
> > it's not completely stalled. So with timeout, the inode will be
> > aborted with few pages written; with nr_to_write, the inode will be
> > written a good number of pages, at the cost of taking up a long time.
> > 
> > IMHO the nr_to_write behavior seems more efficient. What do you think?
> 
> I agree, adaptively changing nr_to_write seems like the right thing to

I'd like to estimate the writeback throughput in bdi_writeback_wakeup(),
where the queue is not starved and the estimation would reflect the max
device capability (unless there are busy reads, in which case we need
a lower nr_to_write anyway).
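
A minimal sketch of that estimation (pages_written and the bw_* fields
are hypothetical per-bdi bookkeeping, not in the current tree):

	static void bdi_update_write_bandwidth(struct backing_dev_info *bdi)
	{
		unsigned long elapsed = jiffies - bdi->bw_time_stamp;
		unsigned long pages = bdi->pages_written - bdi->bw_pages_stamp;

		if (elapsed < HZ / 10)	/* too little data to be useful */
			return;

		bdi->avg_write_bandwidth = pages * HZ / elapsed; /* pages/s */
		bdi->bw_pages_stamp = bdi->pages_written;
		bdi->bw_time_stamp = jiffies;
	}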

> do.  For bonus points, we could also monitor how often synchronous I/O
> operations are happening, and allow nr_to_write to go up by some amount if
> there aren't many synchronous operations happening at the moment.  So
> that might be another opportunity to do auto-tuning, although this
> might be a heuristic that might need to be configurable for certain
> specialized workloads.  For many other workloads, it should be
> possible to detect regular patterns of reads and/or synchronous writes,
> and if so, use a lower nr_to_write versus if there aren't many
> synchronous I/O operations happening on that particular block device.

It's not easy to get an up-to-date picture of SYNC read/write busyness.
However, it is possible to "feel" it through the progress of ASYNC
writes.

- set up a per-file timeout = 3*HZ
- check this in write_cache_pages(), using the nr_to_write and
  start_time values saved at entry:

        if (wbc->nr_to_write <= nr_to_write / 2 &&
            time_after(jiffies, start_time + wbc->timeout))
                break;

In this way we back off to nr_to_write/2 if the writeback is blocked
by some busy READs.

I'd choose to implement this advanced feature some time later :)

Thanks,
Fengguang


* Re: regression in page writeback
  2009-10-02  3:27                                                       ` Wu Fengguang
@ 2009-10-06 12:55                                                         ` Jan Kara
  2009-10-06 13:18                                                           ` Wu Fengguang
  0 siblings, 1 reply; 79+ messages in thread
From: Jan Kara @ 2009-10-06 12:55 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, Theodore Tso, Christoph Hellwig, Dave Chinner,
	Chris Mason, Andrew Morton, Peter Zijlstra, Li, Shaohua,
	linux-kernel, richard, jens.axboe

On Fri 02-10-09 11:27:14, Wu Fengguang wrote:
> On Fri, Oct 02, 2009 at 06:17:39AM +0800, Jan Kara wrote:
> > On Wed 30-09-09 13:32:23, Wu Fengguang wrote:
> > > writeback: bump up writeback chunk size to 128MB
> > > 
> > > Adjust the writeback call stack to support larger writeback chunk size.
> > > 
> > > - make wbc.nr_to_write a per-file parameter
> > > - init wbc.nr_to_write with MAX_WRITEBACK_PAGES=128MB
> > >   (proposed by Ted)
> > > - add wbc.nr_segments to limit seeks inside sparsely dirtied file
> > >   (proposed by Chris)
> > > - add wbc.timeout which will be used to control IO submission time
> > >   either per-file or globally.
> > >   
> > > The wbc.nr_segments is now determined purely by logical page index
> > > distance: if two pages are 1MB apart, it makes a new segment.
> > > 
> > > Filesystems could do this better with real extent knowledge.
> > > One possible scheme is to record the previous page index in
> > > wbc.writeback_index, and let ->writepage compare if the current and
> > > previous pages lie in the same extent, and decrease wbc.nr_segments
> > > accordingly. Care should be taken to avoid double decreases in writepage
> > > and write_cache_pages.
> > > 
> > > The wbc.timeout (when used per-file) is mainly a safeguard against slow
> > > devices, which may take too long to sync 128MB of data.
> > > 
> > > The wbc.timeout (when used globally) could be useful when we decide to
> > > do two sync scans on dirty pages and dirty metadata. XFS could say:
> > > please return to sync dirty metadata after 10s. Would need another
> > > b_io_metadata queue, but that's possible.
> > > 
> > > This work depends on the balance_dirty_pages() wait queue patch.
> >   I don't know, I think it gets too complicated... I'd either use the
> > segments idea or the timeout idea but not both (unless you can find real
> > world tests in which both help).
  I'm sorry for the delayed reply but I had to work on something else.

> Maybe complicated, but nr_segments and timeout each has its own target
> application.  nr_segments serves two major purposes:
> - fairness between two large files, one is continuously dirtied,
>   another is sparsely dirtied. Given the same amount of dirty pages,
>   it could take vastly different amounts of time to sync them to the _same_
>   device. The nr_segments check helps to favor continuous data.
> - avoid seeks/fragmentation. To give each file a fair chance of
>   writeback, we have to abort a file when some nr_to_write or timeout
>   is reached. However, neither is a good abort condition.
>   The best is for the filesystem to abort earlier at seek boundaries,
>   and treat nr_to_write/timeout as large enough bottom lines.
> timeout is mainly a safeguard in case nr_to_write is too large for
> slow devices. It is not necessary if nr_to_write is auto-computed,
> however timeout in itself serves as a simple throughput adapting
> scheme.
  I understand why you have introduced both the segments and timeout values
and I completely agree with your reasons to introduce them. I just think
that when the system gets too complex (there will be several independent
methods of determining when writeback should be terminated, and even
though each method is simple on its own, their interactions needn't be
simple...) it will be hard to debug all the corner cases - even more
because they will manifest "just" as slow or unfair writeback. So I'd
prefer a single metric to determine when to stop writeback of an inode
even though it might be a bit more complicated.
  For example, terminating on writeout does not really give a file a fair
chance of writeback because it might have been blocked just because we were
writing some heavily fragmented file just before. And your nr_segments
check is just a rough guess of whether a writeback is going to be
fragmented or not.
  So I'd rather implement in mpage_ functions a proper detection of how
fragmented the writeback is and give each inode a limit on the number of
fragments which mpage_ functions would obey. We could even use a queue's
NONROT flag (set for solid state disks) to detect whether we should expect
higher or lower seek times.
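
Roughly (a sketch; how to get from the bdi to the request_queue is
hand-waved here as a hypothetical bdi_to_queue() helper):

	struct request_queue *q = bdi_to_queue(bdi);

	if (q && blk_queue_nonrot(q))
		wbc.nr_segments = LONG_MAX;	/* seeks are cheap on SSDs */
	else
		wbc.nr_segments = 8;		/* made-up rotational default */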
								Honza


* Re: regression in page writeback
  2009-10-06 12:55                                                         ` Jan Kara
@ 2009-10-06 13:18                                                           ` Wu Fengguang
  0 siblings, 0 replies; 79+ messages in thread
From: Wu Fengguang @ 2009-10-06 13:18 UTC (permalink / raw)
  To: Jan Kara
  Cc: Theodore Tso, Christoph Hellwig, Dave Chinner, Chris Mason,
	Andrew Morton, Peter Zijlstra, Li, Shaohua, linux-kernel,
	richard, jens.axboe

On Tue, Oct 06, 2009 at 08:55:19PM +0800, Jan Kara wrote:
> On Fri 02-10-09 11:27:14, Wu Fengguang wrote:
> > On Fri, Oct 02, 2009 at 06:17:39AM +0800, Jan Kara wrote:
> > > On Wed 30-09-09 13:32:23, Wu Fengguang wrote:
> > > > writeback: bump up writeback chunk size to 128MB
> > > > 
> > > > Adjust the writeback call stack to support larger writeback chunk size.
> > > > 
> > > > - make wbc.nr_to_write a per-file parameter
> > > > - init wbc.nr_to_write with MAX_WRITEBACK_PAGES=128MB
> > > >   (proposed by Ted)
> > > > - add wbc.nr_segments to limit seeks inside sparsely dirtied file
> > > >   (proposed by Chris)
> > > > - add wbc.timeout which will be used to control IO submission time
> > > >   either per-file or globally.
> > > >   
> > > > The wbc.nr_segments is now determined purely by logical page index
> > > > distance: if two pages are 1MB apart, it makes a new segment.
> > > > 
> > > > Filesystems could do this better with real extent knowledge.
> > > > One possible scheme is to record the previous page index in
> > > > wbc.writeback_index, and let ->writepage compare if the current and
> > > > previous pages lie in the same extent, and decrease wbc.nr_segments
> > > > accordingly. Care should be taken to avoid double decreases in writepage
> > > > and write_cache_pages.
> > > > 
> > > > The wbc.timeout (when used per-file) is mainly a safeguard against slow
> > > > devices, which may take too long to sync 128MB of data.
> > > > 
> > > > The wbc.timeout (when used globally) could be useful when we decide to
> > > > do two sync scans on dirty pages and dirty metadata. XFS could say:
> > > > please return to sync dirty metadata after 10s. Would need another
> > > > b_io_metadata queue, but that's possible.
> > > > 
> > > > This work depends on the balance_dirty_pages() wait queue patch.
> > >   I don't know, I think it gets too complicated... I'd either use the
> > > segments idea or the timeout idea but not both (unless you can find real
> > > world tests in which both help).
>   I'm sorry for a delayed reply but I had to work on something else.
> 
> > Maybe complicated, but nr_segments and timeout each has its own target
> > application.  nr_segments serves two major purposes:
> > - fairness between two large files, one is continuously dirtied,
> >   another is sparsely dirtied. Given the same amount of dirty pages,
> >   it could take vastly different amounts of time to sync them to the _same_
> >   device. The nr_segments check helps to favor continuous data.
> > - avoid seeks/fragmentation. To give each file a fair chance of
> >   writeback, we have to abort a file when some nr_to_write or timeout
> >   is reached. However, neither is a good abort condition.
> >   The best is for the filesystem to abort earlier at seek boundaries,
> >   and treat nr_to_write/timeout as large enough bottom lines.
> > timeout is mainly a safeguard in case nr_to_write is too large for
> > slow devices. It is not necessary if nr_to_write is auto-computed,
> > however timeout in itself serves as a simple throughput adapting
> > scheme.
>   I understand why you have introduced both the segments and timeout values
> and I completely agree with your reasons to introduce them. I just think
> that when the system gets too complex (there will be several independent
> methods of determining when writeback should be terminated, and even
> though each method is simple on its own, their interactions needn't be
> simple...) it will be hard to debug all the corner cases - even more
> because they will manifest "just" as slow or unfair writeback. So I'd

I definitely agree on the complications. There are some known issues
as well as possibly some corner cases to be discovered. One problem I
noticed now is, what if all the files are sparsely dirtied? Then
a small nr_segments can only hurt.  Another problem is that the block
device file tends to have sparsely dirtied pages (with metadata on
them).  Not sure how to detect/handle such conditions...

> prefer a single metric to determine when to stop writeback of an inode
> even though it might be a bit more complicated.
>   For example, terminating on writeout does not really give a file a fair
> chance of writeback because it might have been blocked just because we were
> writing some heavily fragmented file just before. And your nr_segments

You mean timeout? I've dropped that idea in favor of an nr_to_write
adaptive to the bdi write speed :)

> check is just a rough guess of whether a writeback is going to be
> fragmented or not.

It could be made accurate if btrfs decreases it in its own writepages,
based on the extent info. Should also be possible for ext4.
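
For instance (a sketch; fs_same_extent() stands in for the
filesystem's real extent lookup, and prev_index is a local the
filesystem would maintain, both made up here):

	/* in the filesystem's ->writepages loop */
	if (!fs_same_extent(inode, prev_index, page->index) &&
	    --wbc->nr_segments <= 0)
		break;	/* seek budget used up */
	prev_index = page->index;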

>   So I'd rather implement in mpage_ functions a proper detection of how
> fragmented the writeback is and give each inode a limit on the number of
> fragments which mpage_ functions would obey. We could even use a queue's
> NONROT flag (set for solid state disks) to detect whether we should expect
> higher or lower seek times.

Yes, mpage_* can also utilize nr_segments.

Anyway, nr_segments is not perfect; I'll post a patch and let fs
developers decide whether it is convenient/useful :)

Thanks,
Fengguang


end of thread

Thread overview: 79+ messages
2009-09-22  5:49 regression in page writeback Shaohua Li
2009-09-22  6:40 ` Peter Zijlstra
2009-09-22  8:05   ` Wu Fengguang
2009-09-22  8:09     ` Peter Zijlstra
2009-09-22  8:24       ` Wu Fengguang
2009-09-22  8:32         ` Peter Zijlstra
2009-09-22  8:51           ` Wu Fengguang
2009-09-22  8:52           ` Richard Kennedy
2009-09-22  9:05             ` Wu Fengguang
2009-09-22 11:41               ` Shaohua Li
2009-09-22 15:52           ` Chris Mason
2009-09-23  0:22             ` Wu Fengguang
2009-09-23  0:54               ` Andrew Morton
2009-09-23  1:17                 ` Wu Fengguang
2009-09-23  1:27                   ` Wu Fengguang
2009-09-23  1:28                   ` Andrew Morton
2009-09-23  1:32                     ` Wu Fengguang
2009-09-23  1:47                       ` Andrew Morton
2009-09-23  2:01                         ` Wu Fengguang
2009-09-23  2:09                           ` Andrew Morton
2009-09-23  3:07                             ` Wu Fengguang
2009-09-23  1:45                     ` Wu Fengguang
2009-09-23  1:59                       ` Andrew Morton
2009-09-23  2:26                         ` Wu Fengguang
2009-09-23  2:36                           ` Andrew Morton
2009-09-23  2:49                             ` Wu Fengguang
2009-09-23  2:56                               ` Andrew Morton
2009-09-23  3:11                                 ` Wu Fengguang
2009-09-23  3:10                               ` Shaohua Li
2009-09-23  3:14                                 ` Wu Fengguang
2009-09-23  3:25                                   ` Wu Fengguang
2009-09-23 14:00                             ` Chris Mason
2009-09-24  3:15                               ` Wu Fengguang
2009-09-24 12:10                                 ` Chris Mason
2009-09-25  3:26                                   ` Wu Fengguang
2009-09-25  0:11                                 ` Dave Chinner
2009-09-25  0:38                                   ` Chris Mason
2009-09-25  5:04                                     ` Dave Chinner
2009-09-25  6:45                                       ` Wu Fengguang
2009-09-28  1:07                                         ` Dave Chinner
2009-09-28  7:15                                           ` Wu Fengguang
2009-09-28 13:08                                             ` Christoph Hellwig
2009-09-28 14:07                                               ` Theodore Tso
2009-09-30  5:26                                                 ` Wu Fengguang
2009-09-30  5:32                                                   ` Wu Fengguang
2009-10-01 22:17                                                     ` Jan Kara
2009-10-02  3:27                                                       ` Wu Fengguang
2009-10-06 12:55                                                         ` Jan Kara
2009-10-06 13:18                                                           ` Wu Fengguang
2009-09-30 14:11                                                   ` Theodore Tso
2009-10-01 15:14                                                     ` Wu Fengguang
2009-10-01 21:54                                                       ` Theodore Tso
2009-10-02  2:55                                                         ` Wu Fengguang
2009-10-02  8:19                                                           ` Wu Fengguang
2009-10-02 17:26                                                             ` Theodore Tso
2009-10-03  6:10                                                               ` Wu Fengguang
2009-09-29  2:32                                               ` Wu Fengguang
2009-09-29 14:00                                                 ` Chris Mason
2009-09-29 14:21                                                 ` Christoph Hellwig
2009-09-29  0:15                                             ` Wu Fengguang
2009-09-28 14:25                                           ` Chris Mason
2009-09-29 23:39                                             ` Dave Chinner
2009-09-30  1:30                                               ` Wu Fengguang
2009-09-25 12:06                                       ` Chris Mason
2009-09-25  3:19                                   ` Wu Fengguang
2009-09-26  1:47                                     ` Dave Chinner
2009-09-26  3:02                                       ` Wu Fengguang
2009-09-26  3:02                                         ` Wu Fengguang
2009-09-23  9:19                         ` Richard Kennedy
2009-09-23  9:23                           ` Peter Zijlstra
2009-09-23  9:37                             ` Wu Fengguang
2009-09-23 10:30                               ` Wu Fengguang
2009-09-23  6:41             ` Shaohua Li
2009-09-22 10:49 ` Wu Fengguang
2009-09-22 11:50   ` Shaohua Li
2009-09-22 13:39     ` Wu Fengguang
2009-09-23  1:52       ` Shaohua Li
2009-09-23  4:00         ` Wu Fengguang
2009-09-25  6:14           ` Wu Fengguang
