* memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
@ 2012-02-08  7:55 Greg Thelen
  2012-02-08  9:31 ` Wu Fengguang
  0 siblings, 1 reply; 33+ messages in thread
From: Greg Thelen @ 2012-02-08  7:55 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, bsingharora, Hugh Dickins, Michal Hocko, linux-mm,
	Mel Gorman, Ying Han, hannes, lsf-pc, KAMEZAWA Hiroyuki

On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> If moving dirty pages out of the memcg to the 20% global dirty pages
> pool on page reclaim, the above OOM can be avoided. It does change the
> meaning of memory.limit_in_bytes in that the memcg tasks can now
> actually consume more pages (up to the shared global 20% dirty limit).

This seems like an easy change, but unfortunately the global 20% pool
has some shortcomings for my needs:

1. the global 20% pool is not moderated.  One cgroup can dominate it
    and deny service to other cgroups.

2. the global 20% pool is free, unaccounted memory.  Ideally cgroups only
    use the amount of memory specified in their memory.limit_in_bytes.  The
    goal is to sell portions of a system.  Global resources like the 20% pool
    are an undesirable system-wide tax shared by jobs that may not even
    perform buffered writes.

3. Setting aside 20% extra memory for system wide dirty buffers is a lot of
    memory.  This becomes a larger issue when the global dirty_ratio is
    higher than 20%.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
  2012-02-08  7:55 memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.) Greg Thelen
@ 2012-02-08  9:31 ` Wu Fengguang
  2012-02-08 20:54   ` Ying Han
  2012-02-10  5:51   ` Greg Thelen
  0 siblings, 2 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-08  9:31 UTC (permalink / raw)
  To: Greg Thelen
  Cc: Jan Kara, bsingharora, Hugh Dickins, Michal Hocko, linux-mm,
	Mel Gorman, Ying Han, hannes, lsf-pc, KAMEZAWA Hiroyuki

On Tue, Feb 07, 2012 at 11:55:05PM -0800, Greg Thelen wrote:
> On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> > If moving dirty pages out of the memcg to the 20% global dirty pages
> > pool on page reclaim, the above OOM can be avoided. It does change the
> > meaning of memory.limit_in_bytes in that the memcg tasks can now
> > actually consume more pages (up to the shared global 20% dirty limit).
> 
> This seems like an easy change, but unfortunately the global 20% pool
> has some shortcomings for my needs:
> 
> 1. the global 20% pool is not moderated.  One cgroup can dominate it
>     and deny service to other cgroups.

It is moderated by balance_dirty_pages() -- in terms of dirty ratelimit.
And you have the freedom to control the bandwidth allocation with some
async write I/O controller.

Even though there is no direct control of dirty pages, we can roughly
get it as the side effect of rate control. Given

        ratelimit_cgroup_A = 2 * ratelimit_cgroup_B

There will naturally be more dirty pages for cgroup A to be worked by
the flusher. And the dirty pages will be roughly balanced around

        nr_dirty_cgroup_A = 2 * nr_dirty_cgroup_B

when writeout bandwidths for their dirty pages are equal.
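
(Back-of-the-envelope, as one way to read this: assuming the flusher revisits
each cgroup's dirty inodes roughly every T seconds, each cgroup accumulates
about ratelimit * T of dirty data between visits, so the cgroup dirtying at
twice the rate will hold roughly twice as many dirty pages, matching the 2:1
balance above.)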

> 2. the global 20% pool is free, unaccounted memory.  Ideally cgroups only
>     use the amount of memory specified in their memory.limit_in_bytes.  The
>     goal is to sell portions of a system.  Global resource like the 20% are an
>     undesirable system-wide tax that's shared by jobs that may not even
>     perform buffered writes.

Right, it is the shortcoming.

> 3. Setting aside 20% extra memory for system wide dirty buffers is a lot of
>     memory.  This becomes a larger issue when the global dirty_ratio is
>     higher than 20%.

Yeah the global pool scheme does mean that you'd better allocate at
most 80% memory to individual memory cgroups, otherwise it's possible
for a tiny memcg doing dd writes to push dirty pages to global LRU and
*squeeze* the size of other memcgs.

However I guess it should be mitigated by the fact that

- we typically already reserve some space for the root memcg

- a 20% dirty ratio is mostly overkill for large memory systems.
  It's often enough to hold 10-30s worth of dirty data for them, which
  is 1-3GB for one 100MB/s disk. This is the reason vm.dirty_bytes was
  introduced: some people want a <1% dirty ratio (see the sketch below).
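
A minimal illustration (the absolute numbers are made up for this example;
note that setting vm.dirty_bytes clears vm.dirty_ratio and vice versa):

        # cap global dirty memory at ~2GB (about 20s of one 100MB/s disk)
        echo $((2 * 1024 * 1024 * 1024)) > /proc/sys/vm/dirty_bytes
        # and start background writeback at about half of that
        echo $((1 * 1024 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes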

Thanks,
Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
  2012-02-08  9:31 ` Wu Fengguang
@ 2012-02-08 20:54   ` Ying Han
  2012-02-09 13:50     ` Wu Fengguang
  2012-02-10  5:51   ` Greg Thelen
  1 sibling, 1 reply; 33+ messages in thread
From: Ying Han @ 2012-02-08 20:54 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Mel Gorman, hannes, lsf-pc, KAMEZAWA Hiroyuki

On Wed, Feb 8, 2012 at 1:31 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> On Tue, Feb 07, 2012 at 11:55:05PM -0800, Greg Thelen wrote:
>> On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
>> > If moving dirty pages out of the memcg to the 20% global dirty pages
>> > pool on page reclaim, the above OOM can be avoided. It does change the
>> > meaning of memory.limit_in_bytes in that the memcg tasks can now
>> > actually consume more pages (up to the shared global 20% dirty limit).
>>
>> This seems like an easy change, but unfortunately the global 20% pool
>> has some shortcomings for my needs:
>>
>> 1. the global 20% pool is not moderated.  One cgroup can dominate it
>>     and deny service to other cgroups.
>
> It is moderated by balance_dirty_pages() -- in terms of dirty ratelimit.
> And you have the freedom to control the bandwidth allocation with some
> async write I/O controller.
>
> Even though there is no direct control of dirty pages, we can roughly
> get it as the side effect of rate control. Given
>
>        ratelimit_cgroup_A = 2 * ratelimit_cgroup_B
>
> There will naturally be more dirty pages for cgroup A to be worked by
> the flusher. And the dirty pages will be roughly balanced around
>
>        nr_dirty_cgroup_A = 2 * nr_dirty_cgroup_B
>
> when writeout bandwidths for their dirty pages are equal.
>
>> 2. the global 20% pool is free, unaccounted memory.  Ideally cgroups only
>>     use the amount of memory specified in their memory.limit_in_bytes.  The
>>     goal is to sell portions of a system.  Global resource like the 20% are an
>>     undesirable system-wide tax that's shared by jobs that may not even
>>     perform buffered writes.
>
> Right, it is the shortcoming.
>
>> 3. Setting aside 20% extra memory for system wide dirty buffers is a lot of
>>     memory.  This becomes a larger issue when the global dirty_ratio is
>>     higher than 20%.
>
> Yeah the global pool scheme does mean that you'd better allocate at
> most 80% memory to individual memory cgroups, otherwise it's possible
> for a tiny memcg doing dd writes to push dirty pages to global LRU and
> *squeeze* the size of other memcgs.
>
> However I guess it should be mitigated by the fact that
>
> - we typically already reserve some space for the root memcg

Can you give more details on that? AFAIK, we don't treat the root cgroup
differently than other sub-cgroups, except that the root cgroup doesn't
have a limit.

In general, I don't like the idea of shared pool in root for all the
dirty pages.

Imagine a system which has nothing running under root and every
application runs within a sub-cgroup. It is easy to track and limit each
cgroup's memory usage, but not the pages being moved to root. We have
been experiencing difficulties tracking pages being re-parented to
root, and this will make it even harder.

--Ying

>
> - 20% dirty ratio is mostly an overkill for large memory systems.
>  It's often enough to hold 10-30s worth of dirty data for them, which
>  is 1-3GB for one 100MB/s disk. This is the reason vm.dirty_bytes is
>  introduced: someone wants to do some <1% dirty ratio.
>
> Thanks,
> Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
  2012-02-08 20:54   ` Ying Han
@ 2012-02-09 13:50     ` Wu Fengguang
  2012-02-13 18:40       ` Ying Han
  0 siblings, 1 reply; 33+ messages in thread
From: Wu Fengguang @ 2012-02-09 13:50 UTC (permalink / raw)
  To: Ying Han
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Mel Gorman, hannes, lsf-pc, KAMEZAWA Hiroyuki

On Wed, Feb 08, 2012 at 12:54:33PM -0800, Ying Han wrote:
> On Wed, Feb 8, 2012 at 1:31 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> > On Tue, Feb 07, 2012 at 11:55:05PM -0800, Greg Thelen wrote:
> >> On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> >> > If moving dirty pages out of the memcg to the 20% global dirty pages
> >> > pool on page reclaim, the above OOM can be avoided. It does change the
> >> > meaning of memory.limit_in_bytes in that the memcg tasks can now
> >> > actually consume more pages (up to the shared global 20% dirty limit).
> >>
> >> This seems like an easy change, but unfortunately the global 20% pool
> >> has some shortcomings for my needs:
> >>
> >> 1. the global 20% pool is not moderated.  One cgroup can dominate it
> >>     and deny service to other cgroups.
> >
> > It is moderated by balance_dirty_pages() -- in terms of dirty ratelimit.
> > And you have the freedom to control the bandwidth allocation with some
> > async write I/O controller.
> >
> > Even though there is no direct control of dirty pages, we can roughly
> > get it as the side effect of rate control. Given
> >
> >         ratelimit_cgroup_A = 2 * ratelimit_cgroup_B
> >
> > There will naturally be more dirty pages for cgroup A to be worked by
> > the flusher. And the dirty pages will be roughly balanced around
> >
> >         nr_dirty_cgroup_A = 2 * nr_dirty_cgroup_B
> >
> > when writeout bandwidths for their dirty pages are equal.
> >
> >> 2. the global 20% pool is free, unaccounted memory.  Ideally cgroups only
> >>     use the amount of memory specified in their memory.limit_in_bytes.  The
> >>     goal is to sell portions of a system.  Global resource like the 20% are an
> >>     undesirable system-wide tax that's shared by jobs that may not even
> >>     perform buffered writes.
> >
> > Right, it is the shortcoming.
> >
> >> 3. Setting aside 20% extra memory for system wide dirty buffers is a lot of
> >>     memory.  This becomes a larger issue when the global dirty_ratio is
> >>     higher than 20%.
> >
> > Yeah the global pool scheme does mean that you'd better allocate at
> > most 80% memory to individual memory cgroups, otherwise it's possible
> > for a tiny memcg doing dd writes to push dirty pages to global LRU and
> > *squeeze* the size of other memcgs.
> >
> > However I guess it should be mitigated by the fact that
> >
> > - we typically already reserve some space for the root memcg
> 
> Can you give more details on that? AFAIK, we don't treat root cgroup
> differently than other sub-cgroups, except root cgroup doesn't have
> limit.

OK. I'd imagine this to be the typical usage for desktop and quite a
few servers: a few cgroups are employed to limit the resource usage
for selected tasks (such as backups, background GUI tasks, cron tasks,
etc.). These systems are still running mainly in the global context.

> In general, I don't like the idea of shared pool in root for all the
> dirty pages.
> 
> Imagining a system which has nothing running under root and every
> application runs within sub-cgroup. It is easy to track and limit each
> cgroup's memory usage, but not the pages being moved to root. We have
> been experiencing difficulties of tracking pages being re-parented to
> root, and this will make it even harder.

So you want to push memcg allocations to the hardware limits. This is
a worthwhile target for cloud servers that run a number of well
contained jobs.

I guess it can be achieved reasonably well with the global shared
dirty pool.  Let's discuss the two major cases.

1) no change of behavior

For example, suppose the system memory is divided equally among 10 cgroups,
each running 1 dd. In this case, the dirty pages will be contained
within the memcg LRUs. Page reclaim rarely encounters any dirty pages.
There is no moving to the global LRU, so no side effect at all.

2) small memcg squeezing other memcg(s)

Now suppose system memory is divided between 1 small memcg A and 1 large
memcg B, each running a dd task. In this case the dirty pages from A will
be moved to the global LRU, and global page reclaim will be triggered.

In the end it will be balanced around

- global LRU: 10% memory (which are A's dirty pages)
- memcg B: 90% memory
- memcg A: a tiny ignorable fraction of memory

Now job B uses 10% less memory than without the global dirty pool scheme.
I guess this is bad for some types of jobs.

However, my question is: will the typical demand be more flexible?
Something like a "minimal" and "recommended" setup: "this job requires
at least XXX memory and runs better with YYY memory", rather than some
fixed-size memory allocation.

The minimal requirement should be trivially satisfied by adding a
memcg watermark that protects the memcg LRU from being reclaimed
once the memcg drops below it.

Then the cloud server could be configured to

        sum(memcg.limit_in_bytes) / memtotal = 100%
        sum(memcg.minimal_size)   / memtotal < 100% - dirty_ratio

Which makes for a simple and flexibly partitioned system (a worked example below).
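
A purely hypothetical worked example of such a partition (the "minimal
size" / watermark knob is only the proposal above, not an existing
interface; the sizes are invented):

        # 100GB box, global dirty_ratio = 20%  =>  up to ~20GB shared dirty pool
        mkdir /cgroup/A /cgroup/B
        echo 60G > /cgroup/A/memory.limit_in_bytes   # sum(limit_in_bytes) = 100GB
        echo 40G > /cgroup/B/memory.limit_in_bytes
        # proposed minimal sizes (reclaim watermarks): A=50GB, B=25GB
        #   sum(minimal_size) = 75GB < 100GB - 20GB dirty pool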

Thanks,
Fengguang

> > - 20% dirty ratio is mostly an overkill for large memory systems.
> >   It's often enough to hold 10-30s worth of dirty data for them, which
> >   is 1-3GB for one 100MB/s disk. This is the reason vm.dirty_bytes is
> >   introduced: someone wants to do some <1% dirty ratio.
> >
> > Thanks,
> > Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
  2012-02-08  9:31 ` Wu Fengguang
  2012-02-08 20:54   ` Ying Han
@ 2012-02-10  5:51   ` Greg Thelen
  2012-02-10  5:52     ` Greg Thelen
  2012-02-10 11:47     ` Wu Fengguang
  1 sibling, 2 replies; 33+ messages in thread
From: Greg Thelen @ 2012-02-10  5:51 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, bsingharora, Hugh Dickins, Michal Hocko, linux-mm,
	Mel Gorman, Ying Han, hannes, KAMEZAWA Hiroyuki

(removed lsf-pc@lists.linux-foundation.org because this really isn't
program committee matter)

On Wed, Feb 8, 2012 at 1:31 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> On Tue, Feb 07, 2012 at 11:55:05PM -0800, Greg Thelen wrote:
>> On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
>> > If moving dirty pages out of the memcg to the 20% global dirty pages
>> > pool on page reclaim, the above OOM can be avoided. It does change the
>> > meaning of memory.limit_in_bytes in that the memcg tasks can now
>> > actually consume more pages (up to the shared global 20% dirty limit).
>>
>> This seems like an easy change, but unfortunately the global 20% pool
>> has some shortcomings for my needs:
>>
>> 1. the global 20% pool is not moderated.  One cgroup can dominate it
>>     and deny service to other cgroups.
>
> It is moderated by balance_dirty_pages() -- in terms of dirty ratelimit.
> And you have the freedom to control the bandwidth allocation with some
> async write I/O controller.
>
> Even though there is no direct control of dirty pages, we can roughly
> get it as the side effect of rate control. Given
>
>        ratelimit_cgroup_A = 2 * ratelimit_cgroup_B
>
> There will naturally be more dirty pages for cgroup A to be worked by
> the flusher. And the dirty pages will be roughly balanced around
>
>        nr_dirty_cgroup_A = 2 * nr_dirty_cgroup_B
>
> when writeout bandwidths for their dirty pages are equal.
>
>> 2. the global 20% pool is free, unaccounted memory.  Ideally cgroups only
>>     use the amount of memory specified in their memory.limit_in_bytes.  The
>>     goal is to sell portions of a system.  Global resource like the 20% are an
>>     undesirable system-wide tax that's shared by jobs that may not even
>>     perform buffered writes.
>
> Right, it is the shortcoming.
>
>> 3. Setting aside 20% extra memory for system wide dirty buffers is a lot of
>>     memory.  This becomes a larger issue when the global dirty_ratio is
>>     higher than 20%.
>
> Yeah the global pool scheme does mean that you'd better allocate at
> most 80% memory to individual memory cgroups, otherwise it's possible
> for a tiny memcg doing dd writes to push dirty pages to global LRU and
> *squeeze* the size of other memcgs.
>
> However I guess it should be mitigated by the fact that
>
> - we typically already reserve some space for the root memcg
>
> - 20% dirty ratio is mostly an overkill for large memory systems.
>  It's often enough to hold 10-30s worth of dirty data for them, which
>  is 1-3GB for one 100MB/s disk. This is the reason vm.dirty_bytes is
>  introduced: someone wants to do some <1% dirty ratio.

Have you encountered situations where it's desirable to have more than
a 20% dirty ratio?  I imagine that if the dirty working set is larger
than 20%, increasing the dirty ratio would prevent rewrites.

Leaking dirty memory to a root global dirty pool is concerning.  I
suspect that under some conditions such pages may remain in root after
writeback indefinitely as clean pages.  I admit this may not be the
common case, but having such leaks into root can allow low priority
jobs to use the entire machine, denying service to higher priority
jobs.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
  2012-02-10  5:51   ` Greg Thelen
@ 2012-02-10  5:52     ` Greg Thelen
  2012-02-10  9:20       ` Wu Fengguang
  2012-02-10 11:47     ` Wu Fengguang
  1 sibling, 1 reply; 33+ messages in thread
From: Greg Thelen @ 2012-02-10  5:52 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: KAMEZAWA Hiroyuki, linux-mm, hannes, Michal Hocko, bsingharora,
	Hugh Dickins, Ying Han, Mel Gorman

(removed lsf-pc@lists.linux-foundation.org because this really isn't
program committee matter)

On Wed, Feb 1, 2012 at 11:52 PM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> Unfortunately the memcg partitioning could fundamentally make the
> dirty throttling more bumpy.
>
> Imagine 10 memcgs each with
>
> - memcg_dirty_limit=50MB
> - 1 dd dirty task
>
> The flusher thread will be working on 10 inodes in turn, each time
> grabbing the next inode and taking ~0.5s to write ~50MB of its dirty
> pages to the disk. So each inode will be flushed on every ~5s.

Does the flusher thread need to write 50MB/inode in this case?  Would
there be problems interleaving writes by declaring some max write
limit (e.g. 8 MiB/write)?  Such interleaving would be beneficial if
there are multiple memcgs expecting service from the single bdi flusher
thread.  I suspect certain filesystems might have increased
fragmentation with this, but I am not sure if appending writes can
easily expand an extent.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
  2012-02-10  5:52     ` Greg Thelen
@ 2012-02-10  9:20       ` Wu Fengguang
  0 siblings, 0 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-10  9:20 UTC (permalink / raw)
  To: Greg Thelen
  Cc: KAMEZAWA Hiroyuki, linux-mm, hannes, Michal Hocko, bsingharora,
	Hugh Dickins, Ying Han, Mel Gorman

On Thu, Feb 09, 2012 at 09:52:03PM -0800, Greg Thelen wrote:
> (removed lsf-pc@lists.linux-foundation.org because this really isn't
> program committee matter)
> 
> On Wed, Feb 1, 2012 at 11:52 PM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> > Unfortunately the memcg partitioning could fundamentally make the
> > dirty throttling more bumpy.
> >
> > Imagine 10 memcgs each with
> >
> > - memcg_dirty_limit=50MB
> > - 1 dd dirty task
> >
> > The flusher thread will be working on 10 inodes in turn, each time
> > grabbing the next inode and taking ~0.5s to write ~50MB of its dirty
> > pages to the disk. So each inode will be flushed on every ~5s.
> 
> Does the flusher thread need to write 50MB/inode in this case?
> Would there be problems interleaving writes by declaring some max
> write limit (e.g. 8 MiB/write).  

ext4 actually forces write chunk size to be >=128MB for better write
throughput and less fragmentation, which also helps read performance.

Other filesystems use the VFS-computed chunk size, which is defined
in writeback_chunk_size() as write_bandwidth/2.
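
(At the ~100MB/s disk bandwidth used in the examples above, that works out
to a chunk of roughly 50MB, i.e. about half a second of writeback per inode.)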

> Such interleaving would be beneficial if there are multiple memcg
> expecting service from the single bdi flusher thread.

Right, reducing the writeback chunk size will immediately improve the
smoothness of the memcg's dirty page writeout.

> I suspect certain filesystems might have increased fragmentation
> with this, but I am not sure if appending writes can easily expand
> an extent.

To be exact, it's ext4 that will suffer from fragmentation with smaller
chunk sizes, because it uses the size passed to ->writepages() as a hint
for allocating extents. Perhaps this heuristic can be improved.

XFS does not have the fragmentation issue (at least it is not affected
by the chunk size). However, my old tests show that it incurs far fewer
seeks and performs noticeably better with a raised write chunk size.

Thanks,
Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
  2012-02-10  5:51   ` Greg Thelen
  2012-02-10  5:52     ` Greg Thelen
@ 2012-02-10 11:47     ` Wu Fengguang
  2012-02-11 12:44       ` reclaim the LRU lists full of dirty/writeback pages Wu Fengguang
  1 sibling, 1 reply; 33+ messages in thread
From: Wu Fengguang @ 2012-02-10 11:47 UTC (permalink / raw)
  To: Greg Thelen
  Cc: Jan Kara, bsingharora, Hugh Dickins, Michal Hocko, linux-mm,
	Mel Gorman, Ying Han, hannes, KAMEZAWA Hiroyuki

On Thu, Feb 09, 2012 at 09:51:31PM -0800, Greg Thelen wrote:
> On Wed, Feb 8, 2012 at 1:31 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> > On Tue, Feb 07, 2012 at 11:55:05PM -0800, Greg Thelen wrote:
> >> On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> >> > If moving dirty pages out of the memcg to the 20% global dirty pages
> >> > pool on page reclaim, the above OOM can be avoided. It does change the
> >> > meaning of memory.limit_in_bytes in that the memcg tasks can now
> >> > actually consume more pages (up to the shared global 20% dirty limit).
> >>
> >> This seems like an easy change, but unfortunately the global 20% pool
> >> has some shortcomings for my needs:
> >>
> >> 1. the global 20% pool is not moderated.  One cgroup can dominate it
> >>     and deny service to other cgroups.
> >
> > It is moderated by balance_dirty_pages() -- in terms of dirty ratelimit.
> > And you have the freedom to control the bandwidth allocation with some
> > async write I/O controller.
> >
> > Even though there is no direct control of dirty pages, we can roughly
> > get it as the side effect of rate control. Given
> >
> >         ratelimit_cgroup_A = 2 * ratelimit_cgroup_B
> >
> > There will naturally be more dirty pages for cgroup A to be worked by
> > the flusher. And the dirty pages will be roughly balanced around
> >
> >         nr_dirty_cgroup_A = 2 * nr_dirty_cgroup_B
> >
> > when writeout bandwidths for their dirty pages are equal.
> >
> >> 2. the global 20% pool is free, unaccounted memory.  Ideally cgroups only
> >>     use the amount of memory specified in their memory.limit_in_bytes.  The
> >>     goal is to sell portions of a system.  Global resource like the 20% are an
> >>     undesirable system-wide tax that's shared by jobs that may not even
> >>     perform buffered writes.
> >
> > Right, it is the shortcoming.
> >
> >> 3. Setting aside 20% extra memory for system wide dirty buffers is a lot of
> >>     memory.  This becomes a larger issue when the global dirty_ratio is
> >>     higher than 20%.
> >
> > Yeah the global pool scheme does mean that you'd better allocate at
> > most 80% memory to individual memory cgroups, otherwise it's possible
> > for a tiny memcg doing dd writes to push dirty pages to global LRU and
> > *squeeze* the size of other memcgs.
> >
> > However I guess it should be mitigated by the fact that
> >
> > - we typically already reserve some space for the root memcg
> >
> > - 20% dirty ratio is mostly an overkill for large memory systems.
> >   It's often enough to hold 10-30s worth of dirty data for them, which
> >   is 1-3GB for one 100MB/s disk. This is the reason vm.dirty_bytes is
> >   introduced: someone wants to do some <1% dirty ratio.
> 
> Have you encountered situations where it's desirable to have more than
> 20% dirty ratio?  I imagine that if the dirty working set is larger
> than 20% increasing dirty ratio would prevent rewrites.

I haven't encountered one personally, but such situations surely exist.

One may need to dirty an in-memory data set sized at some 40% of memory
without being throttled or triggering lots of I/O. In this case, increasing
the dirty ratio to 40% will do the job.
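
For instance (a minimal sketch, reusing the 40% figure from above):

        echo 40 > /proc/sys/vm/dirty_ratio             # allow up to 40% of memory to be dirty
        echo 20 > /proc/sys/vm/dirty_background_ratio  # and kick off background writeback later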

The less obvious condition for this to work is that the workload should
avoid heavy page allocation while working on the large fixed data set.
Otherwise page reclaim will keep running into dirty pages and behave badly.

So it still looks pretty compatible with the "reparent to root memcg
on page reclaim" scheme if that job is put into some memcg. There will
be almost no dirty pages encountered during page reclaim, hence no
moves and no side effects.

But if there is another job doing heavy dirtying, that job will eat up
the global 40% dirty limit and heavily impact the above job. This is
one case where the memcg dirty ratio can help a lot.

> Leaking dirty memory to a root global dirty pool is concerning.  I
> suspect that under some conditions such pages may remain remain in
> root after writeback indefinitely as clean pages.  I admit this may
> not be the common case, but having such leaks into root can allow low
> priority jobs access entire machine denying service to higher priority
> jobs.

You are right. DoS can be achieved by

        loop {
                dirty one more page
                access all previously dirtied pages
        }

Assuming only !PG_referenced pages are moved to the global dirty pool,
it requires someone to access the page in order for it to stay in the
global LRU for one more cycle, and to access it frequently to keep
it in the global LRU indefinitely.

So yes, it's possible for some evil job to DoS the whole box.  It will
be an issue when hosting jobs from untrusted sources (i.e. Amazon-style
cloud services), which I guess should be running inside KVM?

It should hardly happen in real workloads. If some job does manage to
do so, it probably means some kind of mis-configuration: the memcg is
configured way too small to hold the job's working set.

Thanks,
Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* reclaim the LRU lists full of dirty/writeback pages
  2012-02-10 11:47     ` Wu Fengguang
@ 2012-02-11 12:44       ` Wu Fengguang
  2012-02-11 14:55         ` Rik van Riel
  2012-02-14 10:19         ` Mel Gorman
  0 siblings, 2 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-11 12:44 UTC (permalink / raw)
  To: Greg Thelen
  Cc: Jan Kara, bsingharora, Hugh Dickins, Michal Hocko, linux-mm,
	Mel Gorman, Ying Han, hannes, KAMEZAWA Hiroyuki, Rik van Riel,
	Minchan Kim

On Fri, Feb 10, 2012 at 07:47:06PM +0800, Wu Fengguang wrote:
> On Thu, Feb 09, 2012 at 09:51:31PM -0800, Greg Thelen wrote:

> > Have you encountered situations where it's desirable to have more than
> > 20% dirty ratio?  I imagine that if the dirty working set is larger
> > than 20% increasing dirty ratio would prevent rewrites.

> One may need to dirty some 40% sized in-memory data set and don't want
> to be throttled and trigger lots of I/O. In this case increasing the
> dirty ratio to 40% will do the job.
 
> But if there is another job doing heavy dirtying, that job will eat up
> the global 40% dirty limit and heavily impact the above job. This is
> one case the memcg dirty ratio can help a lot.
> 
> > Leaking dirty memory to a root global dirty pool is concerning.  I
> > suspect that under some conditions such pages may remain remain in
> > root after writeback indefinitely as clean pages.  I admit this may
> > not be the common case, but having such leaks into root can allow low
> > priority jobs access entire machine denying service to higher priority
> > jobs.
> 
> You are right. DoS can be achieved by
> 
>         loop {
>                 dirty one more page
>                 access all previously dirtied pages
>         }

So there are situations that prefer the dirty pages to be strictly
contained within the memcg.

For these use cases it looks worthwhile to improve the page reclaim
algorithms to handle the 100% dirty zone well. I'd regard this as a
much more promising direction than a memcg dirty ratio, because effort
here is going to benefit the kernel as a whole.

The below patch aims to be the first step towards the goal.  It turns
out to work pretty well for avoiding OOM, with reasonably good I/O
throughput and low CPU overheads.

Hopefully page reclaim can be further improved to make the 100%
dirty zone a seriously supported and well-performing case.

Thanks,
Fengguang
---
Subject: writeback: introduce the pageout work
Date: Thu Jul 29 14:41:19 CST 2010

This relays file pageout IOs to the flusher threads.

The ultimate target is to gracefully handle the LRU lists full of
dirty/writeback pages.

1) I/O efficiency

The flusher will piggyback around 1MB of nearby dirty pages for I/O (XXX:
make the chunk size adaptive to the bdi write bandwidth).

This takes advantage of the temporal/spatial locality in most workloads: the
nearby pages of one file are typically populated into the LRU at the same
time, hence will likely be close to each other in the LRU list. Writing
them in one shot helps clean more pages effectively for page reclaim.

2) OOM avoidance and scan rate control

Typically we do the LRU scan w/o rate control and quickly get enough clean
pages when the LRU lists are not full of dirty pages.

Or we can still get a number of freshly cleaned pages (moved to LRU tail
by end_page_writeback()) when the queued pageout I/O is completed within
tens of milli-seconds.

However if the LRU list is small and full of dirty pages, it can be
fully scanned in no time and the system can go OOM before the flusher
manages to clean enough pages.

Here a simple yet reliable scheme is employed to avoid OOM and keep scan
rate in sync with the I/O rate:

	if (PageReclaim(page))
		congestion_wait();

PG_reclaim plays the key role. When a dirty page is encountered, we
queue I/O for it, set PG_reclaim and put it back at the LRU head.
So if PG_reclaim pages are encountered again, it means the dirty page
has not yet been cleaned by the flusher after a full zone scan. It
indicates we are scanning faster than the I/O completes and shall take
a nap.

The runtime behavior on a fully dirtied small LRU list would be:
It will start with a quick scan of the list, queuing all pages for I/O.
Then the scan will be slowed down by the PG_reclaim pages *adaptively*
to match the I/O bandwidth.

3) writeback work coordinations

To avoid memory allocations at page reclaim, a mempool for struct
wb_writeback_work is created.

wakeup_flusher_threads() is removed because it can easily delay the
more targeted pageout works and even exhaust the mempool reservations.
It's also often not I/O efficient, since it submits writeback works with
small ->nr_pages.

Background/periodic works will quit automatically (as done in another
patch), so as to clean the pages under reclaim ASAP. However for now the
sync work can still block us for a long time.

Jan Kara: limit the search scope. Note that the limited search and work
pool is not a big problem: 1000 IOs in flight are typically more than
enough to saturate the disk. And the overhead of searching the work
list didn't even show up in the perf report.

4) test case

Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):

	mkdir /cgroup/x
	echo 100M > /cgroup/x/memory.limit_in_bytes
	echo $$ > /cgroup/x/tasks

	for i in `seq 2`
	do
		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
	done

Before patch, the dd tasks are quickly OOM killed.
After patch, they run well with reasonably good performance and overheads:

1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s

iostat -kx 1

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00  178.00     0.00 89568.00  1006.38    74.35  417.71   4.80  85.40
sda               0.00     2.00    0.00  191.00     0.00 94428.00   988.77    53.34  219.03   4.34  82.90
sda               0.00    20.00    0.00  196.00     0.00 97712.00   997.06    71.11  337.45   4.77  93.50
sda               0.00     5.00    0.00  175.00     0.00 84648.00   967.41    54.03  316.44   5.06  88.60
sda               0.00     0.00    0.00  186.00     0.00 92432.00   993.89    56.22  267.54   5.38 100.00
sda               0.00     1.00    0.00  183.00     0.00 90156.00   985.31    37.99  325.55   4.33  79.20
sda               0.00     0.00    0.00  175.00     0.00 88692.00  1013.62    48.70  218.43   4.69  82.10
sda               0.00     0.00    0.00  196.00     0.00 97528.00   995.18    43.38  236.87   5.10 100.00
sda               0.00     0.00    0.00  179.00     0.00 88648.00   990.48    45.83  285.43   5.59 100.00
sda               0.00     0.00    0.00  178.00     0.00 88500.00   994.38    28.28  158.89   4.99  88.80
sda               0.00     0.00    0.00  194.00     0.00 95852.00   988.16    32.58  167.39   5.15 100.00
sda               0.00     2.00    0.00  215.00     0.00 105996.00   986.01    41.72  201.43   4.65 100.00
sda               0.00     4.00    0.00  173.00     0.00 84332.00   974.94    50.48  260.23   5.76  99.60
sda               0.00     0.00    0.00  182.00     0.00 90312.00   992.44    36.83  212.07   5.49 100.00
sda               0.00     8.00    0.00  195.00     0.00 95940.50   984.01    50.18  221.06   5.13 100.00
sda               0.00     1.00    0.00  220.00     0.00 108852.00   989.56    40.99  202.68   4.55 100.00
sda               0.00     2.00    0.00  161.00     0.00 80384.00   998.56    37.19  268.49   6.21 100.00
sda               0.00     4.00    0.00  182.00     0.00 90830.00   998.13    50.58  239.77   5.49 100.00
sda               0.00     0.00    0.00  197.00     0.00 94877.00   963.22    36.68  196.79   5.08 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00   15.08   33.92    0.00   50.75
           0.25    0.00   14.54   35.09    0.00   50.13
           0.50    0.00   13.57   32.41    0.00   53.52
           0.50    0.00   11.28   36.84    0.00   51.38
           0.50    0.00   15.75   32.00    0.00   51.75
           0.50    0.00   10.50   34.00    0.00   55.00
           0.50    0.00   17.63   27.46    0.00   54.41
           0.50    0.00   15.08   30.90    0.00   53.52
           0.50    0.00   11.28   32.83    0.00   55.39
           0.75    0.00   16.79   26.82    0.00   55.64
           0.50    0.00   16.08   29.15    0.00   54.27
           0.50    0.00   13.50   30.50    0.00   55.50
           0.50    0.00   14.32   35.18    0.00   50.00
           0.50    0.00   12.06   33.92    0.00   53.52
           0.50    0.00   17.29   30.58    0.00   51.63
           0.50    0.00   15.08   29.65    0.00   54.77
           0.50    0.00   12.53   29.32    0.00   57.64
           0.50    0.00   15.29   31.83    0.00   52.38

The global dd iostat for comparison:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   143.09  684.48   5.29 100.00
sda               0.00     0.00    0.00  208.00     0.00 105480.00  1014.23   143.06  733.29   4.81 100.00
sda               0.00     0.00    0.00  161.00     0.00 81924.00  1017.69   141.71  757.79   6.21 100.00
sda               0.00     0.00    0.00  217.00     0.00 109580.00  1009.95   143.09  749.55   4.61 100.10
sda               0.00     0.00    0.00  187.00     0.00 94728.00  1013.13   144.31  773.67   5.35 100.00
sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   144.14  742.00   5.29 100.00
sda               0.00     0.00    0.00  177.00     0.00 90032.00  1017.31   143.32  656.59   5.65 100.00
sda               0.00     0.00    0.00  215.00     0.00 108640.00  1010.60   142.90  817.54   4.65 100.00
sda               0.00     2.00    0.00  166.00     0.00 83858.00  1010.34   143.64  808.61   6.02 100.00
sda               0.00     0.00    0.00  186.00     0.00 92813.00   997.99   141.18  736.95   5.38 100.00
sda               0.00     0.00    0.00  206.00     0.00 104456.00  1014.14   146.27  729.33   4.85 100.00
sda               0.00     0.00    0.00  213.00     0.00 107024.00  1004.92   143.25  705.70   4.69 100.00
sda               0.00     0.00    0.00  188.00     0.00 95748.00  1018.60   141.82  764.78   5.32 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.51    0.00   11.22   52.30    0.00   35.97
           0.25    0.00   10.15   52.54    0.00   37.06
           0.25    0.00    5.01   56.64    0.00   38.10
           0.51    0.00   15.15   43.94    0.00   40.40
           0.25    0.00   12.12   48.23    0.00   39.39
           0.51    0.00   11.20   53.94    0.00   34.35
           0.26    0.00    9.72   51.41    0.00   38.62
           0.76    0.00    9.62   50.63    0.00   38.99
           0.51    0.00   10.46   53.32    0.00   35.71
           0.51    0.00    9.41   51.91    0.00   38.17
           0.25    0.00   10.69   49.62    0.00   39.44
           0.51    0.00   12.21   52.67    0.00   34.61
           0.51    0.00   11.45   53.18    0.00   34.86

Note that this is data for XFS. ext4 seems to have some problem with the
workload: the majority of pages are found to be writeback pages, and the
flusher ends up blocking on the unconditional wait_on_page_writeback()
in write_cache_pages_da() from time to time...

XXX: commit NFS unstable pages via write_inode()
XXX: the added congestion_wait() may be undesirable in some situations

CC: Jan Kara <jack@suse.cz>
CC: Mel Gorman <mgorman@suse.de>
CC: Rik van Riel <riel@redhat.com>
CC: Greg Thelen <gthelen@google.com>
CC: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c                |  165 ++++++++++++++++++++++++++++-
 include/linux/writeback.h        |    4 
 include/trace/events/writeback.h |   12 +-
 mm/vmscan.c                      |   17 +-
 4 files changed, 184 insertions(+), 14 deletions(-)

--- linux.orig/mm/vmscan.c	2012-02-03 21:42:21.000000000 +0800
+++ linux/mm/vmscan.c	2012-02-11 17:28:54.000000000 +0800
@@ -813,6 +813,8 @@ static unsigned long shrink_page_list(st
 
 		if (PageWriteback(page)) {
 			nr_writeback++;
+			if (PageReclaim(page))
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
 			/*
 			 * Synchronous reclaim cannot queue pages for
 			 * writeback due to the possibility of stack overflow
@@ -874,12 +876,15 @@ static unsigned long shrink_page_list(st
 			nr_dirty++;
 
 			/*
-			 * Only kswapd can writeback filesystem pages to
-			 * avoid risk of stack overflow but do not writeback
-			 * unless under significant pressure.
+			 * run into the visited page again: we are scanning
+			 * faster than the flusher can writeout dirty pages
 			 */
-			if (page_is_file_cache(page) &&
-					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
+			if (page_is_file_cache(page) && PageReclaim(page)) {
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
+				goto keep_locked;
+			}
+			if (page_is_file_cache(page) && mapping &&
+			    flush_inode_page(mapping, page, true) >= 0) {
 				/*
 				 * Immediately reclaim when written back.
 				 * Similar in principal to deactivate_page()
@@ -2382,8 +2387,6 @@ static unsigned long do_try_to_free_page
 		 */
 		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
 		if (total_scanned > writeback_threshold) {
-			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
-						WB_REASON_TRY_TO_FREE_PAGES);
 			sc->may_writepage = 1;
 		}
 
--- linux.orig/fs/fs-writeback.c	2012-02-03 21:42:16.000000000 +0800
+++ linux/fs/fs-writeback.c	2012-02-11 18:24:24.000000000 +0800
@@ -35,12 +35,21 @@
 #define MIN_WRITEBACK_PAGES	(4096UL >> (PAGE_CACHE_SHIFT - 10))
 
 /*
+ * When flushing an inode page (for page reclaim), try to piggy back up to
+ * 1MB nearby pages for IO efficiency. These pages will have good opportunity
+ * to be in the same LRU list.
+ */
+#define WRITE_AROUND_PAGES	(1024UL >> (PAGE_CACHE_SHIFT - 10))
+
+/*
  * Passed into wb_writeback(), essentially a subset of writeback_control
  */
 struct wb_writeback_work {
 	long nr_pages;
 	struct super_block *sb;
 	unsigned long *older_than_this;
+	struct inode *inode;
+	pgoff_t offset;
 	enum writeback_sync_modes sync_mode;
 	unsigned int tagged_writepages:1;
 	unsigned int for_kupdate:1;
@@ -65,6 +74,27 @@ struct wb_writeback_work {
  */
 int nr_pdflush_threads;
 
+static mempool_t *wb_work_mempool;
+
+static void *wb_work_alloc(gfp_t gfp_mask, void *pool_data)
+{
+	/*
+	 * bdi_flush_inode_range() may be called on page reclaim
+	 */
+	if (current->flags & PF_MEMALLOC)
+		return NULL;
+
+	return kmalloc(sizeof(struct wb_writeback_work), gfp_mask);
+}
+
+static __init int wb_work_init(void)
+{
+	wb_work_mempool = mempool_create(1024,
+					 wb_work_alloc, mempool_kfree, NULL);
+	return wb_work_mempool ? 0 : -ENOMEM;
+}
+fs_initcall(wb_work_init);
+
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @bdi: the device's backing_dev_info structure.
@@ -129,7 +159,7 @@ __bdi_start_writeback(struct backing_dev
 	 * This is WB_SYNC_NONE writeback, so if allocation fails just
 	 * wakeup the thread for old dirty data writeback
 	 */
-	work = kzalloc(sizeof(*work), GFP_ATOMIC);
+	work = mempool_alloc(wb_work_mempool, GFP_NOWAIT);
 	if (!work) {
 		if (bdi->wb.task) {
 			trace_writeback_nowork(bdi);
@@ -138,6 +168,7 @@ __bdi_start_writeback(struct backing_dev
 		return;
 	}
 
+	memset(work, 0, sizeof(*work));
 	work->sync_mode	= WB_SYNC_NONE;
 	work->nr_pages	= nr_pages;
 	work->range_cyclic = range_cyclic;
@@ -186,6 +217,114 @@ void bdi_start_background_writeback(stru
 	spin_unlock_bh(&bdi->wb_lock);
 }
 
+static bool extend_writeback_range(struct wb_writeback_work *work,
+				   pgoff_t offset)
+{
+	pgoff_t end = work->offset + work->nr_pages;
+
+	if (offset >= work->offset && offset < end)
+		return true;
+
+	if (work->nr_pages >= 8 * WRITE_AROUND_PAGES)
+		return false;
+
+	/* the unsigned comparison helps eliminate one compare */
+	if (work->offset - offset < WRITE_AROUND_PAGES) {
+		work->nr_pages += WRITE_AROUND_PAGES;
+		work->offset -= WRITE_AROUND_PAGES;
+		return true;
+	}
+
+	if (offset - end < WRITE_AROUND_PAGES) {
+		work->nr_pages += WRITE_AROUND_PAGES;
+		return true;
+	}
+
+	return false;
+}
+
+/*
+ * schedule writeback on a range of inode pages.
+ */
+static struct wb_writeback_work *
+bdi_flush_inode_range(struct backing_dev_info *bdi,
+		      struct inode *inode,
+		      pgoff_t offset,
+		      pgoff_t len,
+		      bool wait)
+{
+	struct wb_writeback_work *work;
+
+	if (!igrab(inode))
+		return ERR_PTR(-ENOENT);
+
+	work = mempool_alloc(wb_work_mempool, wait ? GFP_NOIO : GFP_NOWAIT);
+	if (!work)
+		return ERR_PTR(-ENOMEM);
+
+	memset(work, 0, sizeof(*work));
+	work->sync_mode		= WB_SYNC_NONE;
+	work->inode		= inode;
+	work->offset		= offset;
+	work->nr_pages		= len;
+	work->reason		= WB_REASON_PAGEOUT;
+
+	bdi_queue_work(bdi, work);
+
+	return work;
+}
+
+/*
+ * Called by page reclaim code to flush the dirty page ASAP. Do write-around to
+ * improve IO throughput. The nearby pages will have good chance to reside in
+ * the same LRU list that vmscan is working on, and even close to each other
+ * inside the LRU list in the common case of sequential read/write.
+ *
+ * ret > 0: success, found/reused a previous writeback work
+ * ret = 0: success, allocated/queued a new writeback work
+ * ret < 0: failed
+ */
+long flush_inode_page(struct address_space *mapping,
+		      struct page *page,
+		      bool wait)
+{
+	struct backing_dev_info *bdi = mapping->backing_dev_info;
+	struct inode *inode = mapping->host;
+	pgoff_t offset = page->index;
+	pgoff_t len = 0;
+	struct wb_writeback_work *work;
+	long ret = -ENOENT;
+
+	if (unlikely(!inode))
+		goto out;
+
+	len = 1;
+	spin_lock_bh(&bdi->wb_lock);
+	list_for_each_entry_reverse(work, &bdi->work_list, list) {
+		if (work->inode != inode)
+			continue;
+		if (extend_writeback_range(work, offset)) {
+			ret = len;
+			offset = work->offset;
+			len = work->nr_pages;
+			break;
+		}
+		if (len++ > 100)	/* limit search depth */
+			break;
+	}
+	spin_unlock_bh(&bdi->wb_lock);
+
+	if (ret > 0)
+		goto out;
+
+	offset = round_down(offset, WRITE_AROUND_PAGES);
+	len = WRITE_AROUND_PAGES;
+	work = bdi_flush_inode_range(bdi, inode, offset, len, wait);
+	ret = IS_ERR(work) ? PTR_ERR(work) : 0;
+out:
+	return ret;
+}
+
 /*
  * Remove the inode from the writeback list it is on.
  */
@@ -833,6 +972,23 @@ static unsigned long get_nr_dirty_pages(
 		get_nr_dirty_inodes();
 }
 
+static long wb_flush_inode(struct bdi_writeback *wb,
+			   struct wb_writeback_work *work)
+{
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_NONE,
+		.nr_to_write = LONG_MAX,
+		.range_start = work->offset << PAGE_CACHE_SHIFT,
+		.range_end = (work->offset + work->nr_pages - 1)
+						<< PAGE_CACHE_SHIFT,
+	};
+
+	do_writepages(work->inode->i_mapping, &wbc);
+	iput(work->inode);
+
+	return LONG_MAX - wbc.nr_to_write;
+}
+
 static long wb_check_background_flush(struct bdi_writeback *wb)
 {
 	if (over_bground_thresh(wb->bdi)) {
@@ -905,7 +1061,10 @@ long wb_do_writeback(struct bdi_writebac
 
 		trace_writeback_exec(bdi, work);
 
-		wrote += wb_writeback(wb, work);
+		if (work->inode)
+			wrote += wb_flush_inode(wb, work);
+		else
+			wrote += wb_writeback(wb, work);
 
 		/*
 		 * Notify the caller of completion if this is a synchronous
@@ -914,7 +1073,7 @@ long wb_do_writeback(struct bdi_writebac
 		if (work->done)
 			complete(work->done);
 		else
-			kfree(work);
+			mempool_free(work, wb_work_mempool);
 	}
 
 	/*
--- linux.orig/include/trace/events/writeback.h	2012-02-10 21:54:14.000000000 +0800
+++ linux/include/trace/events/writeback.h	2012-02-11 16:49:18.000000000 +0800
@@ -23,7 +23,7 @@
 
 #define WB_WORK_REASON							\
 		{WB_REASON_BACKGROUND,		"background"},		\
-		{WB_REASON_TRY_TO_FREE_PAGES,	"try_to_free_pages"},	\
+		{WB_REASON_PAGEOUT,		"pageout"},		\
 		{WB_REASON_SYNC,		"sync"},		\
 		{WB_REASON_PERIODIC,		"periodic"},		\
 		{WB_REASON_LAPTOP_TIMER,	"laptop_timer"},	\
@@ -45,6 +45,8 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		__field(int, range_cyclic)
 		__field(int, for_background)
 		__field(int, reason)
+		__field(unsigned long, ino)
+		__field(unsigned long, offset)
 	),
 	TP_fast_assign(
 		strncpy(__entry->name, dev_name(bdi->dev), 32);
@@ -55,9 +57,11 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		__entry->range_cyclic = work->range_cyclic;
 		__entry->for_background	= work->for_background;
 		__entry->reason = work->reason;
+		__entry->ino = work->inode ? work->inode->i_ino : 0;
+		__entry->offset = work->offset;
 	),
 	TP_printk("bdi %s: sb_dev %d:%d nr_pages=%ld sync_mode=%d "
-		  "kupdate=%d range_cyclic=%d background=%d reason=%s",
+		  "kupdate=%d range_cyclic=%d background=%d reason=%s ino=%lu offset=%lu",
 		  __entry->name,
 		  MAJOR(__entry->sb_dev), MINOR(__entry->sb_dev),
 		  __entry->nr_pages,
@@ -65,7 +69,9 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		  __entry->for_kupdate,
 		  __entry->range_cyclic,
 		  __entry->for_background,
-		  __print_symbolic(__entry->reason, WB_WORK_REASON)
+		  __print_symbolic(__entry->reason, WB_WORK_REASON),
+		  __entry->ino,
+		  __entry->offset
 	)
 );
 #define DEFINE_WRITEBACK_WORK_EVENT(name) \
--- linux.orig/include/linux/writeback.h	2012-02-11 09:53:53.000000000 +0800
+++ linux/include/linux/writeback.h	2012-02-11 16:49:36.000000000 +0800
@@ -40,7 +40,7 @@ enum writeback_sync_modes {
  */
 enum wb_reason {
 	WB_REASON_BACKGROUND,
-	WB_REASON_TRY_TO_FREE_PAGES,
+	WB_REASON_PAGEOUT,
 	WB_REASON_SYNC,
 	WB_REASON_PERIODIC,
 	WB_REASON_LAPTOP_TIMER,
@@ -94,6 +94,8 @@ long writeback_inodes_wb(struct bdi_writ
 				enum wb_reason reason);
 long wb_do_writeback(struct bdi_writeback *wb, int force_wait);
 void wakeup_flusher_threads(long nr_pages, enum wb_reason reason);
+long flush_inode_page(struct address_space *mapping, struct page *page,
+		      bool wait);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-11 12:44       ` reclaim the LRU lists full of dirty/writeback pages Wu Fengguang
@ 2012-02-11 14:55         ` Rik van Riel
  2012-02-12  3:10           ` Wu Fengguang
  2012-02-14 10:19         ` Mel Gorman
  1 sibling, 1 reply; 33+ messages in thread
From: Rik van Riel @ 2012-02-11 14:55 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Mel Gorman, Ying Han, hannes, KAMEZAWA Hiroyuki,
	Minchan Kim

On 02/11/2012 07:44 AM, Wu Fengguang wrote:

> Note that it's data for XFS. ext4 seems to have some problem with the
> workload: the majority pages are found to be writeback pages, and the
> flusher ends up blocking on the unconditional wait_on_page_writeback()
> in write_cache_pages_da() from time to time...
>
> XXX: commit NFS unstable pages via write_inode()
> XXX: the added congestion_wait() may be undesirable in some situations

Even with these caveats, this seems to be the right way forward.

> CC: Jan Kara<jack@suse.cz>
> CC: Mel Gorman<mgorman@suse.de>
> CC: Rik van Riel<riel@redhat.com>
> CC: Greg Thelen<gthelen@google.com>
> CC: Minchan Kim<minchan.kim@gmail.com>
> Signed-off-by: Wu Fengguang<fengguang.wu@intel.com>

Acked-by: Rik van Riel <riel@redhat.com>

-- 
All rights reversed


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-11 14:55         ` Rik van Riel
@ 2012-02-12  3:10           ` Wu Fengguang
  2012-02-12  6:45             ` Wu Fengguang
  2012-02-13 15:43             ` Jan Kara
  0 siblings, 2 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-12  3:10 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Mel Gorman, Ying Han, hannes, KAMEZAWA Hiroyuki,
	Minchan Kim

On Sat, Feb 11, 2012 at 09:55:38AM -0500, Rik van Riel wrote:
> On 02/11/2012 07:44 AM, Wu Fengguang wrote:
> 
> >Note that it's data for XFS. ext4 seems to have some problem with the
> >workload: the majority pages are found to be writeback pages, and the
> >flusher ends up blocking on the unconditional wait_on_page_writeback()
> >in write_cache_pages_da() from time to time...

Sorry, I overlooked the WB_SYNC_NONE test before the wait_on_page_writeback()
call! And the issue can no longer be reproduced anyway. ext4 performs pretty
well now; here is the result for a single memcg dd:

        dd if=/dev/zero of=/fs/f$i bs=4k count=1M

        4294967296 bytes (4.3 GB) copied, 44.5759 s, 96.4 MB/s

iostat -kx 3

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00   11.03   28.54    0.00   60.19
           0.25    0.00   13.71   16.65    0.00   69.39
           0.17    0.00    8.41   24.81    0.00   66.61
           0.25    0.00   15.00   19.63    0.00   65.12

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    17.00    0.00  178.33     0.00 90694.67  1017.14   111.34  520.88   5.45  97.23
sda               0.00     0.00    0.00  193.67     0.00 98816.00  1020.48    86.22  496.81   4.81  93.07
sda               0.00     3.33    0.00  182.33     0.00 92345.33  1012.93   101.14  623.98   5.49 100.03
sda               0.00     3.00    0.00  187.00     0.00 95586.67  1022.32    89.36  441.70   4.96  92.70

> >XXX: commit NFS unstable pages via write_inode()
> >XXX: the added congestion_wait() may be undesirable in some situations
> 
> Even with these caveats, this seems to be the right way forward.

> Acked-by: Rik van Riel <riel@redhat.com>

Thank you!
 
Here is the updated patch.
- ~10ms write-around chunk size, adaptive to the bdi bandwidth (rough figure below)
- cleanup flush_inode_page()
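
(At a typical 100MB/s of write bandwidth, ~10ms of write-around corresponds
to roughly 1MB per pageout work.)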

Thanks,
Fengguang
---
Subject: writeback: introduce the pageout work
Date: Thu Jul 29 14:41:19 CST 2010

This relays file pageout IOs to the flusher threads.

The ultimate target is to gracefully handle the LRU lists full of
dirty/writeback pages.

1) I/O efficiency

The flusher will piggyback the nearby ~10ms worth of dirty pages for I/O.

This takes advantage of the temporal/spatial locality in most workloads: the
nearby pages of one file are typically populated into the LRU at the same
time, hence will likely be close to each other in the LRU list. Writing
them in one shot helps clean more pages effectively for page reclaim.
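
To see where the ~10ms figure comes from: bdi->avg_write_bandwidth is tracked
in pages per second, and flush_inode_page() below sizes the chunk as
rounddown_pow_of_two(bandwidth + MIN_WRITEBACK_PAGES) >> 6. A minimal
user-space sketch of that arithmetic (the 1024-page MIN_WRITEBACK_PAGES is an
assumed value here; the ~96 MB/s comes from the dd numbers above):

#include <stdio.h>

static unsigned long rounddown_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

int main(void)
{
	unsigned long min_writeback_pages = 1024;	   /* assumed value */
	unsigned long avg_write_bandwidth = 96 * 1024 / 4; /* ~96 MB/s in 4k pages/s */
	unsigned long pages;

	pages = rounddown_pow_of_two(avg_write_bandwidth + min_writeback_pages) >> 6;
	printf("write around chunk: %lu pages = %lu KB\n", pages, pages * 4);
	/* prints 256 pages = 1024 KB, i.e. ~10ms of I/O at 96 MB/s */
	return 0;
}

That 256-page chunk is exactly the nr_pages seen at queue time in the traces
posted later in this thread.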

2) OOM avoidance and scan rate control

Typically we do the LRU scan without rate control and can quickly get enough
clean pages as long as the LRU lists are not full of dirty pages.

Failing that, we can still get a number of freshly cleaned pages (moved to
the LRU tail by end_page_writeback()) when the queued pageout I/O completes
within tens of milliseconds.

However, if the LRU list is small and full of dirty pages, it can be
fully scanned within a short time and the system may go OOM before the
flusher manages to clean enough pages.

A simple yet reliable scheme is employed to avoid OOM and keep scan rate
in sync with the I/O rate:

	if (PageReclaim(page))
		congestion_wait(HZ/10);

PG_reclaim plays the key role. When a dirty page is encountered, we
queue I/O for it, set PG_reclaim and put it back to the LRU head.
So if a PG_reclaim page is encountered again, it means the dirty page
has not yet been cleaned by the flusher after a full zone scan. It
indicates we are scanning faster than the I/O can complete and should
take a nap.

The runtime behavior on a fully dirtied small LRU list would be:
It will start with a quick scan of the list, queuing all pages for I/O.
Then the scan will be slowed down by the PG_reclaim pages *adaptively*
to match the I/O bandwidth.
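
As a back-of-the-envelope check against the test case below: a 100MB memcg
holds ~25600 4k pages, and at ~90 MB/s the flusher retires roughly 23000
pages per second. Each PG_reclaim page the scan runs into again costs up to
100ms in congestion_wait() (less once the device becomes uncongested), so the
scan rate is pulled down towards the I/O completion rate instead of burning
through the small list and declaring OOM.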

3) writeback work coordination

To avoid memory allocations at page reclaim, a mempool for struct
wb_writeback_work is created.

wakeup_flusher_threads() is removed because it can easily delay the
more targeted pageout works and even exhaust the mempool reservations.
It's also found to be I/O inefficient, as it frequently submits writeback
works with small ->nr_pages.

Background/periodic works will quit automatically, so that the pages
under reclaim are cleaned ASAP. However, for now the sync work can still
block us for a long time.

Jan Kara: limit the search scope. Note that the limited search depth and
work pool size are not a big problem: 1000 IOs in flight are typically more
than enough to saturate the disk. And the overhead of searching the work
list didn't even show up in the perf report.
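
To make the work coalescing easier to see in isolation, here is a small
user-space rendering of the extend_writeback_range() rule from the patch
(the struct, the offsets and the sequential walk are made up for the demo;
the coalescing logic itself is copied from the function below):

#include <stdbool.h>
#include <stdio.h>

struct work { unsigned long offset, nr_pages; };

/* same rule as extend_writeback_range() in the patch */
static bool extend_range(struct work *w, unsigned long offset,
			 unsigned long write_around_pages)
{
	unsigned long end = w->offset + w->nr_pages;

	if (offset >= w->offset && offset < end)
		return true;				/* already covered */

	if (w->nr_pages >= 8 * write_around_pages)
		return false;				/* growth cap reached */

	if (w->offset - offset < write_around_pages) {	/* just before the work */
		w->nr_pages += write_around_pages;
		w->offset -= write_around_pages;
		return true;
	}
	if (offset - end < write_around_pages) {	/* just after the work */
		w->nr_pages += write_around_pages;
		return true;
	}
	return false;
}

int main(void)
{
	struct work w = { .offset = 24320, .nr_pages = 256 };
	unsigned long off;

	/* a sequential dd keeps reclaiming pages just past the queued work */
	for (off = 24576; off <= 26368; off += 256) {
		bool hit = extend_range(&w, off, 256);

		printf("offset %lu: %s, work is now [%lu, +%lu)\n", off,
		       hit ? "extended" : "needs a new work",
		       w.offset, w.nr_pages);
	}
	return 0;
}

With a 256-page write-around size, one queued work grows up to the 8x cap of
2048 pages before a new work has to be allocated, which is the same
queued-at-nr_pages=256, executed-at-nr_pages=2048 pattern that shows up in the
traces later in this thread.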

4) test case

Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):

	mkdir /cgroup/x
	echo 100M > /cgroup/x/memory.limit_in_bytes
	echo $$ > /cgroup/x/tasks

	for i in `seq 2`
	do
		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
	done

Before the patch, the dd tasks are quickly OOM killed.
After the patch, they run well with reasonably good performance and overhead:

1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s

iostat -kx 1

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00  178.00     0.00 89568.00  1006.38    74.35  417.71   4.80  85.40
sda               0.00     2.00    0.00  191.00     0.00 94428.00   988.77    53.34  219.03   4.34  82.90
sda               0.00    20.00    0.00  196.00     0.00 97712.00   997.06    71.11  337.45   4.77  93.50
sda               0.00     5.00    0.00  175.00     0.00 84648.00   967.41    54.03  316.44   5.06  88.60
sda               0.00     0.00    0.00  186.00     0.00 92432.00   993.89    56.22  267.54   5.38 100.00
sda               0.00     1.00    0.00  183.00     0.00 90156.00   985.31    37.99  325.55   4.33  79.20
sda               0.00     0.00    0.00  175.00     0.00 88692.00  1013.62    48.70  218.43   4.69  82.10
sda               0.00     0.00    0.00  196.00     0.00 97528.00   995.18    43.38  236.87   5.10 100.00
sda               0.00     0.00    0.00  179.00     0.00 88648.00   990.48    45.83  285.43   5.59 100.00
sda               0.00     0.00    0.00  178.00     0.00 88500.00   994.38    28.28  158.89   4.99  88.80
sda               0.00     0.00    0.00  194.00     0.00 95852.00   988.16    32.58  167.39   5.15 100.00
sda               0.00     2.00    0.00  215.00     0.00 105996.00   986.01    41.72  201.43   4.65 100.00
sda               0.00     4.00    0.00  173.00     0.00 84332.00   974.94    50.48  260.23   5.76  99.60
sda               0.00     0.00    0.00  182.00     0.00 90312.00   992.44    36.83  212.07   5.49 100.00
sda               0.00     8.00    0.00  195.00     0.00 95940.50   984.01    50.18  221.06   5.13 100.00
sda               0.00     1.00    0.00  220.00     0.00 108852.00   989.56    40.99  202.68   4.55 100.00
sda               0.00     2.00    0.00  161.00     0.00 80384.00   998.56    37.19  268.49   6.21 100.00
sda               0.00     4.00    0.00  182.00     0.00 90830.00   998.13    50.58  239.77   5.49 100.00
sda               0.00     0.00    0.00  197.00     0.00 94877.00   963.22    36.68  196.79   5.08 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00   15.08   33.92    0.00   50.75
           0.25    0.00   14.54   35.09    0.00   50.13
           0.50    0.00   13.57   32.41    0.00   53.52
           0.50    0.00   11.28   36.84    0.00   51.38
           0.50    0.00   15.75   32.00    0.00   51.75
           0.50    0.00   10.50   34.00    0.00   55.00
           0.50    0.00   17.63   27.46    0.00   54.41
           0.50    0.00   15.08   30.90    0.00   53.52
           0.50    0.00   11.28   32.83    0.00   55.39
           0.75    0.00   16.79   26.82    0.00   55.64
           0.50    0.00   16.08   29.15    0.00   54.27
           0.50    0.00   13.50   30.50    0.00   55.50
           0.50    0.00   14.32   35.18    0.00   50.00
           0.50    0.00   12.06   33.92    0.00   53.52
           0.50    0.00   17.29   30.58    0.00   51.63
           0.50    0.00   15.08   29.65    0.00   54.77
           0.50    0.00   12.53   29.32    0.00   57.64
           0.50    0.00   15.29   31.83    0.00   52.38

The global dd numbers for comparison:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   143.09  684.48   5.29 100.00
sda               0.00     0.00    0.00  208.00     0.00 105480.00  1014.23   143.06  733.29   4.81 100.00
sda               0.00     0.00    0.00  161.00     0.00 81924.00  1017.69   141.71  757.79   6.21 100.00
sda               0.00     0.00    0.00  217.00     0.00 109580.00  1009.95   143.09  749.55   4.61 100.10
sda               0.00     0.00    0.00  187.00     0.00 94728.00  1013.13   144.31  773.67   5.35 100.00
sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   144.14  742.00   5.29 100.00
sda               0.00     0.00    0.00  177.00     0.00 90032.00  1017.31   143.32  656.59   5.65 100.00
sda               0.00     0.00    0.00  215.00     0.00 108640.00  1010.60   142.90  817.54   4.65 100.00
sda               0.00     2.00    0.00  166.00     0.00 83858.00  1010.34   143.64  808.61   6.02 100.00
sda               0.00     0.00    0.00  186.00     0.00 92813.00   997.99   141.18  736.95   5.38 100.00
sda               0.00     0.00    0.00  206.00     0.00 104456.00  1014.14   146.27  729.33   4.85 100.00
sda               0.00     0.00    0.00  213.00     0.00 107024.00  1004.92   143.25  705.70   4.69 100.00
sda               0.00     0.00    0.00  188.00     0.00 95748.00  1018.60   141.82  764.78   5.32 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.51    0.00   11.22   52.30    0.00   35.97
           0.25    0.00   10.15   52.54    0.00   37.06
           0.25    0.00    5.01   56.64    0.00   38.10
           0.51    0.00   15.15   43.94    0.00   40.40
           0.25    0.00   12.12   48.23    0.00   39.39
           0.51    0.00   11.20   53.94    0.00   34.35
           0.26    0.00    9.72   51.41    0.00   38.62
           0.76    0.00    9.62   50.63    0.00   38.99
           0.51    0.00   10.46   53.32    0.00   35.71
           0.51    0.00    9.41   51.91    0.00   38.17
           0.25    0.00   10.69   49.62    0.00   39.44
           0.51    0.00   12.21   52.67    0.00   34.61
           0.51    0.00   11.45   53.18    0.00   34.86

XXX: commit NFS unstable pages via write_inode()
XXX: the added congestion_wait() may be undesirable in some situations

CC: Jan Kara <jack@suse.cz>
CC: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
CC: Greg Thelen <gthelen@google.com>
CC: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c                |  167 ++++++++++++++++++++++++++++-
 include/linux/writeback.h        |    4 
 include/trace/events/writeback.h |   12 +-
 mm/vmscan.c                      |   17 +-
 4 files changed, 186 insertions(+), 14 deletions(-)

--- linux.orig/mm/vmscan.c	2012-02-11 19:59:40.000000000 +0800
+++ linux/mm/vmscan.c	2012-02-11 20:07:48.000000000 +0800
@@ -813,6 +813,8 @@ static unsigned long shrink_page_list(st
 
 		if (PageWriteback(page)) {
 			nr_writeback++;
+			if (PageReclaim(page))
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
 			/*
 			 * Synchronous reclaim cannot queue pages for
 			 * writeback due to the possibility of stack overflow
@@ -874,12 +876,15 @@ static unsigned long shrink_page_list(st
 			nr_dirty++;
 
 			/*
-			 * Only kswapd can writeback filesystem pages to
-			 * avoid risk of stack overflow but do not writeback
-			 * unless under significant pressure.
+			 * run into the visited page again: we are scanning
+			 * faster than the flusher can writeout dirty pages
 			 */
-			if (page_is_file_cache(page) &&
-					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
+			if (page_is_file_cache(page) && PageReclaim(page)) {
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
+				goto keep_locked;
+			}
+			if (page_is_file_cache(page) && mapping &&
+			    flush_inode_page(mapping, page, true) >= 0) {
 				/*
 				 * Immediately reclaim when written back.
 				 * Similar in principal to deactivate_page()
@@ -2382,8 +2387,6 @@ static unsigned long do_try_to_free_page
 		 */
 		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
 		if (total_scanned > writeback_threshold) {
-			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
-						WB_REASON_TRY_TO_FREE_PAGES);
 			sc->may_writepage = 1;
 		}
 
--- linux.orig/fs/fs-writeback.c	2012-02-11 19:59:40.000000000 +0800
+++ linux/fs/fs-writeback.c	2012-02-12 10:54:05.000000000 +0800
@@ -41,6 +41,8 @@ struct wb_writeback_work {
 	long nr_pages;
 	struct super_block *sb;
 	unsigned long *older_than_this;
+	struct inode *inode;
+	pgoff_t offset;
 	enum writeback_sync_modes sync_mode;
 	unsigned int tagged_writepages:1;
 	unsigned int for_kupdate:1;
@@ -65,6 +67,27 @@ struct wb_writeback_work {
  */
 int nr_pdflush_threads;
 
+static mempool_t *wb_work_mempool;
+
+static void *wb_work_alloc(gfp_t gfp_mask, void *pool_data)
+{
+	/*
+	 * bdi_flush_inode_range() may be called on page reclaim
+	 */
+	if (current->flags & PF_MEMALLOC)
+		return NULL;
+
+	return kmalloc(sizeof(struct wb_writeback_work), gfp_mask);
+}
+
+static __init int wb_work_init(void)
+{
+	wb_work_mempool = mempool_create(1024,
+					 wb_work_alloc, mempool_kfree, NULL);
+	return wb_work_mempool ? 0 : -ENOMEM;
+}
+fs_initcall(wb_work_init);
+
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @bdi: the device's backing_dev_info structure.
@@ -129,7 +152,7 @@ __bdi_start_writeback(struct backing_dev
 	 * This is WB_SYNC_NONE writeback, so if allocation fails just
 	 * wakeup the thread for old dirty data writeback
 	 */
-	work = kzalloc(sizeof(*work), GFP_ATOMIC);
+	work = mempool_alloc(wb_work_mempool, GFP_NOWAIT);
 	if (!work) {
 		if (bdi->wb.task) {
 			trace_writeback_nowork(bdi);
@@ -138,6 +161,7 @@ __bdi_start_writeback(struct backing_dev
 		return;
 	}
 
+	memset(work, 0, sizeof(*work));
 	work->sync_mode	= WB_SYNC_NONE;
 	work->nr_pages	= nr_pages;
 	work->range_cyclic = range_cyclic;
@@ -186,6 +210,123 @@ void bdi_start_background_writeback(stru
 	spin_unlock_bh(&bdi->wb_lock);
 }
 
+static bool extend_writeback_range(struct wb_writeback_work *work,
+				   pgoff_t offset,
+				   unsigned long write_around_pages)
+{
+	pgoff_t end = work->offset + work->nr_pages;
+
+	if (offset >= work->offset && offset < end)
+		return true;
+
+	/*
+	 * for sequential workloads with good locality, include up to 8 times
+	 * more data in one chunk
+	 */
+	if (work->nr_pages >= 8 * write_around_pages)
+		return false;
+
+	/* the unsigned comparison helps eliminate one compare */
+	if (work->offset - offset < write_around_pages) {
+		work->nr_pages += write_around_pages;
+		work->offset -= write_around_pages;
+		return true;
+	}
+
+	if (offset - end < write_around_pages) {
+		work->nr_pages += write_around_pages;
+		return true;
+	}
+
+	return false;
+}
+
+/*
+ * schedule writeback on a range of inode pages.
+ */
+static struct wb_writeback_work *
+bdi_flush_inode_range(struct backing_dev_info *bdi,
+		      struct inode *inode,
+		      pgoff_t offset,
+		      pgoff_t len,
+		      bool wait)
+{
+	struct wb_writeback_work *work;
+
+	if (!igrab(inode))
+		return ERR_PTR(-ENOENT);
+
+	work = mempool_alloc(wb_work_mempool, wait ? GFP_NOIO : GFP_NOWAIT);
+	if (!work)
+		return ERR_PTR(-ENOMEM);
+
+	memset(work, 0, sizeof(*work));
+	work->sync_mode		= WB_SYNC_NONE;
+	work->inode		= inode;
+	work->offset		= offset;
+	work->nr_pages		= len;
+	work->reason		= WB_REASON_PAGEOUT;
+
+	bdi_queue_work(bdi, work);
+
+	return work;
+}
+
+/*
+ * Called by page reclaim code to flush the dirty page ASAP. Do write-around to
+ * improve IO throughput. The nearby pages will have good chance to reside in
+ * the same LRU list that vmscan is working on, and even close to each other
+ * inside the LRU list in the common case of sequential read/write.
+ *
+ * ret > 0: success, found/reused a previous writeback work
+ * ret = 0: success, allocated/queued a new writeback work
+ * ret < 0: failed
+ */
+long flush_inode_page(struct address_space *mapping,
+		      struct page *page,
+		      bool wait)
+{
+	struct backing_dev_info *bdi = mapping->backing_dev_info;
+	struct inode *inode = mapping->host;
+	struct wb_writeback_work *work;
+	unsigned long write_around_pages;
+	pgoff_t offset = page->index;
+	int i;
+	long ret = 0;
+
+	if (unlikely(!inode))
+		return -ENOENT;
+
+	/*
+	 * piggy back 8-15ms worth of data
+	 */
+	write_around_pages = bdi->avg_write_bandwidth + MIN_WRITEBACK_PAGES;
+	write_around_pages = rounddown_pow_of_two(write_around_pages) >> 6;
+
+	i = 1;
+	spin_lock_bh(&bdi->wb_lock);
+	list_for_each_entry_reverse(work, &bdi->work_list, list) {
+		if (work->inode != inode)
+			continue;
+		if (extend_writeback_range(work, offset, write_around_pages)) {
+			ret = i;
+			break;
+		}
+		if (i++ > 100)	/* limit search depth */
+			break;
+	}
+	spin_unlock_bh(&bdi->wb_lock);
+
+	if (!ret) {
+		offset = round_down(offset, write_around_pages);
+		work = bdi_flush_inode_range(bdi, inode,
+					     offset, write_around_pages, wait);
+		if (IS_ERR(work))
+			ret = PTR_ERR(work);
+	}
+	return ret;
+}
+
 /*
  * Remove the inode from the writeback list it is on.
  */
@@ -833,6 +974,23 @@ static unsigned long get_nr_dirty_pages(
 		get_nr_dirty_inodes();
 }
 
+static long wb_flush_inode(struct bdi_writeback *wb,
+			   struct wb_writeback_work *work)
+{
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_NONE,
+		.nr_to_write = LONG_MAX,
+		.range_start = work->offset << PAGE_CACHE_SHIFT,
+		.range_end = (work->offset + work->nr_pages - 1)
+						<< PAGE_CACHE_SHIFT,
+	};
+
+	do_writepages(work->inode->i_mapping, &wbc);
+	iput(work->inode);
+
+	return LONG_MAX - wbc.nr_to_write;
+}
+
 static long wb_check_background_flush(struct bdi_writeback *wb)
 {
 	if (over_bground_thresh(wb->bdi)) {
@@ -905,7 +1063,10 @@ long wb_do_writeback(struct bdi_writebac
 
 		trace_writeback_exec(bdi, work);
 
-		wrote += wb_writeback(wb, work);
+		if (work->inode)
+			wrote += wb_flush_inode(wb, work);
+		else
+			wrote += wb_writeback(wb, work);
 
 		/*
 		 * Notify the caller of completion if this is a synchronous
@@ -914,7 +1075,7 @@ long wb_do_writeback(struct bdi_writebac
 		if (work->done)
 			complete(work->done);
 		else
-			kfree(work);
+			mempool_free(work, wb_work_mempool);
 	}
 
 	/*
--- linux.orig/include/trace/events/writeback.h	2012-02-11 19:59:40.000000000 +0800
+++ linux/include/trace/events/writeback.h	2012-02-11 20:07:48.000000000 +0800
@@ -23,7 +23,7 @@
 
 #define WB_WORK_REASON							\
 		{WB_REASON_BACKGROUND,		"background"},		\
-		{WB_REASON_TRY_TO_FREE_PAGES,	"try_to_free_pages"},	\
+		{WB_REASON_PAGEOUT,		"pageout"},		\
 		{WB_REASON_SYNC,		"sync"},		\
 		{WB_REASON_PERIODIC,		"periodic"},		\
 		{WB_REASON_LAPTOP_TIMER,	"laptop_timer"},	\
@@ -45,6 +45,8 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		__field(int, range_cyclic)
 		__field(int, for_background)
 		__field(int, reason)
+		__field(unsigned long, ino)
+		__field(unsigned long, offset)
 	),
 	TP_fast_assign(
 		strncpy(__entry->name, dev_name(bdi->dev), 32);
@@ -55,9 +57,11 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		__entry->range_cyclic = work->range_cyclic;
 		__entry->for_background	= work->for_background;
 		__entry->reason = work->reason;
+		__entry->ino = work->inode ? work->inode->i_ino : 0;
+		__entry->offset = work->offset;
 	),
 	TP_printk("bdi %s: sb_dev %d:%d nr_pages=%ld sync_mode=%d "
-		  "kupdate=%d range_cyclic=%d background=%d reason=%s",
+		  "kupdate=%d range_cyclic=%d background=%d reason=%s ino=%lu offset=%lu",
 		  __entry->name,
 		  MAJOR(__entry->sb_dev), MINOR(__entry->sb_dev),
 		  __entry->nr_pages,
@@ -65,7 +69,9 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		  __entry->for_kupdate,
 		  __entry->range_cyclic,
 		  __entry->for_background,
-		  __print_symbolic(__entry->reason, WB_WORK_REASON)
+		  __print_symbolic(__entry->reason, WB_WORK_REASON),
+		  __entry->ino,
+		  __entry->offset
 	)
 );
 #define DEFINE_WRITEBACK_WORK_EVENT(name) \
--- linux.orig/include/linux/writeback.h	2012-02-11 19:59:40.000000000 +0800
+++ linux/include/linux/writeback.h	2012-02-11 20:07:48.000000000 +0800
@@ -40,7 +40,7 @@ enum writeback_sync_modes {
  */
 enum wb_reason {
 	WB_REASON_BACKGROUND,
-	WB_REASON_TRY_TO_FREE_PAGES,
+	WB_REASON_PAGEOUT,
 	WB_REASON_SYNC,
 	WB_REASON_PERIODIC,
 	WB_REASON_LAPTOP_TIMER,
@@ -94,6 +94,8 @@ long writeback_inodes_wb(struct bdi_writ
 				enum wb_reason reason);
 long wb_do_writeback(struct bdi_writeback *wb, int force_wait);
 void wakeup_flusher_threads(long nr_pages, enum wb_reason reason);
+long flush_inode_page(struct address_space *mapping, struct page *page,
+		      bool wait);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-12  3:10           ` Wu Fengguang
@ 2012-02-12  6:45             ` Wu Fengguang
  2012-02-13 15:43             ` Jan Kara
  1 sibling, 0 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-12  6:45 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Mel Gorman, Ying Han, hannes, KAMEZAWA Hiroyuki,
	Minchan Kim

[-- Attachment #1: Type: text/plain, Size: 409223 bytes --]

> ext4 performs pretty well now; here is the result for a single
> memcg dd:
> 
>         dd if=/dev/zero of=/fs/f$i bs=4k count=1M

Here is one full writeback trace collected with

echo 1 > /debug/tracing/events/writeback/enable
echo 0 > /debug/tracing/events/writeback/global_dirty_state/enable
echo 0 > /debug/tracing/events/writeback/wbc_writepage/enable
echo 0 > /debug/tracing/events/writeback/writeback_wait_iff_congested/enable

Looking at the reason=pageout lines, the chunk size at queue time is
mostly nr_pages=256, and since it's a sequential dd with very good
locality, lots of the pageout works have been extended to
nr_pages=2048 at the time of execution.
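
(For reference, those numbers match the chunk sizing in the patch: ~96 MB/s is
roughly 24600 4k pages per second, and rounddown_pow_of_two() of that plus
MIN_WRITEBACK_PAGES, shifted right by 6, gives the 256-page chunks seen at
queue time; the 8x growth cap in extend_writeback_range() is why the extended
works top out at nr_pages=2048.)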

The progress is a bit bumpy in that the writeback_queue/writeback_exec
lines and the writeback_congestion_wait lines tend to come together in
batches. According to the attached graphs (generated on a private
task_io trace event), the dd write progress is reasonably smooth.
There are several sudden jumps of both x and y values, which may be
caused by lost trace samples.

# tracer: nop
#
# entries-in-buffer/entries-written: 127889/168807   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
              dd-4267  [002] ....  8250.326788: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8250.335124: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8250.343750: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8250.356916: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8250.361203: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8251.323386: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8251.363306: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8251.405320: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8251.427430: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8251.458557: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8251.476925: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8251.592810: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8251.647229: writeback_congestion_wait: usec_timeout=100000 usec_delayed=54000
              dd-4267  [002] ....  8251.682142: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8251.728420: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8251.754083: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8251.819186: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=24320
       flush-8:0-4272  [002] ....  8251.823270: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=24320
       flush-8:0-4272  [002] ....  8251.826068: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=26368
       flush-8:0-4272  [002] ....  8251.832939: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=28416
       flush-8:0-4272  [002] ....  8251.839785: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=30464
       flush-8:0-4272  [002] ....  8251.846554: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=32512
       flush-8:0-4272  [002] ....  8251.852810: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=34560
       flush-8:0-4272  [002] ....  8251.858865: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=9223372036854775807 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=background ino=0 offset=0
       flush-8:0-4272  [002] ....  8251.858866: writeback_queue_io: bdi 8:0: older=4302923541 age=0 enqueue=0 reason=background
       flush-8:0-4272  [002] ....  8251.860726: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=37632 to_write=1024 wrote=1024
       flush-8:0-4272  [002] ....  8251.860729: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=9223372036854774783 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=background ino=0 offset=0
       flush-8:0-4272  [002] ....  8251.860730: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=9223372036854774783 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=background ino=0 offset=0
       flush-8:0-4272  [002] ....  8251.860731: writeback_queue_io: bdi 8:0: older=4302923543 age=0 enqueue=0 reason=background
       flush-8:0-4272  [002] ....  8252.607338: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=51712 to_write=1024 wrote=1024
       flush-8:0-4272  [002] ....  8252.607341: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=9223372036854774783 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=background ino=0 offset=0
       flush-8:0-4272  [002] ....  8252.607342: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=9223372036854774783 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=background ino=0 offset=0
       flush-8:0-4272  [002] ....  8252.607343: writeback_queue_io: bdi 8:0: older=4302924290 age=0 enqueue=0 reason=background
       flush-8:0-4272  [002] ....  8253.026787: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=63232
       flush-8:0-4272  [002] ....  8253.026915: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=63232
       flush-8:0-4272  [002] ....  8253.027360: writeback_pages_written: 316
       flush-8:0-4272  [002] ....  8253.027437: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=63744
       flush-8:0-4272  [002] ....  8253.029744: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=63744
       flush-8:0-4272  [002] ....  8253.030492: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=64512
       flush-8:0-4272  [002] ....  8253.033488: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=65280
       flush-8:0-4272  [002] ....  8253.036827: writeback_pages_written: 3584
       flush-8:0-4272  [002] ....  8253.040560: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=67328
       flush-8:0-4272  [002] ....  8253.042129: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=67328
       flush-8:0-4272  [002] ....  8253.045098: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=69376
       flush-8:0-4272  [002] ....  8253.050144: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=71424
##### CPU 3 buffer started ####
       flush-8:0-4272  [002] ....  8253.485155: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=75008
       flush-8:0-4272  [002] ....  8253.488248: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=75008
       flush-8:0-4272  [002] ....  8253.488254: writeback_pages_written: 256
              dd-4267  [003] ....  8253.587708: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8253.616408: writeback_congestion_wait: usec_timeout=100000 usec_delayed=28000
              dd-4267  [002] ....  8253.627826: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [002] ....  8253.678663: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=75264
       flush-8:0-4272  [002] ....  8253.682232: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=75264
       flush-8:0-4272  [002] ....  8253.682240: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=9223372036854775807 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=background ino=0 offset=0
       flush-8:0-4272  [002] ....  8253.682240: writeback_queue_io: bdi 8:0: older=4302925365 age=0 enqueue=0 reason=background
       flush-8:0-4272  [002] ....  8253.684064: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=76544 to_write=1024 wrote=1024
       flush-8:0-4272  [002] ....  8253.684067: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=9223372036854774783 sync_mode=0 kupdate=0 range_cyclic=1 background=1 reason=background ino=0 offset=0
       flush-8:0-4272  [002] ....  8253.684069: writeback_pages_written: 1280
              dd-4267  [002] ....  8253.730357: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8253.814751: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8253.850639: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8253.870770: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8253.897749: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8254.014484: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8254.113342: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [002] ....  8254.123898: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=76544
       flush-8:0-4272  [002] ....  8254.128327: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=76544
       flush-8:0-4272  [002] ....  8254.131253: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=78592
       flush-8:0-4272  [002] ....  8254.138062: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=80640
       flush-8:0-4272  [002] ....  8254.144703: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=82688
       flush-8:0-4272  [002] ....  8254.230122: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=84736
       flush-8:0-4272  [002] ....  8254.235649: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=86784
       flush-8:0-4272  [002] ....  8254.241491: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=88832
       flush-8:0-4272  [002] ....  8254.246782: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=90880
       flush-8:0-4272  [002] ....  8254.249905: writeback_pages_written: 15360
       flush-8:0-4272  [002] ....  8254.282817: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=91904
       flush-8:0-4272  [002] ....  8254.285648: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=91904
       flush-8:0-4272  [002] ....  8254.289279: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=93952
       flush-8:0-4272  [002] ....  8254.292899: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=94976
       flush-8:0-4272  [002] ....  8254.295917: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=97024
       flush-8:0-4272  [002] ....  8254.299985: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=99072
       flush-8:0-4272  [002] ....  8254.300865: writeback_pages_written: 7424
              dd-4267  [003] ....  8254.393293: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8254.536096: writeback_congestion_wait: usec_timeout=100000 usec_delayed=82000
              dd-4267  [002] ....  8254.625314: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=99328
              dd-4267  [002] ....  8254.625340: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=99328
              dd-4267  [002] ....  8254.626413: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8254.630694: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
      flush-0:16-4223  [003] ....  8255.307759: writeback_start: bdi 0:16: sb_dev 0:0 nr_pages=45399 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [003] ....  8255.307762: writeback_queue_io: bdi 0:16: older=4302896992 age=30000 enqueue=0 reason=periodic
      flush-0:16-4223  [003] ....  8255.307762: writeback_written: bdi 0:16: sb_dev 0:0 nr_pages=45399 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [003] ....  8255.307764: writeback_pages_written: 0
              dd-4267  [002] ....  8255.318648: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=99584
              dd-4267  [002] ....  8255.318664: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=99584
              dd-4267  [002] ....  8255.320138: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=101632
              dd-4267  [002] ....  8255.321638: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=103680
              dd-4267  [002] ....  8255.323080: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=105728
              dd-4267  [002] ....  8255.324538: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=107776
              dd-4267  [002] ....  8255.326007: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=109824
              dd-4267  [002] ....  8255.327498: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=111872
              dd-4267  [002] ....  8255.328994: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=113920
              dd-4267  [002] ....  8255.330478: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=115968
              dd-4267  [002] ....  8255.332007: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=118016
              dd-4267  [002] ....  8255.333557: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=120064
              dd-4267  [002] ....  8255.335096: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=122112
              dd-4267  [002] ....  8255.341094: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8255.345525: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8255.350156: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [003] ....  8255.767124: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_SYNC|I_DIRTY_DATASYNC|I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=127007 to_write=1024 wrote=2079
       flush-8:0-4272  [003] ....  8255.767127: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=21763 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8255.767128: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=21763 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8255.767128: writeback_queue_io: bdi 8:0: older=4302897451 age=30000 enqueue=0 reason=periodic
              dd-4267  [002] ....  8255.913486: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [003] ....  8256.148983: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=141755 to_write=1024 wrote=5901
       flush-8:0-4272  [003] ....  8256.148987: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=7015 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8256.148988: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=7015 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8256.148988: writeback_queue_io: bdi 8:0: older=4302897833 age=30000 enqueue=0 reason=periodic
              dd-4267  [002] ....  8256.369200: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [003] ....  8256.628529: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=150440 to_write=1024 wrote=6452
       flush-8:0-4272  [003] ....  8256.628533: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=-1670 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8256.628536: writeback_pages_written: 50856
       flush-8:0-4272  [003] ....  8256.628538: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=147456
       flush-8:0-4272  [003] ....  8256.628545: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=149504
       flush-8:0-4272  [003] ....  8256.630190: writeback_pages_written: 1112
              dd-4267  [002] ....  8256.693086: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8256.909199: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8257.016875: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8257.041563: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8257.106900: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8257.246224: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [003] ....  8257.271030: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8257.348551: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8257.381224: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [003] ....  8257.393446: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=151552
       flush-8:0-4272  [003] ....  8257.398224: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=151552
       flush-8:0-4272  [003] ....  8257.400851: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=153600
              dd-4267  [003] ....  8257.434020: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8257.541621: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8257.840373: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8258.031388: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8258.054534: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
       flush-8:0-4272  [002] ....  8258.104190: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=175872
       flush-8:0-4272  [002] ....  8258.107511: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=175872
       flush-8:0-4272  [002] ....  8258.107923: writeback_pages_written: 512
              dd-4267  [002] ....  8258.126205: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8258.130305: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=176384
              dd-4267  [002] ....  8258.130881: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8258.133116: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8258.137594: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8258.257151: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8258.357770: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8258.406325: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8258.432124: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8258.452275: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8258.472362: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8258.492576: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [002] ....  8258.518908: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=176640
       flush-8:0-4272  [002] ....  8258.523446: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=176640
       flush-8:0-4272  [002] ....  8258.523453: writeback_pages_written: 256
              dd-4267  [002] ....  8258.545795: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8258.559572: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=176896
       flush-8:0-4272  [002] ....  8258.564143: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=176896
       flush-8:0-4272  [002] ....  8258.567279: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=178944
       flush-8:0-4272  [002] ....  8258.575307: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=180992
              dd-4267  [002] ....  8258.779190: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=192256
              dd-4267  [002] ....  8258.779212: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=192256
              dd-4267  [002] ....  8258.784476: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=193280
              dd-4267  [002] ....  8258.784497: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=193280
              dd-4267  [002] ....  8258.786546: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=195328
              dd-4267  [002] ....  8258.788222: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=197376
              dd-4267  [002] ....  8258.789834: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=199424
              dd-4267  [003] ....  8259.091794: writeback_congestion_wait: usec_timeout=100000 usec_delayed=9000
              dd-4267  [003] ....  8259.103932: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8259.142635: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
       flush-8:0-4272  [003] ....  8259.166727: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=200448
       flush-8:0-4272  [003] ....  8259.169783: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=200448
       flush-8:0-4272  [003] ....  8259.169790: writeback_pages_written: 256
              dd-4267  [003] ....  8259.183714: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
       flush-8:0-4272  [003] ....  8259.211788: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=200704
       flush-8:0-4272  [003] ....  8259.214924: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=200704
       flush-8:0-4272  [003] ....  8259.217800: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=202752
       flush-8:0-4272  [003] ....  8259.222920: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=204800
       flush-8:0-4272  [003] ....  8259.226184: writeback_pages_written: 5120
              dd-4267  [003] ....  8259.257092: writeback_congestion_wait: usec_timeout=100000 usec_delayed=35000
              dd-4267  [003] ....  8259.275740: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=205824
              dd-4267  [003] ....  8259.275756: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=205824
              dd-4267  [003] ....  8259.277624: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8259.298203: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8259.318745: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8259.451826: writeback_congestion_wait: usec_timeout=100000 usec_delayed=54000
              dd-4267  [003] ....  8259.465856: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8259.499584: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8259.518780: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8259.538457: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8259.558070: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [003] ....  8259.570146: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=206080
       flush-8:0-4272  [003] ....  8259.573737: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=206080
       flush-8:0-4272  [003] ....  8259.574164: writeback_pages_written: 512
              dd-4267  [003] ....  8259.577670: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [003] ....  8259.594106: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=206592
       flush-8:0-4272  [003] ....  8259.597848: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=206592
       flush-8:0-4272  [003] ....  8259.600521: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=208640
       flush-8:0-4272  [003] ....  8259.606747: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=210688
       flush-8:0-4272  [003] ....  8259.612775: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=212736
       flush-8:0-4272  [003] ....  8259.618277: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=214784
              dd-4267  [003] ....  8259.656171: writeback_congestion_wait: usec_timeout=100000 usec_delayed=37000
              dd-4267  [003] ....  8259.890390: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=216576
              dd-4267  [003] ....  8259.890404: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=216576
              dd-4267  [003] ....  8259.891900: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=218624
              dd-4267  [003] ....  8259.895754: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=220672
              dd-4267  [003] ....  8259.897308: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=222720
              dd-4267  [003] ....  8259.902780: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8259.908417: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8260.079480: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8260.084487: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=224768
              dd-4267  [003] ....  8260.084506: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=224768
              dd-4267  [003] ....  8260.107416: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8260.173675: writeback_congestion_wait: usec_timeout=100000 usec_delayed=38000
              dd-4267  [003] ....  8261.028667: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8261.148521: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [002] ....  8261.283270: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=26057 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8261.283272: writeback_queue_io: bdi 8:0: older=4302902970 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [002] ....  8261.285234: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=250624 to_write=1024 wrote=1024
       flush-8:0-4272  [002] ....  8261.285237: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=25033 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8261.285237: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=25033 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8261.285238: writeback_queue_io: bdi 8:0: older=4302902972 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [002] ....  8261.556566: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=258435 to_write=1024 wrote=4472
       flush-8:0-4272  [002] ....  8261.556570: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=17222 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8261.556571: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=17222 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8261.556571: writeback_queue_io: bdi 8:0: older=4302903244 age=30000 enqueue=0 reason=periodic
              dd-4267  [003] ....  8261.828188: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8261.930822: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=272384
              dd-4267  [003] ....  8261.934217: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=274432
              dd-4267  [003] ....  8262.188991: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [002] ....  8262.287291: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=276437 to_write=1024 wrote=6356
       flush-8:0-4272  [002] ....  8262.287296: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=-780 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8262.287298: writeback_pages_written: 51413
       flush-8:0-4272  [002] ....  8262.287300: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=272384
       flush-8:0-4272  [002] ....  8262.287308: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=274432
       flush-8:0-4272  [002] ....  8262.287312: writeback_pages_written: 0
##### CPU 1 buffer started ####
              dd-4267  [001] ....  8262.412091: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=276224
       flush-8:0-4272  [002] ....  8262.412109: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=276224
              dd-4267  [001] ....  8262.412113: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=276224
       flush-8:0-4272  [002] ....  8262.412273: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=276224
              dd-4267  [001] ....  8262.412843: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [002] ....  8262.413228: writeback_pages_written: 299
              dd-4267  [002] ....  8262.432804: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8262.452739: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8262.472704: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8262.546051: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8262.585998: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8262.635683: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8262.655672: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8262.675597: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8262.695554: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8262.717581: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8262.738142: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8262.758666: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8262.780860: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8262.801393: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8262.811726: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8262.821976: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8262.842505: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [002] ....  8262.855614: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=276736
       flush-8:0-4272  [002] ....  8262.862079: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=276736
              dd-4267  [001] ....  8262.863088: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=280832
              dd-4267  [001] ....  8262.864811: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=282880
       flush-8:0-4272  [002] ....  8262.865154: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=278784
              dd-4267  [001] ....  8262.866948: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=284928
              dd-4267  [001] ....  8262.868516: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=286976
              dd-4267  [002] ....  8262.873680: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [001] ....  8263.074676: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=282880
       flush-8:0-4272  [001] ....  8263.082377: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=284928
       flush-8:0-4272  [001] ....  8263.088659: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=286976
       flush-8:0-4272  [001] ....  8263.093872: writeback_pages_written: 12032
              dd-4267  [002] ....  8263.162787: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=288768
       flush-8:0-4272  [001] ....  8263.162806: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=288768
              dd-4267  [002] ....  8263.162807: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=288768
              dd-4267  [002] ....  8263.166798: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=290816
       flush-8:0-4272  [001] ....  8263.167706: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=288768
              dd-4267  [002] ....  8263.168416: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=292864
              dd-4267  [002] ....  8263.170061: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=294912
       flush-8:0-4272  [001] ....  8263.172041: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=290816
              dd-4267  [002] ....  8263.174392: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=296960
              dd-4267  [002] ....  8263.178723: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=299008
       flush-8:0-4272  [001] ....  8263.178800: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=292864
              dd-4267  [002] ....  8263.180744: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8263.184819: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [001] ....  8263.184948: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=294912
       flush-8:0-4272  [001] ....  8263.189635: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=296960
       flush-8:0-4272  [001] ....  8263.193882: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=299008
       flush-8:0-4272  [001] ....  8263.196897: writeback_pages_written: 11520
              dd-4267  [001] ....  8263.371538: writeback_congestion_wait: usec_timeout=100000 usec_delayed=22000
              dd-4267  [001] ....  8263.391105: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8263.397755: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8263.417164: writeback_congestion_wait: usec_timeout=100000 usec_delayed=15000
              dd-4267  [003] ....  8263.421709: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8263.424214: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8263.466360: writeback_congestion_wait: usec_timeout=100000 usec_delayed=43000
              dd-4267  [001] ....  8263.482457: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=300288
       flush-8:0-4272  [003] ....  8263.482473: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=300288
              dd-4267  [001] ....  8263.482480: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=300288
              dd-4267  [001] ....  8263.483957: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=302336
              dd-4267  [001] ....  8263.485413: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=304384
       flush-8:0-4272  [003] ....  8263.485563: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=300288
       flush-8:0-4272  [003] ....  8263.488444: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=302336
       flush-8:0-4272  [003] ....  8263.493375: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=304384
       flush-8:0-4272  [003] ....  8263.497569: writeback_pages_written: 5888
              dd-4267  [003] ....  8263.586187: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8263.621915: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306176
       flush-8:0-4272  [001] ....  8263.621932: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306176
              dd-4267  [003] ....  8263.621932: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306176
       flush-8:0-4272  [001] ....  8263.624277: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306176
       flush-8:0-4272  [001] ....  8263.624284: writeback_pages_written: 256
              dd-4267  [001] ....  8263.713731: writeback_congestion_wait: usec_timeout=100000 usec_delayed=92000
              dd-4267  [003] ....  8263.762825: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8263.772571: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8263.782397: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8263.786346: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8263.792063: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8263.794373: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8263.802026: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8263.823655: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306432
       flush-8:0-4272  [003] ....  8263.823672: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306432
              dd-4267  [001] ....  8263.823673: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306432
              dd-4267  [001] ....  8263.826272: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8263.828813: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8263.831328: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8263.835578: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8263.843013: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8263.852831: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8263.862617: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8263.872450: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8263.894518: writeback_congestion_wait: usec_timeout=100000 usec_delayed=18000
              dd-4267  [001] ....  8263.898954: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8263.942418: writeback_congestion_wait: usec_timeout=100000 usec_delayed=44000
              dd-4267  [001] ....  8263.962011: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306688
              dd-4267  [001] ....  8263.962030: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=306688
              dd-4267  [001] ....  8263.963721: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=308736
              dd-4267  [001] ....  8263.965316: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=310784
              dd-4267  [001] ....  8263.966932: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=312832
              dd-4267  [003] ....  8263.969872: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8264.003468: writeback_congestion_wait: usec_timeout=100000 usec_delayed=31000
              dd-4267  [001] ....  8264.005330: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8264.007494: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8264.172407: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8264.179889: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8264.185841: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8264.190127: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8264.196118: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8264.200459: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8264.206405: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8264.210702: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8264.216654: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8264.221002: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8264.226948: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8264.231270: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8264.237169: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8264.241569: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8264.247491: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8264.249307: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=313856
              dd-4267  [003] ....  8264.249328: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=313856
              dd-4267  [003] ....  8264.250997: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=315904
              dd-4267  [003] ....  8264.251768: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [003] ....  8264.255542: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=317952
              dd-4267  [003] ....  8264.257148: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=320000
              dd-4267  [001] ....  8264.257726: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8264.262047: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8264.268005: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8264.270266: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=321792
              dd-4267  [001] ....  8264.271910: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=323840
              dd-4267  [003] ....  8264.373787: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8264.530715: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [002] ....  8264.602404: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=325120
       flush-8:0-4272  [002] ....  8264.605571: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=325120
       flush-8:0-4272  [002] ....  8264.608617: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=327168
       flush-8:0-4272  [002] ....  8264.613823: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=329216
       flush-8:0-4272  [002] ....  8264.617055: writeback_pages_written: 5376
              dd-4267  [003] ....  8264.706635: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8264.777165: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=330496
       flush-8:0-4272  [002] ....  8264.777184: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=330496
              dd-4267  [003] ....  8264.777184: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=330496
       flush-8:0-4272  [002] ....  8264.780903: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=330496
       flush-8:0-4272  [002] ....  8264.780910: writeback_pages_written: 256
              dd-4267  [002] ....  8264.781921: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8264.821810: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8264.845760: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8264.875710: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [002] ....  8264.887380: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=330752
       flush-8:0-4272  [002] ....  8264.890912: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=330752
       flush-8:0-4272  [002] ....  8264.890919: writeback_pages_written: 256
              dd-4267  [002] ....  8264.901596: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [002] ....  8264.941549: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8264.965702: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
       flush-8:0-4272  [002] ....  8264.983529: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=331008
       flush-8:0-4272  [002] ....  8264.987449: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=331008
       flush-8:0-4272  [002] ....  8264.987456: writeback_pages_written: 256
              dd-4267  [002] ....  8264.991277: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [002] ....  8265.017144: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=331264
       flush-8:0-4272  [002] ....  8265.021013: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=331264
       flush-8:0-4272  [002] ....  8265.024087: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=333312
       flush-8:0-4272  [002] ....  8265.030297: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=335360
       flush-8:0-4272  [002] ....  8265.036048: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=337408
       flush-8:0-4272  [002] ....  8265.039951: writeback_pages_written: 7168
              dd-4267  [002] ....  8265.057223: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8265.077361: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8265.097530: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8265.117633: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8265.143609: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8265.163795: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8265.183921: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8265.204076: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8265.211688: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=338432
       flush-8:0-4272  [002] ....  8265.215220: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=338432
       flush-8:0-4272  [002] ....  8265.215226: writeback_pages_written: 256
              dd-4267  [002] ....  8265.224208: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8265.231866: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=338688
       flush-8:0-4272  [002] ....  8265.235287: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=338688
       flush-8:0-4272  [002] ....  8265.238119: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=340736
       flush-8:0-4272  [002] ....  8265.243774: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=342784
       flush-8:0-4272  [002] ....  8265.249266: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=344832
      flush-0:16-4223  [003] ....  8265.302236: writeback_start: bdi 0:16: sb_dev 0:0 nr_pages=30929 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [003] ....  8265.302237: writeback_queue_io: bdi 0:16: older=4302906992 age=30000 enqueue=0 reason=periodic
      flush-0:16-4223  [003] ....  8265.302238: writeback_written: bdi 0:16: sb_dev 0:0 nr_pages=30929 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [003] ....  8265.302243: writeback_pages_written: 0
              dd-4267  [002] ....  8265.346236: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8265.415236: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=346368
              dd-4267  [002] ....  8265.415253: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=346368
              dd-4267  [002] ....  8265.416806: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=348416
              dd-4267  [002] ....  8265.593749: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8265.597894: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=349696
              dd-4267  [002] ....  8265.597911: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=349696
              dd-4267  [002] ....  8265.599469: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=351744
              dd-4267  [002] ....  8265.601033: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=353792
              dd-4267  [002] ....  8265.603547: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [001] ....  8265.604475: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=351744
       flush-8:0-4272  [001] ....  8265.612575: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=353792
       flush-8:0-4272  [001] ....  8265.615654: writeback_pages_written: 5120
       flush-8:0-4272  [001] ....  8265.779243: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=354816
              dd-4267  [002] ....  8265.780668: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [003] ....  8266.597438: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=354816
       flush-8:0-4272  [003] ....  8266.597448: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=34303 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.597449: writeback_queue_io: bdi 8:0: older=4302908287 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [003] ....  8266.599938: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=356096 to_write=1024 wrote=1024
       flush-8:0-4272  [003] ....  8266.599941: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=33279 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.599942: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=33279 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.599942: writeback_queue_io: bdi 8:0: older=4302908290 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [003] ....  8266.622191: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=367701 to_write=1024 wrote=11605
       flush-8:0-4272  [003] ....  8266.622196: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=21674 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.622196: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=21674 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.622197: writeback_queue_io: bdi 8:0: older=4302908312 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [003] ....  8266.624595: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302921148 age=55 index=368861 to_write=1024 wrote=1160
       flush-8:0-4272  [003] ....  8266.624597: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=20514 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.624597: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=20514 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.624598: writeback_queue_io: bdi 8:0: older=4302908315 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [003] ....  8266.625314: writeback_single_inode: bdi 8:0: ino=13 state=I_DIRTY_PAGES dirtied_when=4302938315 age=38 index=368861 to_write=1024 wrote=145
       flush-8:0-4272  [003] ....  8266.625316: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=20369 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.625317: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=20369 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.625317: writeback_queue_io: bdi 8:0: older=4302908315 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [003] ....  8266.625317: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=20369 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8266.625319: writeback_pages_written: 14190
       flush-8:0-4272  [003] ....  8266.825892: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=368896
       flush-8:0-4272  [003] ....  8266.826219: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=368896
       flush-8:0-4272  [003] ....  8266.827173: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=369408
       flush-8:0-4272  [003] ....  8266.829558: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=370688
       flush-8:0-4272  [003] ....  8266.832712: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=372736
       flush-8:0-4272  [003] ....  8266.838252: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=374784
       flush-8:0-4272  [003] ....  8266.842368: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=376832
              dd-4267  [002] ....  8266.935447: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [001] ....  8266.999637: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=378880
       flush-8:0-4272  [001] ....  8267.145618: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=380928
       flush-8:0-4272  [001] ....  8267.148792: writeback_pages_written: 13202
              dd-4267  [001] ....  8267.351195: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8267.361164: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [003] ....  8267.370981: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8267.380830: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8267.390569: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8267.400401: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8267.410227: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8267.420027: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8267.429869: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8267.439597: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8267.449465: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8267.459294: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8267.469116: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8267.477442: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8267.481950: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8267.486445: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8267.490959: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8267.609943: writeback_congestion_wait: usec_timeout=100000 usec_delayed=97000
              dd-4267  [003] ....  8267.614031: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8267.618624: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8267.628386: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8267.635454: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8267.639944: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8267.644410: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8267.648982: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8267.654300: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8267.658700: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8267.667206: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8267.669674: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8267.674305: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8267.706797: writeback_congestion_wait: usec_timeout=100000 usec_delayed=18000
              dd-4267  [001] ....  8267.711275: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8267.719838: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8267.729680: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8267.737881: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8267.742316: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8267.752567: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=382208
       flush-8:0-4272  [003] ....  8267.752583: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=382208
              dd-4267  [001] ....  8267.752584: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=382208
              dd-4267  [001] ....  8267.754070: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=384256
              dd-4267  [001] ....  8267.755594: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=386304
              dd-4267  [001] ....  8267.757105: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=388352
       flush-8:0-4272  [003] ....  8267.757895: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=382208
              dd-4267  [001] ....  8267.758599: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=390400
              dd-4267  [001] ....  8267.760122: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=392448
       flush-8:0-4272  [003] ....  8267.760868: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=384256
       flush-8:0-4272  [003] ....  8267.768745: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=386304
       flush-8:0-4272  [003] ....  8267.776742: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=388352
       flush-8:0-4272  [003] ....  8267.783814: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=390400
       flush-8:0-4272  [003] ....  8267.790773: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=392448
       flush-8:0-4272  [003] ....  8267.797382: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=394496
       flush-8:0-4272  [003] ....  8267.803995: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=396544
       flush-8:0-4272  [003] ....  8267.809316: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=398592
       flush-8:0-4272  [003] ....  8267.813952: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=400640
       flush-8:0-4272  [003] ....  8267.818094: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=402688
              dd-4267  [002] ....  8267.905908: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [001] ....  8267.938284: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=404736
       flush-8:0-4272  [001] ....  8267.943129: writeback_pages_written: 24320
              dd-4267  [001] ....  8268.163764: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.173986: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [003] ....  8268.184334: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8268.194576: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.198511: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=406528
       flush-8:0-4272  [003] ....  8268.198524: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=406528
              dd-4267  [001] ....  8268.198525: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=406528
       flush-8:0-4272  [003] ....  8268.202131: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=406528
       flush-8:0-4272  [003] ....  8268.204478: writeback_pages_written: 1280
              dd-4267  [001] ....  8268.204959: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.209828: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=407808
       flush-8:0-4272  [003] ....  8268.209837: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=407808
              dd-4267  [001] ....  8268.209837: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=407808
       flush-8:0-4272  [003] ....  8268.212352: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=407808
       flush-8:0-4272  [003] ....  8268.212358: writeback_pages_written: 256
              dd-4267  [001] ....  8268.215084: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8268.227002: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [001] ....  8268.237303: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.247561: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.257823: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [003] ....  8268.268081: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.278409: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.288799: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8268.298937: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [003] ....  8268.309285: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.319485: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.323556: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408064
       flush-8:0-4272  [003] ....  8268.323569: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408064
              dd-4267  [001] ....  8268.323570: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408064
       flush-8:0-4272  [003] ....  8268.326837: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408064
       flush-8:0-4272  [003] ....  8268.326843: writeback_pages_written: 256
              dd-4267  [001] ....  8268.329773: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8268.333919: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8268.339844: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.345842: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.350165: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8268.356091: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.358321: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8268.362952: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8268.383814: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8268.437266: writeback_congestion_wait: usec_timeout=100000 usec_delayed=52000
              dd-4267  [001] ....  8268.442101: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8268.444583: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8268.448725: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8268.457661: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8268.467635: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.477643: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8268.487599: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.497414: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.501582: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8268.507385: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8268.511569: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8268.517350: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.521564: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8268.527355: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8268.531537: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8268.533925: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8268.539707: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8268.561714: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408320
       flush-8:0-4272  [003] ....  8268.561729: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408320
              dd-4267  [001] ....  8268.561736: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408320
              dd-4267  [001] ....  8268.562225: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8268.564550: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [003] ....  8268.566727: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408320
       flush-8:0-4272  [003] ....  8268.566734: writeback_pages_written: 256
              dd-4267  [001] ....  8268.567268: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8268.569268: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [003] ....  8268.575561: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8268.579702: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8268.585533: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.589691: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8268.595494: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.599670: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8268.605495: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8268.609659: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8268.615463: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8268.617783: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408576
       flush-8:0-4272  [001] ....  8268.617795: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408576
              dd-4267  [003] ....  8268.617796: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408576
              dd-4267  [003] ....  8268.619270: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=410624
              dd-4267  [003] ....  8268.622449: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=412672
       flush-8:0-4272  [001] ....  8268.622860: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=408576
              dd-4267  [003] ....  8268.623989: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=414720
       flush-8:0-4272  [001] ....  8268.625976: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=410624
              dd-4267  [003] ....  8268.635422: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [001] ....  8268.840617: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=412672
       flush-8:0-4272  [001] ....  8268.848404: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=414720
              dd-4267  [003] ....  8268.930278: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=419584
              dd-4267  [003] ....  8268.930296: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=419584
              dd-4267  [003] ....  8268.931994: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=421632
              dd-4267  [003] ....  8268.934099: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=423680
              dd-4267  [003] ....  8268.935726: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=425728
              dd-4267  [003] ....  8268.937352: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=427776
              dd-4267  [003] ....  8268.939046: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=429824
              dd-4267  [001] ....  8269.039200: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8269.139135: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8269.239185: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8269.408981: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8269.584400: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=431104
              dd-4267  [001] ....  8269.584469: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=431104
              dd-4267  [001] ....  8269.593152: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=431360
              dd-4267  [001] ....  8269.597285: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=431616
              dd-4267  [001] ....  8269.597492: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=431616
              dd-4267  [001] ....  8269.598967: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=433664
              dd-4267  [001] ....  8269.614402: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=434944
              dd-4267  [001] ....  8269.614421: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=434944
              dd-4267  [001] ....  8269.616055: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=436992
              dd-4267  [002] ....  8269.637753: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8269.680296: writeback_congestion_wait: usec_timeout=100000 usec_delayed=29000
              dd-4267  [002] ....  8269.817372: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [002] ....  8269.864374: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=437248
       flush-8:0-4272  [002] ....  8269.868771: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=437248
              dd-4267  [001] ....  8269.871123: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=443392
       flush-8:0-4272  [002] ....  8269.871919: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=439296
              dd-4267  [001] ....  8269.875734: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=445440
              dd-4267  [001] ....  8269.877352: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=447488
       flush-8:0-4272  [002] ....  8269.878824: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=441344
              dd-4267  [001] ....  8269.881258: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=449536
              dd-4267  [002] ....  8269.885307: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8269.889882: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8269.936593: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=443392
       flush-8:0-4272  [002] ....  8269.942755: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=445440
       flush-8:0-4272  [002] ....  8269.948672: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=447488
       flush-8:0-4272  [002] ....  8269.953747: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=449536
       flush-8:0-4272  [002] ....  8269.956079: writeback_pages_written: 13056
              dd-4267  [001] ....  8270.015147: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=450304
       flush-8:0-4272  [002] ....  8270.015168: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=450304
              dd-4267  [001] ....  8270.015174: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=450304
       flush-8:0-4272  [002] ....  8270.018425: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=450304
       flush-8:0-4272  [002] ....  8270.019354: writeback_pages_written: 768
              dd-4267  [002] ....  8270.064531: writeback_congestion_wait: usec_timeout=100000 usec_delayed=48000
              dd-4267  [002] ....  8270.106521: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=451072
              dd-4267  [002] ....  8270.106542: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=451072
              dd-4267  [002] ....  8270.109830: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=452352
              dd-4267  [002] ....  8270.111279: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=454400
       flush-8:0-4272  [002] ....  8270.274267: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=455424
       flush-8:0-4272  [002] ....  8270.277302: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=455424
       flush-8:0-4272  [002] ....  8270.277308: writeback_pages_written: 256
              dd-4267  [002] ....  8270.281460: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
      flush-0:16-4223  [001] ....  8270.300792: writeback_start: bdi 0:16: sb_dev 0:0 nr_pages=34352 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [001] ....  8270.300795: writeback_queue_io: bdi 0:16: older=4302911993 age=30000 enqueue=1 reason=periodic
      flush-0:16-4223  [001] ....  8270.300805: writeback_single_inode: bdi 0:16: ino=4874359 state= dirtied_when=4302911988 age=65 index=0 to_write=1024 wrote=0
      flush-0:16-4223  [001] ....  8270.300808: writeback_written: bdi 0:16: sb_dev 0:0 nr_pages=34352 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [001] ....  8270.300809: writeback_start: bdi 0:16: sb_dev 0:0 nr_pages=34352 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [001] ....  8270.300809: writeback_queue_io: bdi 0:16: older=4302911993 age=30000 enqueue=0 reason=periodic
      flush-0:16-4223  [001] ....  8270.300809: writeback_written: bdi 0:16: sb_dev 0:0 nr_pages=34352 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [001] ....  8270.300814: writeback_pages_written: 0
              dd-4267  [002] ....  8270.302002: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8270.316621: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8270.343276: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8270.373944: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8270.394471: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8270.415038: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8270.435568: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8270.472530: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8270.513637: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8270.517612: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=455680
              dd-4267  [002] ....  8270.517625: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=455680
              dd-4267  [002] ....  8270.554764: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8270.558840: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=455936
              dd-4267  [002] ....  8270.558852: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=455936
              dd-4267  [002] ....  8270.560339: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=457984
              dd-4267  [002] ....  8270.561810: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=460032
              dd-4267  [002] ....  8270.564781: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8270.575044: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
       flush-8:0-4272  [002] ....  8270.590611: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=461056
       flush-8:0-4272  [002] ....  8270.594701: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=461056
       flush-8:0-4272  [002] ....  8270.598284: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=463104
       flush-8:0-4272  [002] ....  8270.600496: writeback_pages_written: 2304
              dd-4267  [002] ....  8270.601943: writeback_congestion_wait: usec_timeout=100000 usec_delayed=9000
              dd-4267  [002] ....  8270.633486: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
       flush-8:0-4272  [002] ....  8270.657444: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=463360
       flush-8:0-4272  [002] ....  8270.660617: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=463360
       flush-8:0-4272  [002] ....  8270.660624: writeback_pages_written: 256
              dd-4267  [002] ....  8270.673407: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8270.703164: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8270.723126: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8270.743057: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8270.883761: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=463616
              dd-4267  [002] ....  8270.883782: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=463616
              dd-4267  [002] ....  8270.885303: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=465664
              dd-4267  [002] ....  8270.886785: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=467712
              dd-4267  [002] ....  8270.888382: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=469760
              dd-4267  [002] ....  8270.889945: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=471808
       flush-8:0-4272  [001] ....  8271.327896: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=467712
       flush-8:0-4272  [001] ....  8271.334733: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=469760
       flush-8:0-4272  [001] ....  8271.340404: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=471808
       flush-8:0-4272  [001] ....  8271.345681: writeback_pages_written: 10240
              dd-4267  [002] ....  8271.421367: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=473856
       flush-8:0-4272  [001] ....  8271.421387: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=473856
              dd-4267  [002] ....  8271.421390: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=473856
              dd-4267  [002] ....  8271.423065: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=475904
              dd-4267  [002] ....  8271.425186: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=477952
       flush-8:0-4272  [001] ....  8271.425640: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=473856
              dd-4267  [002] ....  8271.426874: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=480000
       flush-8:0-4272  [001] ....  8271.492067: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=475904
       flush-8:0-4272  [001] ....  8271.497902: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=477952
       flush-8:0-4272  [001] ....  8271.502398: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=480000
       flush-8:0-4272  [001] ....  8271.505244: writeback_pages_written: 7424
              dd-4267  [002] ....  8271.678718: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8271.777600: writeback_congestion_wait: usec_timeout=100000 usec_delayed=98000
              dd-4267  [001] ....  8271.817785: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8271.827867: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8271.837939: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8271.847990: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8271.858059: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8271.868133: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8271.878146: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8271.888271: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8271.898364: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8271.903945: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8271.908242: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8271.910635: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8271.918477: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8271.941108: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8271.945668: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8271.948169: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8271.954621: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8271.958741: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=481280
       flush-8:0-4272  [003] ....  8271.958757: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=481280
              dd-4267  [001] ....  8271.958758: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=481280
              dd-4267  [001] ....  8271.960266: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=483328
              dd-4267  [001] ....  8271.961761: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=485376
       flush-8:0-4272  [003] ....  8271.962732: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=481280
              dd-4267  [001] ....  8271.963264: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=487424
              dd-4267  [001] ....  8271.964441: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8271.974773: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8272.009232: writeback_congestion_wait: usec_timeout=100000 usec_delayed=30000
              dd-4267  [001] ....  8272.029378: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8272.035038: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8272.039234: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8272.045124: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8272.049333: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8272.051819: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8272.057558: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8272.078026: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8272.082691: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8272.090003: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8272.100194: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8272.110246: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8272.120279: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8272.130351: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8272.136822: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8272.142901: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8272.183805: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=488192
              dd-4267  [001] ....  8272.183823: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=488192
              dd-4267  [001] ....  8272.195352: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=489216
              dd-4267  [001] ....  8272.195373: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=489216
              dd-4267  [001] ....  8272.197069: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=491264
              dd-4267  [001] ....  8272.198688: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=493312
              dd-4267  [001] ....  8272.200844: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=495360
              dd-4267  [001] ....  8272.202448: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=497408
              dd-4267  [003] ....  8272.302380: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8272.402340: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
##### CPU 0 buffer started ####
              dd-4267  [000] ....  8272.587442: writeback_congestion_wait: usec_timeout=100000 usec_delayed=95000
              dd-4267  [000] ....  8272.589707: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8272.594405: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8272.603546: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [000] ....  8272.613529: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8272.617066: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=498176
       flush-8:0-4272  [002] ....  8272.617081: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=498176
              dd-4267  [000] ....  8272.617086: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=498176
              dd-4267  [000] ....  8272.619094: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=500224
       flush-8:0-4272  [002] ....  8272.620566: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=498176
              dd-4267  [000] ....  8272.620634: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=502272
              dd-4267  [000] ....  8272.622174: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=504320
       flush-8:0-4272  [002] ....  8272.623787: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=500224
       flush-8:0-4272  [002] ....  8272.629276: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=502272
       flush-8:0-4272  [002] ....  8272.633956: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=504320
       flush-8:0-4272  [002] ....  8272.637736: writeback_pages_written: 7936
              dd-4267  [003] ....  8272.728244: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8272.816879: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=506112
       flush-8:0-4272  [002] ....  8272.816896: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=506112
              dd-4267  [003] ....  8272.816897: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=506112
       flush-8:0-4272  [002] ....  8272.820965: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=506112
       flush-8:0-4272  [002] ....  8272.822144: writeback_pages_written: 768
              dd-4267  [000] ....  8272.883681: writeback_congestion_wait: usec_timeout=100000 usec_delayed=66000
              dd-4267  [000] ....  8272.901202: writeback_congestion_wait: usec_timeout=100000 usec_delayed=9000
              dd-4267  [000] ....  8272.911472: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8272.921776: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8272.932042: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8272.942300: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8272.952536: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8272.962856: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8272.972948: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8272.977268: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8272.983253: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8272.989219: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8272.993531: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8272.999473: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8273.003785: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.009743: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.014080: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8273.020012: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8273.024305: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8273.030315: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.033647: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=506880
       flush-8:0-4272  [002] ....  8273.033663: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=506880
              dd-4267  [000] ....  8273.033664: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=506880
              dd-4267  [000] ....  8273.035857: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=508928
       flush-8:0-4272  [002] ....  8273.037953: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=506880
       flush-8:0-4272  [000] ....  8273.118847: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=508928
       flush-8:0-4272  [000] ....  8273.125008: writeback_pages_written: 2816
              dd-4267  [001] ....  8273.201257: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=509696
       flush-8:0-4272  [000] ....  8273.201275: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=509696
              dd-4267  [001] ....  8273.201283: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=509696
              dd-4267  [001] ....  8273.202988: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=511744
              dd-4267  [001] ....  8273.204095: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8273.204351: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [000] ....  8273.207289: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=509696
       flush-8:0-4272  [000] ....  8273.210162: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=511744
              dd-4267  [001] ....  8273.214288: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
       flush-8:0-4272  [000] ....  8273.214741: writeback_pages_written: 2816
              dd-4267  [002] ....  8273.220237: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.224624: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.230590: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8273.234929: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8273.240847: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.245152: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8273.252550: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8273.256650: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8273.262386: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.266457: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.272133: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.274530: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8273.276786: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.281514: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8273.303716: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=512512
       flush-8:0-4272  [002] ....  8273.303734: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=512512
              dd-4267  [000] ....  8273.303742: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=512512
              dd-4267  [000] ....  8273.305253: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=514560
              dd-4267  [000] ....  8273.306756: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=516608
       flush-8:0-4272  [002] ....  8273.308082: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=512512
              dd-4267  [000] ....  8273.308310: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=518656
              dd-4267  [001] ....  8273.309955: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=520704
       flush-8:0-4272  [002] ....  8273.310937: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=514560
       flush-8:0-4272  [002] ....  8273.317219: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=516608
       flush-8:0-4272  [002] ....  8273.322499: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=518656
       flush-8:0-4272  [002] ....  8273.327288: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=520704
       flush-8:0-4272  [002] ....  8273.331707: writeback_pages_written: 10240
              dd-4267  [003] ....  8273.411759: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8273.467526: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=522752
       flush-8:0-4272  [002] ....  8273.467545: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=522752
              dd-4267  [003] ....  8273.467547: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=522752
              dd-4267  [003] ....  8273.469243: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=524800
              dd-4267  [003] ....  8273.470853: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=526848
       flush-8:0-4272  [002] ....  8273.471531: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=522752
              dd-4267  [003] ....  8273.472458: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [002] ....  8273.476836: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=524800
       flush-8:0-4272  [002] ....  8273.482005: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=526848
       flush-8:0-4272  [002] ....  8273.484987: writeback_pages_written: 5376
              dd-4267  [001] ....  8273.576718: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8273.630560: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=528128
       flush-8:0-4272  [002] ....  8273.630579: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=528128
              dd-4267  [001] ....  8273.630580: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=528128
              dd-4267  [001] ....  8273.632915: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=530176
       flush-8:0-4272  [002] ....  8273.634217: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=528128
       flush-8:0-4272  [002] ....  8273.637881: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=530176
       flush-8:0-4272  [002] ....  8273.641495: writeback_pages_written: 3328
              dd-4267  [003] ....  8273.733643: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8273.803010: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.812834: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8273.822636: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8273.832432: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8273.842319: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8273.852182: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8273.861874: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8273.872685: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [000] ....  8273.877222: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8273.881324: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.887043: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8273.891135: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.896818: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.900890: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.906661: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8273.910747: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8273.916474: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.920550: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8273.926259: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8273.930361: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.936114: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.940195: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.942458: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=531456
       flush-8:0-4272  [002] ....  8273.942469: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=531456
              dd-4267  [000] ....  8273.942470: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=531456
              dd-4267  [000] ....  8273.943968: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=533504
              dd-4267  [000] ....  8273.945828: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [002] ....  8273.946273: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=531456
       flush-8:0-4272  [002] ....  8273.949248: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=533504
              dd-4267  [000] ....  8273.949900: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] .N..  8273.954420: writeback_pages_written: 3584
              dd-4267  [002] ....  8273.955696: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8273.959774: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8273.965545: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8273.969610: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.975340: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.979456: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8273.987689: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8273.992204: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8274.026101: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8274.031783: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8274.035939: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.040032: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.045692: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8274.049762: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.055579: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8274.059648: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8274.065335: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8274.069439: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8274.075170: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8274.079279: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8274.086684: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8274.091051: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.097019: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8274.102975: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8274.107277: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.111257: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=535040
       flush-8:0-4272  [002] ....  8274.111272: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=535040
              dd-4267  [000] ....  8274.111280: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=535040
              dd-4267  [000] ....  8274.114916: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=537088
       flush-8:0-4272  [002] ....  8274.115545: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=535040
       flush-8:0-4272  [002] ....  8274.118567: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=537088
              dd-4267  [001] ....  8274.120214: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=538112
              dd-4267  [001] ....  8274.123035: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8274.123531: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=538112
       flush-8:0-4272  [002] ....  8274.124428: writeback_pages_written: 3840
              dd-4267  [001] ....  8274.125289: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.127722: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8274.130996: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8274.133237: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.135492: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8274.175907: writeback_congestion_wait: usec_timeout=100000 usec_delayed=40000
              dd-4267  [000] ....  8274.290285: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8274.336268: writeback_congestion_wait: usec_timeout=100000 usec_delayed=46000
              dd-4267  [002] ....  8274.340912: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8274.347609: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8274.357660: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8274.367884: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8274.373602: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.377990: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8274.383934: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8274.386279: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=538880
       flush-8:0-4272  [002] ....  8274.386291: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=538880
              dd-4267  [000] ....  8274.386292: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=538880
              dd-4267  [000] ....  8274.387774: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=540928
       flush-8:0-4272  [002] ....  8274.390547: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=538880
              dd-4267  [000] ....  8274.391322: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=542976
              dd-4267  [000] ....  8274.392891: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=545024
       flush-8:0-4272  [000] ....  8274.536676: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=540928
       flush-8:0-4272  [000] ....  8274.543142: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=542976
       flush-8:0-4272  [000] ....  8274.549982: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=545024
       flush-8:0-4272  [000] ....  8274.554681: writeback_pages_written: 7680
              dd-4267  [001] ....  8274.657074: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=546560
       flush-8:0-4272  [000] ....  8274.657090: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=546560
              dd-4267  [001] ....  8274.657093: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=546560
              dd-4267  [001] ....  8274.661141: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=548608
       flush-8:0-4272  [000] ....  8274.662544: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=546560
              dd-4267  [001] ....  8274.662787: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=550656
       flush-8:0-4272  [000] ....  8274.666715: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=548608
              dd-4267  [001] ....  8274.666972: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=552704
              dd-4267  [001] ....  8274.670755: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=554752
              dd-4267  [001] ....  8274.673577: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [000] ....  8274.674196: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=550656
              dd-4267  [001] ....  8274.677756: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [000] ....  8274.680418: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=552704
       flush-8:0-4272  [000] ....  8274.685640: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=554752
       flush-8:0-4272  [000] ....  8274.688771: writeback_pages_written: 9216
              dd-4267  [003] ....  8274.779091: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8274.843553: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8274.853430: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8274.863405: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8274.870132: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8274.874634: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8274.878968: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8274.883432: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8274.887848: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8274.930432: writeback_congestion_wait: usec_timeout=100000 usec_delayed=37000
              dd-4267  [000] ....  8274.932637: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8274.943276: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8274.943281: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=555776
       flush-8:0-4272  [002] ....  8274.943292: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=555776
       flush-8:0-4272  [002] ....  8274.947234: writeback_pages_written: 256
              dd-4267  [000] ....  8274.947540: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=556032
       flush-8:0-4272  [002] ....  8274.947548: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=556032
              dd-4267  [000] ....  8274.947549: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=556032
              dd-4267  [000] ....  8274.949050: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=558080
       flush-8:0-4272  [002] ....  8274.950731: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=556032
       flush-8:0-4272  [002] ....  8274.954025: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=558080
       flush-8:0-4272  [002] ....  8274.958120: writeback_pages_written: 3072
              dd-4267  [001] ....  8274.962461: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=559104
       flush-8:0-4272  [002] ....  8274.962480: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=559104
              dd-4267  [001] ....  8274.962481: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=559104
       flush-8:0-4272  [002] ....  8274.965051: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=559104
       flush-8:0-4272  [002] ....  8274.966440: writeback_pages_written: 1024
              dd-4267  [001] ....  8274.971338: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=560128
       flush-8:0-4272  [002] ....  8274.971354: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=560128
              dd-4267  [001] ....  8274.971355: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=560128
       flush-8:0-4272  [002] ....  8274.973875: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=560128
       flush-8:0-4272  [002] ....  8274.974803: writeback_pages_written: 768
              dd-4267  [001] ....  8274.976380: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=560896
       flush-8:0-4272  [002] ....  8274.976392: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=560896
              dd-4267  [001] ....  8274.976393: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=560896
       flush-8:0-4272  [002] ....  8274.980232: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=560896
              dd-4267  [001] ....  8274.981786: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=561920
       flush-8:0-4272  [002] ....  8274.982195: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=561920
              dd-4267  [001] ....  8274.982196: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=562176
       flush-8:0-4272  [002] ....  8274.985722: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=562176
       flush-8:0-4272  [002] ....  8274.988800: writeback_pages_written: 2816
              dd-4267  [003] ....  8275.082867: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8275.128437: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8275.138326: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=563712
       flush-8:0-4272  [002] ....  8275.138340: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=563712
              dd-4267  [000] ....  8275.138341: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=563712
       flush-8:0-4272  [002] ....  8275.141461: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=563712
       flush-8:0-4272  [002] ....  8275.141469: writeback_pages_written: 256
              dd-4267  [002] ....  8275.194016: writeback_congestion_wait: usec_timeout=100000 usec_delayed=55000
              dd-4267  [000] ....  8275.198421: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8275.292553: writeback_congestion_wait: usec_timeout=100000 usec_delayed=69000
              dd-4267  [000] ....  8275.296650: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=563968
       flush-8:0-4272  [002] ....  8275.296664: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=563968
              dd-4267  [000] ....  8275.296665: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=563968
              dd-4267  [000] ....  8275.297205: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
      flush-0:16-4223  [003] ....  8275.297726: writeback_start: bdi 0:16: sb_dev 0:0 nr_pages=35970 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [003] ....  8275.297730: writeback_queue_io: bdi 0:16: older=4302916993 age=30000 enqueue=2 reason=periodic
      flush-0:16-4223  [003] ....  8275.297741: writeback_single_inode: bdi 0:16: ino=1573485 state= dirtied_when=4302912037 age=65 index=0 to_write=1024 wrote=0
      flush-0:16-4223  [003] ....  8275.297765: writeback_single_inode: bdi 0:16: ino=1573501 state= dirtied_when=4302915356 age=61 index=24 to_write=1024 wrote=1
      flush-0:16-4223  [003] ....  8275.297769: writeback_written: bdi 0:16: sb_dev 0:0 nr_pages=35969 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [003] ....  8275.297770: writeback_start: bdi 0:16: sb_dev 0:0 nr_pages=35969 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [003] ....  8275.297770: writeback_queue_io: bdi 0:16: older=4302916993 age=30000 enqueue=0 reason=periodic
      flush-0:16-4223  [003] ....  8275.297771: writeback_written: bdi 0:16: sb_dev 0:0 nr_pages=35969 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [003] ....  8275.297777: writeback_pages_written: 1
       flush-8:0-4272  [002] ....  8275.300356: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=563968
       flush-8:0-4272  [002] ....  8275.300363: writeback_pages_written: 256
              dd-4267  [000] ....  8275.301842: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8275.307614: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8275.312153: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8275.319068: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8275.328375: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8275.338401: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8275.342574: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=564224
       flush-8:0-4272  [000] ....  8275.342588: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=564224
              dd-4267  [002] ....  8275.342589: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=564224
              dd-4267  [002] ....  8275.346229: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=566272
       flush-8:0-4272  [000] ....  8275.346586: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=564224
              dd-4267  [002] ....  8275.347786: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=568320
       flush-8:0-4272  [000] ....  8275.349746: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=566272
              dd-4267  [002] ....  8275.351519: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=570368
              dd-4267  [002] ....  8275.353984: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [000] ....  8275.355780: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=568320
              dd-4267  [002] ....  8275.356369: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [000] ....  8275.361337: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=570368
       flush-8:0-4272  [000] ....  8275.363817: writeback_pages_written: 6912
              dd-4267  [000] ....  8275.455701: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8275.493067: writeback_congestion_wait: usec_timeout=100000 usec_delayed=14000
              dd-4267  [002] ....  8275.513157: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8275.523329: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8275.533356: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8275.536894: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=571136
       flush-8:0-4272  [002] ....  8275.536907: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=571136
              dd-4267  [000] ....  8275.536915: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=571136
       flush-8:0-4272  [002] ....  8275.540007: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=571136
       flush-8:0-4272  [002] ....  8275.540783: writeback_pages_written: 768
              dd-4267  [000] ....  8275.549196: writeback_congestion_wait: usec_timeout=100000 usec_delayed=12000
              dd-4267  [002] ....  8275.559066: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8275.569257: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8275.574835: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8275.576992: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=571904
       flush-8:0-4272  [002] ....  8275.577003: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=571904
              dd-4267  [000] ....  8275.577010: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=571904
              dd-4267  [000] ....  8275.579035: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8275.580062: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=571904
       flush-8:0-4272  [002] ....  8275.580471: writeback_pages_written: 512
              dd-4267  [002] ....  8275.584884: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8275.590767: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8275.595004: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8275.600852: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8275.607811: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8275.611310: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572416
       flush-8:0-4272  [002] ....  8275.611324: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572416
              dd-4267  [000] ....  8275.611327: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572416
              dd-4267  [000] ....  8275.612088: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [002] ....  8275.614606: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572416
       flush-8:0-4272  [002] ....  8275.614613: writeback_pages_written: 256
              dd-4267  [000] ....  8275.616597: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8275.620899: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8275.622607: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572672
       flush-8:0-4272  [000] ....  8275.622616: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572672
              dd-4267  [002] ....  8275.622617: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572672
       flush-8:0-4272  [000] ....  8275.626265: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572672
       flush-8:0-4272  [000] ....  8275.626271: writeback_pages_written: 256
              dd-4267  [000] ....  8275.632606: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8275.642731: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8275.656678: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572928
       flush-8:0-4272  [002] ....  8275.656693: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572928
              dd-4267  [000] ....  8275.656694: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572928
              dd-4267  [000] ....  8275.658185: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=574976
       flush-8:0-4272  [002] ....  8275.660256: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=572928
              dd-4267  [002] ....  8275.663561: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
       flush-8:0-4272  [000] ....  8275.762227: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=574976
       flush-8:0-4272  [000] ....  8275.768192: writeback_pages_written: 3328
              dd-4267  [001] ....  8275.813954: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=576256
       flush-8:0-4272  [000] ....  8275.813972: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=576256
              dd-4267  [001] ....  8275.813973: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=576256
       flush-8:0-4272  [000] ....  8275.818853: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=576256
              dd-4267  [001] ....  8275.819749: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=577536
       flush-8:0-4272  [000] ....  8275.821425: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=577536
              dd-4267  [001] ....  8275.821426: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=579328
              dd-4267  [001] ....  8275.825312: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [000] ....  8275.829046: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=579328
       flush-8:0-4272  [000] ....  8275.830311: writeback_pages_written: 4096
              dd-4267  [000] ....  8275.835347: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8275.841226: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8275.845447: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8276.548367: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=580352
       flush-8:0-4272  [002] ....  8276.548384: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=580352
              dd-4267  [000] ....  8276.548400: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=580352
              dd-4267  [000] ....  8276.550364: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=582400
              dd-4267  [000] ....  8276.551926: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=584448
              dd-4267  [000] ....  8276.553518: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=586496
       flush-8:0-4272  [002] ....  8276.554547: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=580352
              dd-4267  [000] ....  8276.555127: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=588544
              dd-4267  [000] ....  8276.556649: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=590592
       flush-8:0-4272  [002] ....  8276.557449: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=582400
              dd-4267  [001] ....  8276.560430: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=592640
              dd-4267  [001] ....  8276.562084: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=594688
              dd-4267  [001] ....  8276.563704: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=596736
              dd-4267  [001] ....  8276.565324: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=598784
       flush-8:0-4272  [002] ....  8276.565402: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=584448
              dd-4267  [001] ....  8276.566863: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=600832
              dd-4267  [001] ....  8276.568420: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=602880
       flush-8:0-4272  [002] ....  8276.573699: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=586496
       flush-8:0-4272  [002] ....  8276.580114: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=588544
       flush-8:0-4272  [002] ....  8276.585744: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=590592
       flush-8:0-4272  [002] ....  8276.590904: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=592640
       flush-8:0-4272  [002] ....  8276.595589: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=594688
       flush-8:0-4272  [002] ....  8276.599917: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=596736
       flush-8:0-4272  [002] ....  8276.603987: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=598784
              dd-4267  [003] ....  8276.668972: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8276.769045: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [002] ....  8276.890790: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=600832
       flush-8:0-4272  [002] ....  8276.895314: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=602880
       flush-8:0-4272  [002] ....  8276.898568: writeback_pages_written: 24320
              dd-4267  [003] ....  8276.908887: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8277.050810: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8277.200263: writeback_congestion_wait: usec_timeout=100000 usec_delayed=88000
              dd-4267  [000] ....  8277.258031: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=604672
       flush-8:0-4272  [002] ....  8277.258049: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=604672
              dd-4267  [000] ....  8277.258050: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=604672
              dd-4267  [000] ....  8277.258823: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [002] ....  8277.261572: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=604672
       flush-8:0-4272  [002] ....  8277.261579: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=35048 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8277.261580: writeback_queue_io: bdi 8:0: older=4302918957 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [002] ....  8277.261580: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=35048 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8277.261582: writeback_pages_written: 256
              dd-4267  [000] ....  8277.262937: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.268671: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8277.272757: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.278481: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8277.282585: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8277.288315: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8277.292395: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8277.298115: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.302215: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.307931: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8277.312008: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.317735: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.324837: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8277.355709: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.364584: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8277.374398: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8277.382588: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8277.392402: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8277.397901: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.402057: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.407739: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8277.411818: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.417560: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8277.421658: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.427424: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8277.431473: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8277.437191: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.441271: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.447015: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8277.451087: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.456772: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.460844: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.466614: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8277.470713: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.476407: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8277.480519: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.486224: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8277.490324: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.496067: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8277.500124: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.505806: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8277.509968: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.515667: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8277.519729: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.521881: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8277.524031: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.526217: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8277.528385: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.530862: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8277.533032: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8277.539390: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8277.546494: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8277.550594: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.556326: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.560414: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8277.566132: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8277.570213: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8277.575939: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.580028: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.585719: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8277.589773: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.595538: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8277.599643: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8277.605356: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8277.609463: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.615151: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8277.619253: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8277.624990: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8277.629080: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8277.630998: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=604928
       flush-8:0-4272  [000] ....  8277.631010: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=604928
              dd-4267  [002] ....  8277.631011: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=604928
              dd-4267  [002] ....  8277.632513: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=606976
              dd-4267  [002] ....  8277.634731: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [000] ....  8277.636659: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=604928
              dd-4267  [002] ....  8277.638506: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=609024
              dd-4267  [002] ....  8277.640040: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=611072
              dd-4267  [002] ....  8277.641550: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=613120
              dd-4267  [002] ....  8277.643094: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=615168
       flush-8:0-4272  [000] ....  8277.715839: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=606976
       flush-8:0-4272  [000] ....  8277.724516: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=609024
       flush-8:0-4272  [000] ....  8277.733064: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=611072
       flush-8:0-4272  [000] ....  8277.740840: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=613120
       flush-8:0-4272  [000] ....  8277.748062: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=615168
       flush-8:0-4272  [000] ....  8277.755291: writeback_pages_written: 12288
              dd-4267  [001] ....  8277.802545: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=617216
       flush-8:0-4272  [000] ....  8277.802565: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=617216
              dd-4267  [001] ....  8277.802566: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=617216
              dd-4267  [001] ....  8277.804416: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=619264
              dd-4267  [001] ....  8277.806516: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=621312
       flush-8:0-4272  [000] ....  8277.808332: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=617216
       flush-8:0-4272  [000] ....  8277.813105: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=619264
              dd-4267  [001] ....  8277.813218: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=623360
              dd-4267  [001] ....  8277.815522: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=625408
       flush-8:0-4272  [000] ....  8277.820794: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=621312
              dd-4267  [001] ....  8277.821744: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=627456
       flush-8:0-4272  [000] ....  8277.826650: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=623360
              dd-4267  [001] ....  8277.826949: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [000] ....  8277.832483: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=625408
       flush-8:0-4272  [000] ....  8277.837046: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=627456
       flush-8:0-4272  [000] ....  8277.840357: writeback_pages_written: 11776
              dd-4267  [003] ....  8277.931348: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8278.051459: writeback_congestion_wait: usec_timeout=100000 usec_delayed=55000
              dd-4267  [000] ....  8278.116343: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=628992
       flush-8:0-4272  [002] ....  8278.116363: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=628992
              dd-4267  [000] ....  8278.116364: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=628992
              dd-4267  [000] ....  8278.118347: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=631040
              dd-4267  [000] ....  8278.119860: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=633088
       flush-8:0-4272  [001] ....  8278.120392: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=628992
              dd-4267  [000] ....  8278.123157: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [001] ....  8278.123728: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=631040
       flush-8:0-4272  [001] ....  8278.130201: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=633088
       flush-8:0-4272  [001] ....  8278.133845: writeback_pages_written: 5376
              dd-4267  [002] ....  8278.225775: writeback_congestion_wait: usec_timeout=100000 usec_delayed=99000
              dd-4267  [002] ....  8278.294088: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634368
       flush-8:0-4272  [001] ....  8278.294107: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634368
              dd-4267  [002] ....  8278.294109: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634368
       flush-8:0-4272  [001] ....  8278.297397: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634368
       flush-8:0-4272  [001] ....  8278.297405: writeback_pages_written: 256
              dd-4267  [002] ....  8278.303627: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8278.308159: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8278.359684: writeback_congestion_wait: usec_timeout=100000 usec_delayed=51000
              dd-4267  [000] ....  8278.362293: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8278.414805: writeback_congestion_wait: usec_timeout=100000 usec_delayed=52000
              dd-4267  [000] ....  8278.417186: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8278.419534: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8278.421935: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8278.424848: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8278.523938: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8278.609350: writeback_congestion_wait: usec_timeout=100000 usec_delayed=85000
              dd-4267  [002] ....  8278.611904: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8278.614392: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8278.617127: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8278.621765: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8278.629512: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [000] ....  8278.639410: writeback_congestion_wait: usec_timeout=100000 usec_delayed=10000
              dd-4267  [003] ....  8278.649387: writeback_congestion_wait: usec_timeout=100000 usec_delayed=10000
              dd-4267  [001] ....  8278.655016: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8278.659199: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8278.664970: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8278.744840: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8278.748992: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8278.754791: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8278.758983: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8278.764795: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8278.768952: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8278.774753: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8278.778942: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.784729: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.788914: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.794714: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8278.798872: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.804677: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.808866: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.814669: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8278.818829: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.824640: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.828858: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8278.834601: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8278.838792: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8278.901654: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8278.908163: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634624
       flush-8:0-4272  [001] ....  8278.908182: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634624
              dd-4267  [000] ....  8278.908650: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634624
              dd-4267  [000] ....  8278.911296: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [001] ....  8278.913786: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634624
       flush-8:0-4272  [001] ....  8278.913794: writeback_pages_written: 256
              dd-4267  [002] ....  8278.921513: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [000] ....  8278.925513: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8278.931324: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8278.935488: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8278.941321: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8278.945468: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8278.951282: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8278.955458: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8278.961249: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8278.964003: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634880
       flush-8:0-4272  [001] ....  8278.964020: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634880
              dd-4267  [000] ....  8278.964035: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634880
              dd-4267  [000] ....  8278.966502: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=636928
              dd-4267  [000] ....  8278.968844: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=638976
       flush-8:0-4272  [001] ....  8278.969634: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=634880
              dd-4267  [000] ....  8278.971204: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=641024
       flush-8:0-4272  [001] ....  8278.973125: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=636928
              dd-4267  [000] ....  8278.975599: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=643072
              dd-4267  [000] ....  8278.977264: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=645120
       flush-8:0-4272  [001] ....  8278.982123: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=638976
              dd-4267  [000] ....  8278.985235: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=647168
              dd-4267  [000] ....  8278.986899: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=649216
              dd-4267  [000] ....  8278.988478: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=651264
       flush-8:0-4272  [001] ....  8279.107478: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=641024
       flush-8:0-4272  [001] ....  8279.115590: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=643072
       flush-8:0-4272  [001] ....  8279.122629: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=645120
       flush-8:0-4272  [001] ....  8279.128637: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=647168
       flush-8:0-4272  [001] ....  8279.134624: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=649216
       flush-8:0-4272  [001] ....  8279.139908: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=651264
       flush-8:0-4272  [001] ....  8279.143584: writeback_pages_written: 17408
              dd-4267  [000] ....  8279.191039: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=652288
       flush-8:0-4272  [001] ....  8279.191060: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=652288
              dd-4267  [000] ....  8279.191070: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=652288
              dd-4267  [000] ....  8279.192824: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=654336
       flush-8:0-4272  [001] ....  8279.194897: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=652288
       flush-8:0-4272  [001] ....  8279.198344: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=654336
              dd-4267  [000] ....  8279.198346: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=655360
              dd-4267  [000] ....  8279.199880: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=657408
       flush-8:0-4272  [001] ....  8279.202409: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=655360
       flush-8:0-4272  [001] ....  8279.205272: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=657408
       flush-8:0-4272  [001] ....  8279.209400: writeback_pages_written: 6912
              dd-4267  [002] ....  8279.300548: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8279.420643: writeback_congestion_wait: usec_timeout=100000 usec_delayed=59000
              dd-4267  [003] ....  8279.473709: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=659200
       flush-8:0-4272  [001] ....  8279.473725: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=659200
              dd-4267  [003] ....  8279.473735: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=659200
       flush-8:0-4272  [001] ....  8279.477129: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=659200
       flush-8:0-4272  [001] ....  8279.477537: writeback_pages_written: 512
              dd-4267  [001] ....  8279.567996: writeback_congestion_wait: usec_timeout=100000 usec_delayed=94000
              dd-4267  [001] ....  8279.570412: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8279.575103: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8279.583971: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8279.594089: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8279.605732: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [001] ....  8279.615825: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.625850: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.635943: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8279.646070: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.656125: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.666165: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.676215: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8279.686308: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8279.696389: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8279.706454: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.716539: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.720112: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=659712
       flush-8:0-4272  [003] ....  8279.720126: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=659712
              dd-4267  [001] ....  8279.720127: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=659712
              dd-4267  [001] ....  8279.721617: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=661760
              dd-4267  [001] ....  8279.723124: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=663808
       flush-8:0-4272  [003] ....  8279.724157: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=659712
              dd-4267  [003] ....  8279.728278: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
       flush-8:0-4272  [000] ....  8279.728765: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=661760
       flush-8:0-4272  [000] ....  8279.736675: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=663808
              dd-4267  [001] ....  8279.738442: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
       flush-8:0-4272  [000] ....  8279.740568: writeback_pages_written: 5120
              dd-4267  [001] ....  8279.748446: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8279.758431: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8279.768498: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8279.778569: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.788654: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8279.798693: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8279.808736: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8279.818831: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.828949: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.839007: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.842997: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=664832
       flush-8:0-4272  [000] ....  8279.843014: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=664832
              dd-4267  [001] ....  8279.843016: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=664832
       flush-8:0-4272  [000] ....  8279.847182: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=664832
       flush-8:0-4272  [000] ....  8279.847191: writeback_pages_written: 256
              dd-4267  [003] ....  8279.849076: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8279.853108: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8279.863346: writeback_congestion_wait: usec_timeout=100000 usec_delayed=9000
              dd-4267  [001] ....  8279.874975: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [003] ....  8279.880557: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8279.884813: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8279.890544: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.894781: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8279.900525: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.904786: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8279.910495: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.914742: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8279.920501: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.924716: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8279.930462: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.934704: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8279.940466: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.944675: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8279.950443: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.954671: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8279.960415: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.964640: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8279.970407: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.974635: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8279.980386: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.984609: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8279.990382: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8279.992689: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=665088
       flush-8:0-4272  [000] ....  8279.992705: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=665088
              dd-4267  [001] ....  8279.992706: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=665088
              dd-4267  [001] ....  8279.996636: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=667136
       flush-8:0-4272  [000] ....  8279.998271: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=665088
              dd-4267  [001] ....  8279.998348: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=669184
              dd-4267  [001] ....  8279.999931: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=671232
       flush-8:0-4272  [000] ....  8280.002961: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=667136
              dd-4267  [001] ....  8280.004506: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [000] ....  8280.010048: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=669184
              dd-4267  [001] ....  8280.010313: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8280.012905: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=673280
       flush-8:0-4272  [000] ....  8280.016172: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=671232
              dd-4267  [001] ....  8280.017137: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=675328
              dd-4267  [001] ....  8280.018724: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8280.020975: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8280.023205: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8280.025762: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8280.166717: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=673280
       flush-8:0-4272  [002] ....  8280.172594: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=675328
       flush-8:0-4272  [002] ....  8280.176266: writeback_pages_written: 11520
              dd-4267  [003] ....  8280.266612: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=676608
       flush-8:0-4272  [002] ....  8280.266632: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=676608
              dd-4267  [003] ....  8280.266635: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=676608
              dd-4267  [003] ....  8280.268438: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=678656
       flush-8:0-4272  [002] ....  8280.270275: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=676608
              dd-4267  [003] ....  8280.272860: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=680704
       flush-8:0-4272  [002] ....  8280.273667: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=678656
              dd-4267  [003] ....  8280.277004: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=682752
       flush-8:0-4272  [002] ....  8280.279511: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=680704
       flush-8:0-4272  [002] ....  8280.284580: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=682752
       flush-8:0-4272  [002] ....  8280.286725: writeback_pages_written: 6912
      flush-0:16-4223  [001] ....  8280.294981: writeback_start: bdi 0:16: sb_dev 0:0 nr_pages=29073 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [001] ....  8280.294985: writeback_queue_io: bdi 0:16: older=4302921993 age=30000 enqueue=3 reason=periodic
      flush-0:16-4223  [001] ....  8280.294994: writeback_single_inode: bdi 0:16: ino=3080195 state= dirtied_when=4302920855 age=56 index=0 to_write=1024 wrote=0
      flush-0:16-4223  [001] ....  8280.295020: writeback_single_inode: bdi 0:16: ino=1573416 state= dirtied_when=4302921098 age=56 index=0 to_write=1024 wrote=1
      flush-0:16-4223  [001] ....  8280.295032: writeback_single_inode: bdi 0:16: ino=1573486 state= dirtied_when=4302921098 age=56 index=52481 to_write=1024 wrote=1
      flush-0:16-4223  [001] ....  8280.295035: writeback_written: bdi 0:16: sb_dev 0:0 nr_pages=29071 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [001] ....  8280.295036: writeback_start: bdi 0:16: sb_dev 0:0 nr_pages=29071 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [001] ....  8280.295037: writeback_queue_io: bdi 0:16: older=4302921993 age=30000 enqueue=0 reason=periodic
      flush-0:16-4223  [001] ....  8280.295037: writeback_written: bdi 0:16: sb_dev 0:0 nr_pages=29071 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
      flush-0:16-4223  [001] ....  8280.295042: writeback_pages_written: 2
              dd-4267  [001] ....  8280.297036: writeback_congestion_wait: usec_timeout=100000 usec_delayed=20000
              dd-4267  [003] ....  8280.408987: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8280.467718: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.477905: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8280.488203: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8280.492193: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=683520
       flush-8:0-4272  [002] ....  8280.492204: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=683520
              dd-4267  [000] ....  8280.492205: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=683520
              dd-4267  [000] ....  8280.494904: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [002] ....  8280.495455: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=683520
       flush-8:0-4272  [002] ....  8280.495461: writeback_pages_written: 256
              dd-4267  [002] ....  8280.499328: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.503823: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.508200: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.516395: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=683776
       flush-8:0-4272  [002] ....  8280.516410: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=683776
              dd-4267  [000] ....  8280.516410: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=683776
              dd-4267  [000] ....  8280.516691: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8280.516877: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [002] ....  8280.520252: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=683776
       flush-8:0-4272  [002] ....  8280.520259: writeback_pages_written: 256
              dd-4267  [000] ....  8280.521328: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.525593: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.613120: writeback_congestion_wait: usec_timeout=100000 usec_delayed=84000
              dd-4267  [000] ....  8280.621118: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.628140: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8280.636293: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8280.640774: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.645305: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8280.649723: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8280.657246: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8280.661750: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.670022: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=684032
       flush-8:0-4272  [002] ....  8280.670036: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=684032
              dd-4267  [000] ....  8280.670040: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=684032
              dd-4267  [000] ....  8280.671552: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=686080
              dd-4267  [000] ....  8280.673073: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=688128
       flush-8:0-4272  [002] ....  8280.674106: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=684032
       flush-8:0-4272  [001] ....  8280.677134: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=686080
              dd-4267  [000] ....  8280.680892: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690176
       flush-8:0-4272  [001] ....  8280.685604: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=688128
              dd-4267  [000] ....  8280.686261: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
       flush-8:0-4272  [001] ....  8280.691619: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690176
       flush-8:0-4272  [001] ....  8280.693880: writeback_pages_written: 6400
              dd-4267  [000] ....  8280.699811: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8280.725768: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8280.735583: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8280.745458: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8280.749433: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690432
       flush-8:0-4272  [001] ....  8280.749453: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690432
              dd-4267  [000] ....  8280.749455: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690432
       flush-8:0-4272  [001] ....  8280.753038: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690432
       flush-8:0-4272  [001] ....  8280.753044: writeback_pages_written: 256
              dd-4267  [000] ....  8280.755184: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8280.763233: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8280.767808: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.772434: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.776929: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.781392: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8280.785741: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.790431: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8280.805974: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690688
       flush-8:0-4272  [001] ....  8280.805990: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690688
              dd-4267  [000] ....  8280.805995: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690688
       flush-8:0-4272  [001] ....  8280.810334: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690688
       flush-8:0-4272  [001] ....  8280.810343: writeback_pages_written: 256
              dd-4267  [002] ....  8280.835906: writeback_congestion_wait: usec_timeout=100000 usec_delayed=30000
              dd-4267  [002] ....  8280.942587: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8280.999893: writeback_congestion_wait: usec_timeout=100000 usec_delayed=57000
              dd-4267  [000] ....  8281.002359: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8281.004620: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8281.007144: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8281.011505: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8281.015104: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690944
       flush-8:0-4272  [001] ....  8281.015120: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690944
              dd-4267  [000] ....  8281.015121: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690944
              dd-4267  [000] ....  8281.018879: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [001] ....  8281.019801: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=690944
       flush-8:0-4272  [001] ....  8281.021129: writeback_pages_written: 768
              dd-4267  [000] ....  8281.023390: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=691712
       flush-8:0-4272  [001] ....  8281.023405: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=691712
              dd-4267  [000] ....  8281.023406: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=691712
              dd-4267  [000] ....  8281.025078: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=693760
              dd-4267  [000] ....  8281.026726: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=695808
       flush-8:0-4272  [001] ....  8281.027870: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=691712
              dd-4267  [000] ....  8281.028680: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [001] ....  8281.032562: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=693760
              dd-4267  [000] ....  8281.034082: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=697856
              dd-4267  [002] ....  8281.038551: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [001] ....  8281.039182: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=695808
              dd-4267  [002] ....  8281.043151: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=699904
       flush-8:0-4272  [001] ....  8281.045416: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=697856
       flush-8:0-4272  [001] ....  8281.050590: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=699904
       flush-8:0-4272  [001] ....  8281.054085: writeback_pages_written: 9472
              dd-4267  [000] ....  8281.146586: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8281.267657: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=701184
       flush-8:0-4272  [001] ....  8281.267676: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=701184
              dd-4267  [000] ....  8281.267682: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=701184
              dd-4267  [000] ....  8281.269420: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=703232
       flush-8:0-4272  [001] ....  8281.273221: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=701184
              dd-4267  [000] ....  8281.275742: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=705280
       flush-8:0-4272  [001] ....  8281.276549: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=703232
              dd-4267  [000] ....  8281.277912: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=707328
              dd-4267  [003] ....  8281.280371: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [001] ....  8281.318655: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=705280
       flush-8:0-4272  [001] ....  8281.324616: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=707328
       flush-8:0-4272  [001] ....  8281.328030: writeback_pages_written: 6912
              dd-4267  [002] ....  8281.450334: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8281.461260: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8281.483632: writeback_congestion_wait: usec_timeout=100000 usec_delayed=20000
              dd-4267  [003] ....  8281.486146: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8281.488493: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8281.491058: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [003] ....  8281.495253: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8281.503364: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8281.513429: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8281.522995: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8281.532819: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8281.542599: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8281.546558: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=708096
       flush-8:0-4272  [001] ....  8281.546572: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=708096
              dd-4267  [003] ....  8281.546572: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=708096
              dd-4267  [001] ....  8281.556507: writeback_congestion_wait: usec_timeout=100000 usec_delayed=10000
       flush-8:0-4272  [003] ....  8282.192050: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=708096
       flush-8:0-4272  [003] ....  8282.192057: writeback_pages_written: 256
              dd-4267  [003] ....  8282.363940: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=708352
       flush-8:0-4272  [001] ....  8282.363957: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=708352
              dd-4267  [003] ....  8282.363965: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=708352
              dd-4267  [003] ....  8282.365488: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=710400
              dd-4267  [003] ....  8282.366985: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=712448
              dd-4267  [003] ....  8282.368510: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=714496
       flush-8:0-4272  [001] ....  8282.369497: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=708352
              dd-4267  [003] ....  8282.370048: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=716544
              dd-4267  [003] ....  8282.371633: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=718592
       flush-8:0-4272  [000] ....  8282.374078: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=710400
              dd-4267  [003] ....  8282.378185: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=720640
       flush-8:0-4272  [000] ....  8282.385965: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=712448
              dd-4267  [003] ....  8282.389259: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=722688
              dd-4267  [003] ....  8282.390925: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=724736
       flush-8:0-4272  [000] ....  8282.394370: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=714496
       flush-8:0-4272  [000] ....  8282.403366: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=716544
       flush-8:0-4272  [000] ....  8282.411168: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=718592
       flush-8:0-4272  [000] ....  8282.418447: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=720640
              dd-4267  [003] ....  8282.419012: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=726784
              dd-4267  [003] ....  8282.420711: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=728832
       flush-8:0-4272  [000] ....  8282.425497: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=722688
       flush-8:0-4272  [000] ....  8282.432174: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=724736
              dd-4267  [003] ....  8282.435607: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=730880
       flush-8:0-4272  [000] ....  8282.437550: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=726784
       flush-8:0-4272  [000] ....  8282.443269: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=728832
       flush-8:0-4272  [000] ....  8282.447516: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=730880
              dd-4267  [001] ....  8282.536821: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [002] ....  8282.594711: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=28847 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8282.594714: writeback_queue_io: bdi 8:0: older=4302924294 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [002] ....  8282.594714: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=28847 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8282.594717: writeback_pages_written: 24320
              dd-4267  [003] ....  8282.702707: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8282.766257: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=732672
       flush-8:0-4272  [002] ....  8282.766275: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=732672
              dd-4267  [003] ....  8282.766276: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=732672
              dd-4267  [000] ....  8282.769626: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [002] ....  8282.770439: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=732672
       flush-8:0-4272  [002] ....  8282.770448: writeback_pages_written: 256
              dd-4267  [000] ....  8282.792970: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=732928
       flush-8:0-4272  [002] ....  8282.792985: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=732928
              dd-4267  [000] ....  8282.792991: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=732928
       flush-8:0-4272  [002] ....  8282.796612: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=732928
       flush-8:0-4272  [002] ....  8282.797095: writeback_pages_written: 512
              dd-4267  [001] ....  8282.803815: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=733440
       flush-8:0-4272  [002] ....  8282.803828: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=733440
              dd-4267  [000] ....  8282.805987: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8282.807590: writeback_pages_written: 256
              dd-4267  [002] ....  8282.816053: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8282.826012: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8282.836053: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8282.840123: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8282.862607: writeback_congestion_wait: usec_timeout=100000 usec_delayed=21000
              dd-4267  [000] ....  8282.866842: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8282.872692: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8282.876853: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8282.882737: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8282.887007: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8282.892840: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8282.897051: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8282.902856: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8282.907142: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8282.912979: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8282.917185: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8282.923050: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8282.927261: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8282.933120: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8282.937351: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8282.943169: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8282.947400: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8282.953258: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8282.957497: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8282.963315: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8282.965832: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8282.967943: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8282.976215: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=733696
       flush-8:0-4272  [002] ....  8282.976230: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=733696
              dd-4267  [000] ....  8282.976230: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=733696
              dd-4267  [000] ....  8282.977340: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [002] ....  8282.980514: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=733696
              dd-4267  [000] ....  8282.981192: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=734464
       flush-8:0-4272  [002] ....  8282.981312: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=734464
              dd-4267  [000] ....  8282.981745: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8282.981778: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=734464
       flush-8:0-4272  [002] ....  8282.985084: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=734464
       flush-8:0-4272  [002] ....  8282.985091: writeback_pages_written: 1024
              dd-4267  [000] ....  8282.985665: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=734720
       flush-8:0-4272  [002] ....  8282.985674: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=734720
              dd-4267  [000] ....  8282.985675: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=734720
       flush-8:0-4272  [002] ....  8282.989807: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=734720
       flush-8:0-4272  [002] ....  8282.990230: writeback_pages_written: 512
              dd-4267  [003] ....  8283.089416: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8283.122929: writeback_congestion_wait: usec_timeout=100000 usec_delayed=33000
              dd-4267  [000] ....  8283.135611: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8283.145666: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8283.155723: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8283.165791: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8283.176075: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8283.181528: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.185688: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8283.187835: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=735232
       flush-8:0-4272  [002] ....  8283.187848: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=735232
              dd-4267  [000] ....  8283.187849: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=735232
              dd-4267  [000] ....  8283.191527: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [002] ....  8283.192426: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=735232
       flush-8:0-4272  [002] ....  8283.192847: writeback_pages_written: 512
              dd-4267  [000] ....  8283.195835: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.201634: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8283.205915: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8283.211685: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8283.215994: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.221809: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8283.226052: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8283.231889: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8283.236089: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.241969: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8283.246186: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8283.252004: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8283.256226: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.262106: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8283.266341: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8283.273554: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8283.277824: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.283655: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8283.287868: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8283.293646: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8283.297947: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.303809: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8283.312730: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8283.328557: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=735744
       flush-8:0-4272  [002] ....  8283.328574: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=735744
              dd-4267  [000] ....  8283.328582: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=735744
              dd-4267  [000] ....  8283.330129: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=737792
              dd-4267  [000] ....  8283.331848: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=739840
              dd-4267  [000] ....  8283.333393: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=741888
       flush-8:0-4272  [002] ....  8283.333676: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=735744
              dd-4267  [000] ....  8283.334935: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=743936
              dd-4267  [000] ....  8283.337887: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.338115: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [002] ....  8283.431725: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=737792
       flush-8:0-4272  [002] ....  8283.439317: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=739840
       flush-8:0-4272  [002] ....  8283.447089: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=741888
       flush-8:0-4272  [002] ....  8283.454001: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=743936
       flush-8:0-4272  [002] ....  8283.459772: writeback_pages_written: 9984
              dd-4267  [001] ....  8283.537146: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=745728
       flush-8:0-4272  [002] ....  8283.537162: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=745728
              dd-4267  [001] ....  8283.537171: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=745728
              dd-4267  [001] ....  8283.539419: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=747776
              dd-4267  [001] ....  8283.541208: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=749824
       flush-8:0-4272  [002] ....  8283.541833: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=745728
       flush-8:0-4272  [002] ....  8283.545118: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=747776
              dd-4267  [001] ....  8283.545748: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=751872
              dd-4267  [001] ....  8283.547488: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=753920
       flush-8:0-4272  [002] ....  8283.552959: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=749824
              dd-4267  [001] ....  8283.558579: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=755968
       flush-8:0-4272  [002] ....  8283.559396: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=751872
              dd-4267  [001] ....  8283.561724: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8283.565992: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=753920
       flush-8:0-4272  [002] ....  8283.570817: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=755968
       flush-8:0-4272  [002] ....  8283.574693: writeback_pages_written: 12032
              dd-4267  [003] ....  8283.668117: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8283.743847: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8283.747107: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=757760
       flush-8:0-4272  [002] ....  8283.747123: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=757760
              dd-4267  [000] ....  8283.747124: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=757760
       flush-8:0-4272  [002] ....  8283.751308: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=757760
              dd-4267  [000] ....  8283.751309: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=759296
              dd-4267  [000] ....  8283.752829: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=761344
       flush-8:0-4272  [002] ....  8283.753579: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=759296
              dd-4267  [001] ....  8283.756564: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=763392
       flush-8:0-4272  [002] ....  8283.756906: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=761344
       flush-8:0-4272  [002] ....  8283.762110: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=763392
       flush-8:0-4272  [002] ....  8283.764271: writeback_pages_written: 6400
              dd-4267  [003] ....  8283.857011: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8283.973375: writeback_congestion_wait: usec_timeout=100000 usec_delayed=55000
              dd-4267  [002] ....  8284.011625: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764160
       flush-8:0-4272  [000] ....  8284.011640: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764160
              dd-4267  [002] ....  8284.011645: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764160
              dd-4267  [002] ....  8284.013208: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [000] ....  8284.015143: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764160
       flush-8:0-4272  [000] ....  8284.015149: writeback_pages_written: 256
              dd-4267  [000] ....  8284.019069: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8284.023215: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8284.030639: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8284.034941: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8284.040920: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8284.051372: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [000] ....  8284.061660: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8284.071910: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8284.082199: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8284.092449: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8284.102756: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8284.106747: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764416
       flush-8:0-4272  [002] ....  8284.106761: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764416
              dd-4267  [000] ....  8284.106772: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764416
       flush-8:0-4272  [002] ....  8284.110412: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764416
       flush-8:0-4272  [002] ....  8284.110420: writeback_pages_written: 256
              dd-4267  [002] ....  8284.113017: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8284.117024: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764672
       flush-8:0-4272  [000] ....  8284.117035: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764672
              dd-4267  [002] ....  8284.117041: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764672
       flush-8:0-4272  [000] ....  8284.120264: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764672
       flush-8:0-4272  [000] ....  8284.120271: writeback_pages_written: 256
              dd-4267  [000] ....  8284.123300: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8284.133583: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8284.145288: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [002] ....  8284.149617: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8284.153959: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8284.159928: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8284.164244: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8284.170143: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8284.174494: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8284.180462: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8284.184792: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8284.190718: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8284.195060: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8284.201037: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8284.205349: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8284.211282: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8284.215607: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8284.217861: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764928
       flush-8:0-4272  [002] ....  8284.217873: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764928
              dd-4267  [000] ....  8284.217883: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764928
              dd-4267  [000] ....  8284.219412: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=766976
       flush-8:0-4272  [002] ....  8284.222662: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=764928
              dd-4267  [000] ....  8284.224852: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=769024
       flush-8:0-4272  [001] ....  8284.226978: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=766976
              dd-4267  [000] ....  8284.231789: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [001] ....  8284.234737: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=769024
              dd-4267  [000] ....  8284.236127: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8284.242083: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [001] ....  8284.537975: writeback_pages_written: 5376
              dd-4267  [002] ....  8284.712091: writeback_congestion_wait: usec_timeout=100000 usec_delayed=97000
              dd-4267  [002] ....  8284.714592: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8284.718902: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8284.721113: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8284.723558: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8284.729274: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8284.730891: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=770304
       flush-8:0-4272  [001] ....  8284.730909: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=770304
              dd-4267  [000] ....  8284.731346: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=770304
              dd-4267  [000] ....  8284.733058: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=772352
       flush-8:0-4272  [001] ....  8284.735492: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=770304
              dd-4267  [000] ....  8284.736899: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=774400
              dd-4267  [000] ....  8284.738556: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=776448
       flush-8:0-4272  [001] ....  8284.738766: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=772352
              dd-4267  [000] ....  8284.743348: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [001] ....  8284.745968: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=774400
              dd-4267  [000] ....  8284.749665: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=778496
       flush-8:0-4272  [001] ....  8284.751572: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=776448
              dd-4267  [000] ....  8284.753153: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [001] ....  8284.757398: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=778496
              dd-4267  [000] ....  8284.757400: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=779008
       flush-8:0-4272  [001] ....  8284.760124: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=779008
       flush-8:0-4272  [001] ....  8284.761353: writeback_pages_written: 9728
              dd-4267  [002] ....  8284.857465: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8284.912787: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=780032
       flush-8:0-4272  [001] ....  8284.912808: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=780032
              dd-4267  [002] ....  8284.912815: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=780032
       flush-8:0-4272  [001] ....  8284.916025: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=780032
       flush-8:0-4272  [001] ....  8284.918844: writeback_pages_written: 2048
              dd-4267  [003] ....  8284.940686: writeback_congestion_wait: usec_timeout=100000 usec_delayed=26000
              dd-4267  [003] ....  8284.978581: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=782080
       flush-8:0-4272  [001] ....  8284.978599: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=782080
              dd-4267  [003] ....  8284.978602: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=782080
       flush-8:0-4272  [001] ....  8284.982521: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=782080
       flush-8:0-4272  [001] ....  8284.982946: writeback_pages_written: 512
              dd-4267  [003] ....  8284.983056: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=782592
       flush-8:0-4272  [001] ....  8284.983063: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=782592
              dd-4267  [003] ....  8284.983064: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=782592
              dd-4267  [003] ....  8284.984577: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=784640
       flush-8:0-4272  [001] ....  8284.985859: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=782592
       flush-8:0-4272  [000] ....  8284.989292: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=784640
              dd-4267  [003] ....  8284.990536: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=785664
              dd-4267  [003] ....  8284.992120: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=787712
       flush-8:0-4272  [000] ....  8284.994372: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=785664
       flush-8:0-4272  [000] ....  8284.997795: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=787712
       flush-8:0-4272  [000] ....  8285.000496: writeback_pages_written: 5888
              dd-4267  [001] ....  8285.092449: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8285.177444: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788480
       flush-8:0-4272  [000] ....  8285.177462: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788480
              dd-4267  [001] ....  8285.177462: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788480
              dd-4267  [001] ....  8285.179141: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [000] ....  8285.181333: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788480
       flush-8:0-4272  [000] ....  8285.181342: writeback_pages_written: 256
              dd-4267  [001] ....  8285.183347: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788736
       flush-8:0-4272  [000] ....  8285.183364: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788736
              dd-4267  [001] ....  8285.183373: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788736
       flush-8:0-4272  [000] ....  8285.185778: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788736
       flush-8:0-4272  [000] ....  8285.185784: writeback_pages_written: 256
              dd-4267  [001] ....  8285.188015: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8285.197816: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8285.207603: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8285.217253: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8285.219617: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8285.224224: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8285.260785: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788992
       flush-8:0-4272  [000] ....  8285.260803: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788992
              dd-4267  [003] ....  8285.260804: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788992
              dd-4267  [003] ....  8285.262513: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=791040
       flush-8:0-4272  [000] ....  8285.265222: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=788992
       flush-8:0-4272  [000] ....  8285.269537: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=791040
              dd-4267  [001] ....  8285.270972: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
       flush-8:0-4272  [000] ....  8285.275330: writeback_pages_written: 3840
              dd-4267  [003] ....  8285.324595: writeback_congestion_wait: usec_timeout=100000 usec_delayed=46000
              dd-4267  [001] ....  8285.428150: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8285.428951: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8285.433539: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8285.440157: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8285.449269: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8285.455291: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=792832
       flush-8:0-4272  [000] ....  8285.455308: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=792832
              dd-4267  [001] ....  8285.455310: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=792832
              dd-4267  [001] ....  8285.457006: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=794880
              dd-4267  [001] ....  8285.458912: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [000] ....  8285.459455: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=792832
       flush-8:0-4272  [000] ....  8285.462753: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=794880
              dd-4267  [003] ....  8285.462940: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [000] ....  8285.465164: writeback_pages_written: 2304
              dd-4267  [003] ....  8285.465311: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795136
       flush-8:0-4272  [000] ....  8285.465328: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795136
              dd-4267  [003] ....  8285.465330: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795136
       flush-8:0-4272  [000] ....  8285.467314: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795136
       flush-8:0-4272  [000] ....  8285.467319: writeback_pages_written: 256
              dd-4267  [001] ....  8285.478696: writeback_congestion_wait: usec_timeout=100000 usec_delayed=13000
              dd-4267  [002] ....  8285.511501: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8285.519624: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8285.529305: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8285.529483: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8285.541032: writeback_congestion_wait: usec_timeout=100000 usec_delayed=9000
              dd-4267  [000] ....  8285.577136: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795392
       flush-8:0-4272  [002] ....  8285.577156: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795392
              dd-4267  [000] ....  8285.577638: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795392
       flush-8:0-4272  [002] ....  8285.581251: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795392
       flush-8:0-4272  [002] ....  8285.581647: writeback_pages_written: 512
              dd-4267  [002] ....  8285.676988: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8285.698727: writeback_congestion_wait: usec_timeout=100000 usec_delayed=21000
              dd-4267  [000] ....  8285.701218: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8285.710431: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8285.718230: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8285.722253: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795904
       flush-8:0-4272  [002] ....  8285.722266: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795904
              dd-4267  [000] ....  8285.722267: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795904
              dd-4267  [000] ....  8285.723781: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=797952
              dd-4267  [000] ....  8285.725289: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=800000
       flush-8:0-4272  [002] ....  8285.726260: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=795904
              dd-4267  [000] ....  8285.728248: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [002] ....  8285.729667: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=797952
       flush-8:0-4272  [002] ....  8285.735947: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=800000
       flush-8:0-4272  [002] ....  8285.740515: writeback_pages_written: 5632
              dd-4267  [000] ....  8285.743317: writeback_congestion_wait: usec_timeout=100000 usec_delayed=11000
              dd-4267  [000] ....  8285.748737: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=801536
       flush-8:0-4272  [002] ....  8285.748749: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=801536
              dd-4267  [000] ....  8285.748750: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=801536
              dd-4267  [000] ....  8285.750301: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=803584
       flush-8:0-4272  [002] ....  8285.751451: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=801536
              dd-4267  [000] ....  8285.789981: writeback_congestion_wait: usec_timeout=100000 usec_delayed=39000
       flush-8:0-4272  [000] ....  8285.820051: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=803584
       flush-8:0-4272  [000] ....  8285.824204: writeback_pages_written: 3072
              dd-4267  [002] ....  8285.877041: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=804608
       flush-8:0-4272  [000] ....  8285.877059: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=804608
              dd-4267  [002] ....  8285.877059: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=804608
       flush-8:0-4272  [000] ....  8285.880219: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=804608
              dd-4267  [002] ....  8285.881976: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [000] ....  8285.883304: writeback_pages_written: 2048
              dd-4267  [000] ....  8285.913064: writeback_congestion_wait: usec_timeout=100000 usec_delayed=27000
              dd-4267  [000] ....  8285.953294: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8285.993092: writeback_congestion_wait: usec_timeout=100000 usec_delayed=40000
              dd-4267  [000] ....  8286.017921: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=806656
       flush-8:0-4272  [002] ....  8286.017941: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=806656
              dd-4267  [000] ....  8286.017942: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=806656
       flush-8:0-4272  [002] ....  8286.022344: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=806656
       flush-8:0-4272  [002] ....  8286.022351: writeback_pages_written: 256
              dd-4267  [000] ....  8286.022457: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=806912
       flush-8:0-4272  [002] ....  8286.022463: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=806912
              dd-4267  [000] ....  8286.022464: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=806912
       flush-8:0-4272  [002] ....  8286.025804: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=806912
              dd-4267  [001] ....  8286.025808: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=807680
       flush-8:0-4272  [002] ....  8286.027152: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=807680
              dd-4267  [001] ....  8286.027154: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=809216
       flush-8:0-4272  [002] ....  8286.030162: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=809216
              dd-4267  [001] ....  8286.030934: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=810752
       flush-8:0-4272  [002] ....  8286.032432: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=810752
              dd-4267  [001] ....  8286.032436: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=812288
       flush-8:0-4272  [002] ....  8286.037770: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=812288
       flush-8:0-4272  [002] ....  8286.038525: writeback_pages_written: 6144
              dd-4267  [003] ....  8286.131784: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8286.193153: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=813056
       flush-8:0-4272  [002] ....  8286.193171: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=813056
              dd-4267  [003] ....  8286.193182: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=813056
              dd-4267  [000] ....  8286.194538: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [002] ....  8286.197515: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=813056
       flush-8:0-4272  [002] ....  8286.198149: writeback_pages_written: 512
              dd-4267  [000] ....  8286.204432: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8286.208413: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8286.214233: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8286.218379: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8286.224189: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8286.228384: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8286.234169: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8286.238355: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8286.244159: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8286.248349: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8286.254131: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8286.258310: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8286.268537: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8286.278455: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8286.288438: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8286.298422: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8286.308378: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8286.311951: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=813568
       flush-8:0-4272  [000] ....  8286.311964: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=813568
              dd-4267  [002] ....  8286.311974: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=813568
              dd-4267  [002] ....  8286.313470: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=815616
       flush-8:0-4272  [000] ....  8286.316042: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=813568
              dd-4267  [002] ....  8286.318069: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [000] ....  8286.319085: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=815616
              dd-4267  [001] ....  8286.323896: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [000] ....  8286.324156: writeback_pages_written: 3584
              dd-4267  [002] ....  8286.328179: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8286.333974: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8286.338144: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8286.343939: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8286.348136: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8286.353914: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8286.358096: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8286.363902: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8286.368092: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8286.375470: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8286.379691: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8286.390024: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8286.394213: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=817152
       flush-8:0-4272  [002] ....  8286.394227: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=817152
              dd-4267  [000] ....  8286.394228: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=817152
              dd-4267  [000] ....  8286.395740: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=819200
       flush-8:0-4272  [002] ....  8286.397700: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=817152
              dd-4267  [000] ....  8286.399729: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8286.399908: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [002] ....  8286.400793: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=819200
              dd-4267  [000] ....  8286.403696: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=821248
       flush-8:0-4272  [002] ....  8286.406096: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=821248
       flush-8:0-4272  [002] ....  8286.408455: writeback_pages_written: 4608
              dd-4267  [000] ....  8286.490429: writeback_congestion_wait: usec_timeout=100000 usec_delayed=86000
              dd-4267  [000] ....  8287.253038: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=821760
       flush-8:0-4272  [002] ....  8287.253056: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=821760
              dd-4267  [000] ....  8287.253111: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=821760
              dd-4267  [000] ....  8287.256585: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=823808
              dd-4267  [000] ....  8287.258183: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=825856
       flush-8:0-4272  [002] ....  8287.259493: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=821760
              dd-4267  [001] ....  8287.259950: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=827904
              dd-4267  [001] ....  8287.261742: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=829952
       flush-8:0-4272  [002] ....  8287.262677: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=823808
              dd-4267  [001] ....  8287.264192: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=832000
              dd-4267  [001] ....  8287.265825: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=834048
              dd-4267  [001] ....  8287.267436: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=836096
              dd-4267  [001] ....  8287.269029: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=838144
              dd-4267  [001] ....  8287.270652: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=840192
       flush-8:0-4272  [002] ....  8287.271798: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=825856
              dd-4267  [001] ....  8287.272253: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=842240
              dd-4267  [001] ....  8287.273814: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=844288
       flush-8:0-4272  [002] ....  8287.280257: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=827904
       flush-8:0-4272  [002] ....  8287.286640: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=829952
       flush-8:0-4272  [002] ....  8287.292224: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=832000
       flush-8:0-4272  [002] ....  8287.297298: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=834048
              dd-4267  [003] ....  8287.375066: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [000] ....  8287.433845: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=836096
       flush-8:0-4272  [000] ....  8287.439490: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=838144
       flush-8:0-4272  [000] ....  8287.444495: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=840192
       flush-8:0-4272  [000] ....  8287.449427: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=842240
              dd-4267  [001] ....  8287.568053: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [002] ....  8287.589239: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=844288
       flush-8:0-4272  [002] ....  8287.594059: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=25531 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8287.594061: writeback_queue_io: bdi 8:0: older=4302929296 age=30000 enqueue=1 reason=periodic
       flush-8:0-4272  [002] ....  8287.594217: writeback_single_inode: bdi 8:0: ino=0 state= dirtied_when=4302926220 age=51 index=62423040 to_write=1024 wrote=60
       flush-8:0-4272  [002] ....  8287.594220: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=25471 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8287.594220: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=25471 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8287.594220: writeback_queue_io: bdi 8:0: older=4302929296 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [002] ....  8287.594221: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=25471 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [002] ....  8287.594223: writeback_pages_written: 24636
              dd-4267  [003] ....  8287.725915: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8287.867240: writeback_congestion_wait: usec_timeout=100000 usec_delayed=68000
              dd-4267  [000] ....  8287.929273: writeback_congestion_wait: usec_timeout=100000 usec_delayed=29000
              dd-4267  [000] ....  8287.933932: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8287.943436: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8287.953437: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8287.963311: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8287.968957: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8287.971328: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8287.975932: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8287.996004: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8288.000541: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8288.007408: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8288.017420: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8288.024017: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8288.029668: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8288.034148: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8288.038464: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8288.042966: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8288.145630: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8288.198092: writeback_congestion_wait: usec_timeout=100000 usec_delayed=52000
              dd-4267  [000] ....  8288.214718: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8288.224982: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8288.235294: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8288.245537: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8288.255814: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8288.262402: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8288.270398: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8288.281573: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8288.285848: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8288.293826: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8288.298364: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8288.306692: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8288.316956: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8288.327285: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8288.337529: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8288.343307: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8288.350208: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8288.360343: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8288.364844: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8288.368427: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=846336
       flush-8:0-4272  [002] ....  8288.368440: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=846336
              dd-4267  [000] ....  8288.368442: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=846336
              dd-4267  [000] ....  8288.374199: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=848384
       flush-8:0-4272  [002] ....  8288.374913: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=846336
              dd-4267  [001] ....  8288.376663: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=850432
       flush-8:0-4272  [002] ....  8288.377993: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=848384
              dd-4267  [001] ....  8288.378387: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=852480
              dd-4267  [001] ....  8288.380037: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=854528
              dd-4267  [001] ....  8288.381692: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=856576
       flush-8:0-4272  [002] ....  8288.387743: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=850432
              dd-4267  [001] ....  8288.390190: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=858624
       flush-8:0-4272  [002] ....  8288.395961: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=852480
       flush-8:0-4272  [002] ....  8288.403948: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=854528
              dd-4267  [001] ....  8288.404992: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=860672
              dd-4267  [001] ....  8288.406709: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=862720
              dd-4267  [001] ....  8288.408423: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=864768
              dd-4267  [001] ....  8288.410722: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=866816
       flush-8:0-4272  [002] ....  8288.410728: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=856576
              dd-4267  [001] ....  8288.412372: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=868864
       flush-8:0-4272  [002] ....  8288.417731: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=858624
       flush-8:0-4272  [002] ....  8288.422959: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=860672
       flush-8:0-4272  [002] ....  8288.427513: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=862720
       flush-8:0-4272  [002] ....  8288.431706: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=864768
       flush-8:0-4272  [002] ....  8288.435613: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=866816
              dd-4267  [003] ....  8288.513425: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8288.613432: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [002] ....  8288.807055: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=868864
       flush-8:0-4272  [002] ....  8288.813588: writeback_pages_written: 24320
              dd-4267  [003] ....  8288.964246: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8289.018315: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=870656
       flush-8:0-4272  [002] ....  8289.018333: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=870656
              dd-4267  [003] ....  8289.018334: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=870656
       flush-8:0-4272  [002] ....  8289.022448: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=870656
       flush-8:0-4272  [002] ....  8289.023068: writeback_pages_written: 512
              dd-4267  [000] .N..  8289.043118: writeback_congestion_wait: usec_timeout=100000 usec_delayed=24000
              dd-4267  [000] ....  8289.062610: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8289.072452: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.082237: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8289.092057: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.101888: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.113107: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [000] ....  8289.117139: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=871168
       flush-8:0-4272  [002] ....  8289.117151: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=871168
              dd-4267  [000] ....  8289.117152: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=871168
       flush-8:0-4272  [002] ....  8289.120609: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=871168
       flush-8:0-4272  [002] ....  8289.121378: writeback_pages_written: 768
              dd-4267  [000] ....  8289.122889: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8289.132666: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.140943: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8289.145446: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8289.149985: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8289.154423: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8289.169836: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8289.174289: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8289.239886: writeback_congestion_wait: usec_timeout=100000 usec_delayed=62000
              dd-4267  [000] ....  8289.244347: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8289.248919: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8289.258780: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.268549: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.278371: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8289.288216: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8289.297985: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8289.307821: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.317661: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.327410: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8289.337217: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8289.347038: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.353471: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8289.359251: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8289.363735: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8289.368137: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8289.372635: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8289.377206: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8289.480585: writeback_congestion_wait: usec_timeout=100000 usec_delayed=81000
              dd-4267  [000] ....  8289.482791: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8289.485066: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8289.487555: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8289.495298: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8289.505137: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8289.514910: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [002] ....  8289.520479: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8289.527108: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8289.544021: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=871936
       flush-8:0-4272  [002] ....  8289.544039: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=871936
              dd-4267  [000] ....  8289.544046: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=871936
              dd-4267  [000] ....  8289.545565: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=873984
              dd-4267  [000] ....  8289.547139: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=876032
              dd-4267  [000] ....  8289.548677: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=878080
       flush-8:0-4272  [002] ....  8289.549822: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=871936
              dd-4267  [000] ....  8289.553126: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=880128
       flush-8:0-4272  [001] ....  8289.554779: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=873984
       flush-8:0-4272  [001] ....  8289.567484: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=876032
              dd-4267  [000] ....  8289.568912: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=882176
              dd-4267  [000] ....  8289.570643: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=884224
              dd-4267  [000] ....  8289.572336: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=886272
              dd-4267  [000] ....  8289.574000: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=888320
       flush-8:0-4272  [001] ....  8289.575450: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=878080
              dd-4267  [000] ....  8289.575687: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=890368
              dd-4267  [000] ....  8289.577346: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=892416
              dd-4267  [000] ....  8289.579606: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=894464
       flush-8:0-4272  [001] ....  8289.583703: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=880128
       flush-8:0-4272  [001] ....  8289.590178: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=882176
       flush-8:0-4272  [001] ....  8289.595612: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=884224
       flush-8:0-4272  [001] ....  8289.600583: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=886272
       flush-8:0-4272  [001] ....  8289.605480: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=888320
       flush-8:0-4272  [001] ....  8289.610298: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=890368
       flush-8:0-4272  [001] ....  8289.614567: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=892416
              dd-4267  [002] ....  8289.679818: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [001] ....  8289.790935: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1280 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=894464
       flush-8:0-4272  [001] ....  8289.794239: writeback_pages_written: 23808
              dd-4267  [000] ....  8289.830712: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8289.979274: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=895744
       flush-8:0-4272  [001] ....  8289.979295: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=895744
              dd-4267  [000] ....  8289.979299: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=895744
       flush-8:0-4272  [001] ....  8289.984149: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=895744
       flush-8:0-4272  [001] ....  8289.985910: writeback_pages_written: 1024
              dd-4267  [003] ....  8289.986879: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8289.989278: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8289.993746: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8290.022891: writeback_congestion_wait: usec_timeout=100000 usec_delayed=29000
              dd-4267  [003] ....  8290.138516: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8290.238484: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8290.258044: writeback_congestion_wait: usec_timeout=100000 usec_delayed=19000
              dd-4267  [001] ....  8290.272386: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8290.282389: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8290.292359: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8290.302381: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8290.312326: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8290.330620: writeback_congestion_wait: usec_timeout=100000 usec_delayed=15000
              dd-4267  [001] ....  8290.340609: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8290.350579: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8290.360570: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8290.370533: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8290.380548: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8290.390506: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8290.400468: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8290.410469: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8290.420238: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8290.424433: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8290.430244: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8290.434405: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8290.440201: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8290.444394: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8290.450177: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8290.454365: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8290.460164: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8290.464360: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8290.466690: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8290.472687: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8290.494518: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8290.499431: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8290.506101: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8290.514874: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8290.525026: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8290.535090: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8290.545126: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8290.550820: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8290.555010: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8290.560856: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8290.573413: writeback_congestion_wait: usec_timeout=100000 usec_delayed=11000
              dd-4267  [001] ....  8290.579244: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8290.585075: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8290.589299: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8290.591103: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=896768
       flush-8:0-4272  [001] ....  8290.591114: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=896768
              dd-4267  [003] ....  8290.591115: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=896768
              dd-4267  [003] ....  8290.592629: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=898816
              dd-4267  [003] ....  8290.594151: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=900864
              dd-4267  [003] ....  8290.595687: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=902912
       flush-8:0-4272  [001] ....  8290.596742: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=896768
       flush-8:0-4272  [001] ....  8290.599560: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=898816
              dd-4267  [003] ....  8290.599589: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=904960
              dd-4267  [000] ....  8290.601759: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=907008
              dd-4267  [000] ....  8290.604180: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=909056
              dd-4267  [000] ....  8290.606776: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8290.606967: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [001] ....  8290.608163: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=900864
              dd-4267  [003] ....  8290.616932: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [001] ....  8290.807466: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=902912
       flush-8:0-4272  [001] ....  8290.815087: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=904960
       flush-8:0-4272  [000] ....  8290.825087: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=907008
       flush-8:0-4272  [000] ....  8290.833291: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=909056
       flush-8:0-4272  [000] ....  8290.837230: writeback_pages_written: 13056
              dd-4267  [003] ....  8290.898320: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=909824
       flush-8:0-4272  [000] ....  8290.898341: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=909824
              dd-4267  [003] ....  8290.898342: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=909824
              dd-4267  [003] ....  8290.900086: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=911872
              dd-4267  [003] ....  8290.902263: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=913920
       flush-8:0-4272  [000] ....  8290.903241: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=909824
              dd-4267  [003] ....  8290.903966: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=915968
              dd-4267  [003] ....  8290.905605: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=918016
       flush-8:0-4272  [000] ....  8290.907371: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=911872
       flush-8:0-4272  [000] ....  8290.914048: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=913920
       flush-8:0-4272  [000] ....  8290.919004: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=915968
       flush-8:0-4272  [000] ....  8290.923475: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=918016
       flush-8:0-4272  [000] ....  8290.927621: writeback_pages_written: 10240
              dd-4267  [001] ....  8291.007124: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8291.055558: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=920064
       flush-8:0-4272  [000] ....  8291.055576: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=920064
              dd-4267  [001] ....  8291.055577: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=920064
       flush-8:0-4272  [000] ....  8291.058663: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=920064
       flush-8:0-4272  [000] ....  8291.058672: writeback_pages_written: 256
              dd-4267  [000] ....  8291.120020: writeback_congestion_wait: usec_timeout=100000 usec_delayed=65000
              dd-4267  [000] ....  8291.228552: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=920320
       flush-8:0-4272  [002] ....  8291.228569: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=920320
              dd-4267  [000] ....  8291.228570: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=920320
              dd-4267  [000] ....  8291.230580: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=922368
              dd-4267  [000] ....  8291.232100: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=924416
       flush-8:0-4272  [002] ....  8291.232583: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=920320
       flush-8:0-4272  [002] ....  8291.235968: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=922368
       flush-8:0-4272  [002] ....  8291.241987: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=924416
              dd-4267  [000] ....  8291.242861: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8291.246767: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=926464
       flush-8:0-4272  [002] ....  8291.247352: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=926464
       flush-8:0-4272  [002] ....  8291.249590: writeback_pages_written: 6400
              dd-4267  [000] ....  8291.303192: writeback_congestion_wait: usec_timeout=100000 usec_delayed=57000
              dd-4267  [002] ....  8291.418747: writeback_congestion_wait: usec_timeout=100000 usec_delayed=70000
              dd-4267  [000] ....  8291.421290: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8291.430170: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8291.438818: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8291.448613: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8291.452546: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8291.458291: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8291.463059: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8291.501991: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [002] ....  8291.511822: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8291.521615: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8291.531477: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8291.541259: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8291.545313: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=926720
       flush-8:0-4272  [002] ....  8291.545327: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=926720
              dd-4267  [000] ....  8291.545329: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=926720
              dd-4267  [000] ....  8291.546859: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=928768
       flush-8:0-4272  [002] ....  8291.549677: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=926720
              dd-4267  [000] ....  8291.550805: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [000] ....  8291.550981: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [002] ....  8291.553082: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=928768
              dd-4267  [000] ....  8291.554588: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=929536
              dd-4267  [001] ....  8291.556446: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=931584
       flush-8:0-4272  [002] ....  8291.558013: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=929536
              dd-4267  [001] ....  8291.558925: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=933632
       flush-8:0-4272  [002] ....  8291.560978: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=931584
              dd-4267  [001] ....  8291.561309: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=935680
              dd-4267  [000] ....  8291.566775: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
       flush-8:0-4272  [002] ....  8291.567625: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=933632
       flush-8:0-4272  [000] ....  8291.691938: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=935680
       flush-8:0-4272  [000] ....  8291.696544: writeback_pages_written: 9984
              dd-4267  [000] ....  8291.755429: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8291.765681: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8291.769854: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=936704
       flush-8:0-4272  [002] ....  8291.769869: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=936704
              dd-4267  [000] ....  8291.769870: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=936704
              dd-4267  [000] ....  8291.771371: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [002] ....  8291.773234: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=936704
       flush-8:0-4272  [002] ....  8291.773241: writeback_pages_written: 256
              dd-4267  [002] ....  8291.775760: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8291.781707: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [000] ....  8291.784087: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8291.788791: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8291.809228: writeback_congestion_wait: usec_timeout=100000 usec_delayed=12000
              dd-4267  [000] ....  8291.811942: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8291.819305: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=936960
       flush-8:0-4272  [002] ....  8291.819318: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=936960
              dd-4267  [000] ....  8291.819319: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=936960
              dd-4267  [000] ....  8291.820882: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=939008
              dd-4267  [000] ....  8291.822428: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=941056
       flush-8:0-4272  [002] ....  8291.822757: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=936960
              dd-4267  [000] ....  8291.824434: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=943104
       flush-8:0-4272  [002] ....  8291.825686: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=939008
              dd-4267  [000] ....  8291.827048: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8291.827226: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [002] ....  8291.831471: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=941056
       flush-8:0-4272  [002] ....  8291.836478: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=943104
       flush-8:0-4272  [002] ....  8291.839973: writeback_pages_written: 7680
              dd-4267  [002] ....  8291.930645: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8291.992029: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=944640
       flush-8:0-4272  [000] ....  8291.992046: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=944640
              dd-4267  [002] ....  8291.992056: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=944640
       flush-8:0-4272  [000] ....  8291.994832: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=944640
       flush-8:0-4272  [000] ....  8291.994838: writeback_pages_written: 256
              dd-4267  [002] ....  8292.030123: writeback_congestion_wait: usec_timeout=100000 usec_delayed=38000
              dd-4267  [002] ....  8292.050503: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [002] ....  8292.054831: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [000] ....  8292.060738: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8292.065103: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8292.071061: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8292.075356: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8292.081306: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8292.087263: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8292.091595: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8292.097536: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8292.122806: writeback_congestion_wait: usec_timeout=100000 usec_delayed=24000
              dd-4267  [002] ....  8292.127164: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8292.168759: writeback_congestion_wait: usec_timeout=100000 usec_delayed=38000
              dd-4267  [000] ....  8292.173114: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8292.235518: writeback_congestion_wait: usec_timeout=100000 usec_delayed=59000
              dd-4267  [002] ....  8292.240176: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8292.247496: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8292.257469: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8292.267454: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [002] ....  8292.273257: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [000] ....  8292.277430: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8292.283231: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8292.287413: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8292.293190: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8292.297383: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8292.303189: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8292.305193: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=944896
       flush-8:0-4272  [000] ....  8292.305204: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=944896
              dd-4267  [002] ....  8292.305205: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=944896
              dd-4267  [002] ....  8292.306738: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=946944
              dd-4267  [002] ....  8292.309833: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=948992
              dd-4267  [000] ....  8292.313157: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
       flush-8:0-4272  [000] ....  8292.735920: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=944896
       flush-8:0-4272  [000] ....  8292.740409: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=946944
       flush-8:0-4272  [001] ....  8292.748894: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=948992
       flush-8:0-4272  [001] ....  8292.752966: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=33828 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [001] ....  8292.752968: writeback_queue_io: bdi 8:0: older=4302934457 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [001] ....  8292.752968: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=33828 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [001] ....  8292.752971: writeback_pages_written: 4864
              dd-4267  [002] ....  8292.865367: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=949760
       flush-8:0-4272  [001] ....  8292.865385: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=949760
              dd-4267  [002] ....  8292.865386: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=949760
              dd-4267  [002] ....  8292.867161: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=951808
              dd-4267  [002] ....  8292.868811: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=953856
              dd-4267  [002] ....  8292.870466: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=955904
       flush-8:0-4272  [001] ....  8292.871705: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=949760
              dd-4267  [002] ....  8292.873150: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=957952
              dd-4267  [002] ....  8292.875731: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=960000
       flush-8:0-4272  [001] ....  8292.876282: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=951808
       flush-8:0-4272  [001] ....  8292.883855: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=953856
       flush-8:0-4272  [001] ....  8292.889576: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=955904
       flush-8:0-4272  [001] ....  8292.894750: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=957952
       flush-8:0-4272  [001] ....  8292.899538: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=960000
       flush-8:0-4272  [001] ....  8292.901892: writeback_pages_written: 11008
              dd-4267  [000] ....  8292.976950: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8293.020761: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=960768
       flush-8:0-4272  [003] ....  8293.020778: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=960768
              dd-4267  [001] ....  8293.021248: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=960768
       flush-8:0-4272  [003] ....  8293.023781: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=960768
       flush-8:0-4272  [003] ....  8293.023789: writeback_pages_written: 256
              dd-4267  [001] ....  8293.083843: writeback_congestion_wait: usec_timeout=100000 usec_delayed=62000
              dd-4267  [003] ....  8293.097927: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8293.108016: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8293.126388: writeback_congestion_wait: usec_timeout=100000 usec_delayed=14000
              dd-4267  [001] ....  8293.136458: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8293.146577: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8293.156636: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8293.166659: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8293.170791: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=961024
       flush-8:0-4272  [003] ....  8293.170804: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=961024
              dd-4267  [001] ....  8293.170805: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=961024
              dd-4267  [001] ....  8293.172326: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=963072
              dd-4267  [001] ....  8293.173830: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=965120
       flush-8:0-4272  [003] ....  8293.174168: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=961024
              dd-4267  [001] ....  8293.175352: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=967168
       flush-8:0-4272  [003] ....  8293.177053: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=963072
       flush-8:0-4272  [003] ....  8293.182445: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=965120
              dd-4267  [002] ....  8293.282859: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
       flush-8:0-4272  [003] ....  8293.321004: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=967168
       flush-8:0-4272  [003] ....  8293.325879: writeback_pages_written: 8192
              dd-4267  [002] ....  8293.424762: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [002] ....  8293.428997: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8293.434825: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8293.439025: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8293.444901: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8293.449071: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8293.454973: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8293.459202: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8293.465002: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8293.469260: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8293.475132: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8293.479335: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8293.485174: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8293.489424: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8293.495268: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8293.499474: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8293.505310: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [002] ....  8293.509565: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8293.515405: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8293.519611: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8293.529871: writeback_congestion_wait: usec_timeout=100000 usec_delayed=8000
              dd-4267  [001] ....  8293.539968: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8293.550034: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8293.560097: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8293.570116: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [003] ....  8293.574101: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=969216
       flush-8:0-4272  [001] ....  8293.574114: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=969216
              dd-4267  [003] ....  8293.574119: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=969216
       flush-8:0-4272  [001] ....  8293.578134: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=969216
       flush-8:0-4272  [001] ....  8293.578142: writeback_pages_written: 256
              dd-4267  [001] ....  8293.580245: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8293.583993: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=969472
       flush-8:0-4272  [003] ....  8293.584005: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=969472
              dd-4267  [001] ....  8293.584006: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=969472
              dd-4267  [001] ....  8293.585670: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=971520
       flush-8:0-4272  [003] ....  8293.587582: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=969472
              dd-4267  [001] ....  8293.590259: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
       flush-8:0-4272  [003] ....  8293.590489: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=971520
       flush-8:0-4272  [003] ....  8293.593743: writeback_pages_written: 2560
              dd-4267  [001] ....  8293.594087: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=972032
       flush-8:0-4272  [003] ....  8293.594095: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=972032
              dd-4267  [001] ....  8293.594104: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=972032
              dd-4267  [001] ....  8293.595677: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974080
       flush-8:0-4272  [003] ....  8293.596789: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=972032
       flush-8:0-4272  [003] ....  8293.599663: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974080
       flush-8:0-4272  [003] ....  8293.601695: writeback_pages_written: 2304
              dd-4267  [001] ....  8293.620507: writeback_congestion_wait: usec_timeout=100000 usec_delayed=24000
              dd-4267  [001] ....  8293.640663: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8293.648906: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8293.657732: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8293.662163: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [003] ....  8293.666667: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8293.719584: writeback_congestion_wait: usec_timeout=100000 usec_delayed=48000
              dd-4267  [001] ....  8293.721790: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8293.724056: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8293.731220: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8293.741252: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8293.748399: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8293.752893: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8293.757387: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [003] ....  8293.761705: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8293.768565: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8293.773111: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [003] ....  8293.784513: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8293.794486: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8293.804558: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8293.816096: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [000] ....  8293.820338: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [002] ....  8293.822648: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [002] ....  8293.824862: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8293.827337: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8293.831982: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8293.850276: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974336
       flush-8:0-4272  [003] ....  8293.850291: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974336
              dd-4267  [001] ....  8293.850292: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974336
       flush-8:0-4272  [003] ....  8293.854749: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974336
       flush-8:0-4272  [003] ....  8293.854756: writeback_pages_written: 256
              dd-4267  [003] ....  8293.949419: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [001] ....  8293.950099: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8293.954661: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8293.957135: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [003] ....  8293.959495: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974592
       flush-8:0-4272  [001] ....  8293.959505: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974592
              dd-4267  [003] ....  8293.959506: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974592
       flush-8:0-4272  [001] ....  8293.964429: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=974592
              dd-4267  [003] ....  8293.964831: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=976640
              dd-4267  [003] ....  8293.966420: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=978688
       flush-8:0-4272  [001] ....  8293.967125: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=976640
              dd-4267  [003] ....  8293.967957: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=980736
       flush-8:0-4272  [001] ....  8293.974557: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=978688
              dd-4267  [000] ....  8293.976671: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=982784
              dd-4267  [000] ....  8293.979998: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=984832
              dd-4267  [000] ....  8293.980544: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [001] ....  8293.981418: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=980736
       flush-8:0-4272  [001] ....  8293.987940: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=982784
              dd-4267  [000] ....  8293.988730: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [001] ....  8293.994540: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=984832
       flush-8:0-4272  [001] ....  8293.996489: writeback_pages_written: 10496
              dd-4267  [001] ....  8294.029115: writeback_congestion_wait: usec_timeout=100000 usec_delayed=27000
              dd-4267  [001] ....  8294.064816: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=985088
       flush-8:0-4272  [003] ....  8294.064833: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=985088
              dd-4267  [001] ....  8294.064834: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=985088
       flush-8:0-4272  [003] ....  8294.067992: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=985088
       flush-8:0-4272  [003] ....  8294.068000: writeback_pages_written: 256
              dd-4267  [001] ....  8294.068875: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.078863: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.088870: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8294.098835: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.108811: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.118806: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.122874: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=985344
       flush-8:0-4272  [003] ....  8294.122890: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=985344
              dd-4267  [001] ....  8294.122895: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=985344
              dd-4267  [001] ....  8294.124491: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=987392
              dd-4267  [001] ....  8294.126057: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=989440
       flush-8:0-4272  [003] .N..  8294.126285: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=985344
              dd-4267  [001] ....  8294.127632: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=991488
       flush-8:0-4272  [003] ....  8294.129080: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=987392
       flush-8:0-4272  [003] ....  8294.134718: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=989440
       flush-8:0-4272  [003] ....  8294.139698: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1792 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=991488
       flush-8:0-4272  [003] ....  8294.143930: writeback_pages_written: 7936
              dd-4267  [000] ....  8294.154721: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993280
       flush-8:0-4272  [003] ....  8294.154740: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993280
              dd-4267  [000] ....  8294.154741: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993280
       flush-8:0-4272  [003] ....  8294.156714: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993280
       flush-8:0-4272  [003] ....  8294.156722: writeback_pages_written: 256
              dd-4267  [002] ....  8294.254308: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [002] ....  8294.351041: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993536
       flush-8:0-4272  [003] ....  8294.351061: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993536
              dd-4267  [002] ....  8294.351067: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993536
       flush-8:0-4272  [003] ....  8294.355385: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993536
       flush-8:0-4272  [003] ....  8294.355392: writeback_pages_written: 256
              dd-4267  [003] ....  8294.429885: writeback_congestion_wait: usec_timeout=100000 usec_delayed=74000
              dd-4267  [001] ....  8294.432726: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8294.435228: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8294.449908: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8294.455841: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8294.461793: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.466140: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] .N..  8294.472105: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.476408: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8294.482332: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.486666: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.492631: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.496951: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8294.502897: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.507247: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8294.513196: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.517488: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8294.523451: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.527785: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.533730: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.538058: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8294.544007: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8294.548348: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8294.613136: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8294.617232: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.622950: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8294.627046: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8294.632738: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8294.646693: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [001] ....  8294.656647: writeback_congestion_wait: usec_timeout=100000 usec_delayed=10000
              dd-4267  [001] ....  8294.662184: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.676087: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8294.680177: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.685900: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8294.690000: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8294.695720: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8294.699792: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.705508: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8294.709607: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.715336: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.719415: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.725125: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [003] ....  8294.729222: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.734952: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.739050: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.744735: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8294.748838: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.754564: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8294.758690: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8294.768628: writeback_congestion_wait: usec_timeout=100000 usec_delayed=10000
              dd-4267  [001] ....  8294.827256: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [000] ....  8294.829463: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [000] ....  8294.831574: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8294.833749: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [002] ....  8294.835917: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8294.849642: writeback_congestion_wait: usec_timeout=100000 usec_delayed=13000
              dd-4267  [001] ....  8294.851883: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8294.926921: writeback_congestion_wait: usec_timeout=100000 usec_delayed=71000
              dd-4267  [003] ....  8294.929117: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8294.931419: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
              dd-4267  [003] ....  8294.936759: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993792
       flush-8:0-4272  [001] ....  8294.936771: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993792
              dd-4267  [003] ....  8294.936776: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993792
              dd-4267  [003] ....  8294.938346: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=995840
              dd-4267  [003] ....  8294.939895: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=997888
              dd-4267  [003] ....  8294.941461: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=999936
       flush-8:0-4272  [001] ....  8294.942432: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=993792
              dd-4267  [003] ....  8294.943044: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1001984
              dd-4267  [003] ....  8294.944594: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1004032
       flush-8:0-4272  [001] ....  8294.945567: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=995840
              dd-4267  [003] ....  8294.948509: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1006080
              dd-4267  [000] ....  8294.950142: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1008128
       flush-8:0-4272  [000] ....  8294.954509: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=997888
       flush-8:0-4272  [000] ....  8295.060073: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=999936
       flush-8:0-4272  [001] ....  8295.070258: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1001984
       flush-8:0-4272  [001] ....  8295.079359: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1004032
       flush-8:0-4272  [001] ....  8295.086159: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1006080
       flush-8:0-4272  [001] ....  8295.092211: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1008128
       flush-8:0-4272  [001] ....  8295.095943: writeback_pages_written: 15360
              dd-4267  [002] ....  8295.149629: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1009152
       flush-8:0-4272  [001] ....  8295.149649: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1009152
              dd-4267  [002] ....  8295.149661: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1009152
       flush-8:0-4272  [001] ....  8295.154294: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=768 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1009152
              dd-4267  [002] ....  8295.154608: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1009920
       flush-8:0-4272  [001] ....  8295.155299: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1024 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1009920
              dd-4267  [002] ....  8295.155309: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1010688
              dd-4267  [002] ....  8295.156976: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1012736
              dd-4267  [002] ....  8295.158592: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1014784
       flush-8:0-4272  [001] ....  8295.159917: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1010688
       flush-8:0-4272  [001] ....  8295.163902: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1012736
              dd-4267  [002] ....  8295.164768: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1016832
       flush-8:0-4272  [001] ....  8295.170302: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1014784
       flush-8:0-4272  [001] ....  8295.175152: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=1536 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1016832
       flush-8:0-4272  [001] ....  8295.178552: writeback_pages_written: 9216
              dd-4267  [000] ....  8295.265787: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [003] ....  8295.368201: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8295.377759: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.381826: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8295.387549: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.391664: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.397376: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8295.401466: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [003] ....  8295.407182: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.411274: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.417013: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8295.421116: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [003] ....  8295.426809: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.430867: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.436645: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8295.446602: writeback_congestion_wait: usec_timeout=100000 usec_delayed=7000
              dd-4267  [001] ....  8295.450132: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1018368
       flush-8:0-4272  [003] ....  8295.450147: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1018368
              dd-4267  [001] ....  8295.450152: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1018368
              dd-4267  [001] ....  8295.454012: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1020416
       flush-8:0-4272  [003] ....  8295.454533: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1018368
              dd-4267  [001] ....  8295.455564: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1022464
       flush-8:0-4272  [003] ....  8295.457413: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1020416
              dd-4267  [000] ....  8295.459310: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1024512
              dd-4267  [001] ....  8295.460273: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
       flush-8:0-4272  [003] ....  8295.463336: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1022464
       flush-8:0-4272  [003] ....  8295.468614: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1024512
       flush-8:0-4272  [003] ....  8295.470473: writeback_pages_written: 6400
              dd-4267  [003] ....  8295.561536: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
              dd-4267  [000] ....  8295.619605: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1024768
       flush-8:0-4272  [003] ....  8295.619628: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1024768
              dd-4267  [000] ....  8295.619629: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1024768
       flush-8:0-4272  [003] ....  8295.624325: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1024768
       flush-8:0-4272  [003] ....  8295.624334: writeback_pages_written: 256
              dd-4267  [001] ....  8295.626436: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.630738: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.636693: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8295.641010: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.646977: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8295.651282: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.657228: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8295.661569: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.667523: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [001] ....  8295.671798: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.677742: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8295.682120: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.688077: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.698626: writeback_congestion_wait: usec_timeout=100000 usec_delayed=9000
              dd-4267  [001] ....  8295.708811: writeback_congestion_wait: usec_timeout=100000 usec_delayed=5000
              dd-4267  [003] ....  8295.719075: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [001] ....  8295.724796: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.729170: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.735113: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8295.739452: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8295.741771: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1025024
       flush-8:0-4272  [001] ....  8295.741782: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1025024
              dd-4267  [003] ....  8295.741783: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1025024
              dd-4267  [003] ....  8295.745338: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
       flush-8:0-4272  [001] ....  8295.745929: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1025024
       flush-8:0-4272  [001] ....  8295.745935: writeback_pages_written: 256
              dd-4267  [001] ....  8295.749693: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.755677: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8295.760008: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.765967: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.770295: writeback_congestion_wait: usec_timeout=100000 usec_delayed=1000
              dd-4267  [001] ....  8295.776226: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [003] ....  8295.780561: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.786491: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8295.790761: writeback_congestion_wait: usec_timeout=100000 usec_delayed=3000
              dd-4267  [001] ....  8295.796727: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [003] ....  8295.801103: writeback_congestion_wait: usec_timeout=100000 usec_delayed=2000
              dd-4267  [001] ....  8295.807061: writeback_congestion_wait: usec_timeout=100000 usec_delayed=4000
              dd-4267  [001] ....  8295.808918: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1025280
       flush-8:0-4272  [003] ....  8295.808931: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1025280
              dd-4267  [001] ....  8295.808942: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1025280
              dd-4267  [001] ....  8295.810446: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1027328
              dd-4267  [001] ....  8295.811934: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1029376
       flush-8:0-4272  [003] ....  8295.813114: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1025280
              dd-4267  [001] ....  8295.815817: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1031424
       flush-8:0-4272  [003] ....  8295.816182: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1027328
              dd-4267  [000] ....  8295.821315: writeback_queue: bdi 8:0: sb_dev 0:0 nr_pages=256 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1033472
       flush-8:0-4272  [003] ....  8295.822702: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1029376
              dd-4267  [000] ....  8295.827544: writeback_congestion_wait: usec_timeout=100000 usec_delayed=6000
              dd-4267  [000] ....  8295.827734: writeback_congestion_wait: usec_timeout=100000 usec_delayed=0
       flush-8:0-4272  [003] ....  8295.828932: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=2048 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1031424
              dd-4267  [001] ....  8295.864365: writeback_congestion_wait: usec_timeout=100000 usec_delayed=33000
       flush-8:0-4272  [001] ....  8296.051145: writeback_exec: bdi 8:0: sb_dev 0:0 nr_pages=512 sync_mode=0 kupdate=0 range_cyclic=0 background=0 reason=pageout ino=13 offset=1033472
       flush-8:0-4272  [001] ....  8296.054689: writeback_pages_written: 8704
       flush-8:0-4272  [003] ....  8301.051487: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=21838 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8301.051491: writeback_queue_io: bdi 8:0: older=4302942761 age=30000 enqueue=1 reason=periodic
       flush-8:0-4272  [003] ....  8301.051498: writeback_single_inode: bdi 8:0: ino=13 state= dirtied_when=4302938315 age=39 index=0 to_write=1024 wrote=0
       flush-8:0-4272  [003] ....  8301.051501: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=21838 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8301.051502: writeback_start: bdi 8:0: sb_dev 0:0 nr_pages=21838 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8301.051502: writeback_queue_io: bdi 8:0: older=4302942761 age=30000 enqueue=0 reason=periodic
       flush-8:0-4272  [003] ....  8301.051502: writeback_written: bdi 8:0: sb_dev 0:0 nr_pages=21838 sync_mode=0 kupdate=1 range_cyclic=1 background=0 reason=periodic ino=0 offset=0
       flush-8:0-4272  [003] ....  8301.051505: writeback_pages_written: 0

[-- Attachment #2: balance_dirty_pages-task-bw-300.png --]
[-- Type: image/png, Size: 41820 bytes --]

[-- Attachment #3: balance_dirty_pages-task-bw.png --]
[-- Type: image/png, Size: 34173 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-12  3:10           ` Wu Fengguang
  2012-02-12  6:45             ` Wu Fengguang
@ 2012-02-13 15:43             ` Jan Kara
  2012-02-14 10:03               ` Wu Fengguang
  1 sibling, 1 reply; 33+ messages in thread
From: Jan Kara @ 2012-02-13 15:43 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Rik van Riel, Greg Thelen, Jan Kara, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

On Sun 12-02-12 11:10:29, Wu Fengguang wrote:
> On Sat, Feb 11, 2012 at 09:55:38AM -0500, Rik van Riel wrote:
> > On 02/11/2012 07:44 AM, Wu Fengguang wrote:
> > 
> > >Note that it's data for XFS. ext4 seems to have some problem with the
> > >workload: the majority of pages are found to be writeback pages, and the
> > >flusher ends up blocking on the unconditional wait_on_page_writeback()
> > >in write_cache_pages_da() from time to time...
> 
> Sorry I overlooked the WB_SYNC_NONE test before the wait_on_page_writeback()
> call! And the issue can no longer be reproduced anyway. ext4 performs pretty
> well now; here is the result for a single memcg dd:
> 
>         dd if=/dev/zero of=/fs/f$i bs=4k count=1M
> 
>         4294967296 bytes (4.3 GB) copied, 44.5759 s, 96.4 MB/s
> 
> iostat -kx 3
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.25    0.00   11.03   28.54    0.00   60.19
>            0.25    0.00   13.71   16.65    0.00   69.39
>            0.17    0.00    8.41   24.81    0.00   66.61
>            0.25    0.00   15.00   19.63    0.00   65.12
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00    17.00    0.00  178.33     0.00 90694.67  1017.14   111.34  520.88   5.45  97.23
> sda               0.00     0.00    0.00  193.67     0.00 98816.00  1020.48    86.22  496.81   4.81  93.07
> sda               0.00     3.33    0.00  182.33     0.00 92345.33  1012.93   101.14  623.98   5.49 100.03
> sda               0.00     3.00    0.00  187.00     0.00 95586.67  1022.32    89.36  441.70   4.96  92.70
> 
> > >XXX: commit NFS unstable pages via write_inode()
> > >XXX: the added congestion_wait() may be undesirable in some situations
> > 
> > Even with these caveats, this seems to be the right way forward.
> 
> > Acked-by: Rik van Riel <riel@redhat.com>
> 
> Thank you!
>  
> Here is the updated patch.
> - ~10ms write-around chunk size, adaptive to the bdi bandwidth
> - cleanup flush_inode_page()
> 
> Thanks,
> Fengguang
> ---
> Subject: writeback: introduce the pageout work
> Date: Thu Jul 29 14:41:19 CST 2010
> 
> This relays file pageout IOs to the flusher threads.
> 
> The ultimate target is to gracefully handle the LRU lists full of
> dirty/writeback pages.
> 
> 1) I/O efficiency
> 
> The flusher will piggyback the nearby ~10ms worth of dirty pages for I/O.
> 
> This takes advantage of the time/spatial locality in most workloads: the
> nearby pages of one file are typically populated into the LRU at the same
> time, hence are likely to be close to each other in the LRU list. Writing
> them in one shot helps clean more pages effectively for page reclaim.
> 
> 2) OOM avoidance and scan rate control
> 
> Typically we do the LRU scan without rate control and quickly get enough
> clean pages when the LRU lists are not full of dirty pages.
> 
> Or we can still get a number of freshly cleaned pages (moved to the LRU tail
> by end_page_writeback()) when the queued pageout I/O is completed within
> tens of milliseconds.
> 
> However, if the LRU list is small and full of dirty pages, it can be
> fully scanned in no time and we go OOM before the flusher manages to clean
> enough pages.
> 
> A simple yet reliable scheme is employed to avoid OOM and keep scan rate
> in sync with the I/O rate:
> 
> 	if (PageReclaim(page))
> 		congestion_wait(BLK_RW_ASYNC, HZ/10);
> 
> PG_reclaim plays the key role. When a dirty page is encountered, we
> queue I/O for it, set PG_reclaim and put it back to the LRU head.
> So if a PG_reclaim page is encountered again, it means the dirty page
> has not yet been cleaned by the flusher after a full zone scan. It
> indicates we are scanning faster than the I/O completes and shall take a nap.
> 
> The runtime behavior on a fully dirtied small LRU list would be:
> It will start with a quick scan of the list, queuing all pages for I/O.
> Then the scan will be slowed down by the PG_reclaim pages *adaptively*
> to match the I/O bandwidth.
> 
> 3) writeback work coordinations
> 
> To avoid memory allocations at page reclaim, a mempool for struct
> wb_writeback_work is created.
> 
> wakeup_flusher_threads() is removed because it can easily delay the
> more targeted pageout works and even exhaust the mempool reservations.
> It's also found to be I/O inefficient because it frequently submits
> writeback works with small ->nr_pages.
> 
> Background/periodic works will quit automatically, so as to clean the
> pages under reclaim ASAP. However, for now the sync work can still block
> us for a long time.
> 
> Jan Kara: limit the search scope. Note that the limited search and work
> pool is not a big problem: 1000 IOs in flight are typically more than
> enough to saturate the disk. And the overhead of searching the work
> list didn't even show up in the perf report.
> 
> 4) test case
> 
> Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):
> 
> 	mkdir /cgroup/x
> 	echo 100M > /cgroup/x/memory.limit_in_bytes
> 	echo $$ > /cgroup/x/tasks
> 
> 	for i in `seq 2`
> 	do
> 		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
> 	done
> 
> Before patch, the dd tasks are quickly OOM killed.
> After patch, they run well with reasonably good performance and overheads:
> 
> 1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
> 1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s
  I wonder what happens if you run:
       mkdir /cgroup/x
       echo 100M > /cgroup/x/memory.limit_in_bytes
       echo $$ > /cgroup/x/tasks

       for (( i = 0; i < 2; i++ )); do
         mkdir /fs/d$i
         for (( j = 0; j < 5000; j++ )); do 
           dd if=/dev/zero of=/fs/d$i/f$j bs=1k count=50
         done &
       done
  Because for small files the writearound logic won't help much... Also the
number of work items queued might become interesting.

Another common case to test - run the 'slapadd' command in each cgroup to
create a big LDAP database. That does pretty much random IO on a big mmapped
DB file.

> +/*
> + * schedule writeback on a range of inode pages.
> + */
> +static struct wb_writeback_work *
> +bdi_flush_inode_range(struct backing_dev_info *bdi,
> +		      struct inode *inode,
> +		      pgoff_t offset,
> +		      pgoff_t len,
> +		      bool wait)
> +{
> +	struct wb_writeback_work *work;
> +
> +	if (!igrab(inode))
> +		return ERR_PTR(-ENOENT);
  One technical note here: If the inode is deleted while it is queued, this
reference will keep it alive until the flusher thread gets to it. Then, when
the flusher thread puts its reference, the inode will get deleted in flusher
thread context. I don't see an immediate problem in that, but it might be
surprising sometimes. Another problem I see is that if you try to
unmount the filesystem while the work item is queued, you'll get EBUSY for
no apparent reason (for userspace).
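
  Just to make that lifetime concrete, the flusher side would roughly do the
following (the work fields are guessed from the patch, and
writeback_inode_range() is a hypothetical stand-in for the real range
writeback, so this is only a sketch):

#include <linux/fs.h>

/* Runs in the flusher thread for a queued pageout work item. */
static void wb_flush_inode_range_work(struct wb_writeback_work *work)
{
	struct inode *inode = work->inode;	/* pinned by igrab() at queue time */

	/* hypothetical helper writing back work->offset .. work->offset + work->len */
	writeback_inode_range(inode, work->offset, work->len);

	/*
	 * Possibly the last reference: a deleted inode is then torn down
	 * here, in flusher context, and until this iput() the pinned inode
	 * also makes umount of the filesystem fail with -EBUSY.
	 */
	iput(inode);
}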

									Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.)
  2012-02-09 13:50     ` Wu Fengguang
@ 2012-02-13 18:40       ` Ying Han
  0 siblings, 0 replies; 33+ messages in thread
From: Ying Han @ 2012-02-13 18:40 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Mel Gorman, hannes, lsf-pc, KAMEZAWA Hiroyuki

On Thu, Feb 9, 2012 at 5:50 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
> On Wed, Feb 08, 2012 at 12:54:33PM -0800, Ying Han wrote:
>> On Wed, Feb 8, 2012 at 1:31 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
>> > On Tue, Feb 07, 2012 at 11:55:05PM -0800, Greg Thelen wrote:
>> >> On Fri, Feb 3, 2012 at 1:40 AM, Wu Fengguang <fengguang.wu@intel.com> wrote:
>> >> > If moving dirty pages out of the memcg to the 20% global dirty pages
>> >> > pool on page reclaim, the above OOM can be avoided. It does change the
>> >> > meaning of memory.limit_in_bytes in that the memcg tasks can now
>> >> > actually consume more pages (up to the shared global 20% dirty limit).
>> >>
>> >> This seems like an easy change, but unfortunately the global 20% pool
>> >> has some shortcomings for my needs:
>> >>
>> >> 1. the global 20% pool is not moderated.  One cgroup can dominate it
>> >>     and deny service to other cgroups.
>> >
>> > It is moderated by balance_dirty_pages() -- in terms of dirty ratelimit.
>> > And you have the freedom to control the bandwidth allocation with some
>> > async write I/O controller.
>> >
>> > Even though there is no direct control of dirty pages, we can roughly
>> > get it as the side effect of rate control. Given
>> >
>> >        ratelimit_cgroup_A = 2 * ratelimit_cgroup_B
>> >
>> > There will naturally be more dirty pages for cgroup A to be worked by
>> > the flusher. And the dirty pages will be roughly balanced around
>> >
>> >        nr_dirty_cgroup_A = 2 * nr_dirty_cgroup_B
>> >
>> > when writeout bandwidths for their dirty pages are equal.
>> >
>> >> 2. the global 20% pool is free, unaccounted memory.  Ideally cgroups only
>> >>     use the amount of memory specified in their memory.limit_in_bytes.  The
>> >>     goal is to sell portions of a system.  Global resource like the 20% are an
>> >>     undesirable system-wide tax that's shared by jobs that may not even
>> >>     perform buffered writes.
>> >
>> > Right, it is the shortcoming.
>> >
>> >> 3. Setting aside 20% extra memory for system wide dirty buffers is a lot of
>> >>     memory.  This becomes a larger issue when the global dirty_ratio is
>> >>     higher than 20%.
>> >
>> > Yeah the global pool scheme does mean that you'd better allocate at
>> > most 80% memory to individual memory cgroups, otherwise it's possible
>> > for a tiny memcg doing dd writes to push dirty pages to global LRU and
>> > *squeeze* the size of other memcgs.
>> >
>> > However I guess it should be mitigated by the fact that
>> >
>> > - we typically already reserve some space for the root memcg
>>
>> Can you give more details on that? AFAIK, we don't treat root cgroup
>> differently than other sub-cgroups, except root cgroup doesn't have
>> limit.
>
> OK. I'd imagine this to be the typical usage for desktop and quite a
> few servers: a few cgroups are employed to limit the resource usage
> for selected tasks (such as backups, background GUI tasks, cron tasks,
> etc.). These systems are still running mainly in the global context.

The use case makes sense, but I'm still not sure about the "reservation
for root" part.

Tasks not running under cgroups run in the global context, as you said.
However, there is no memory limit for the root cgroup, and it only
triggers global reclaim when running short of memory. That doesn't sound
like a straightforward configuration for environments that badly need
memory isolation. The worst part is the unpredictability: we have no
control over how many dirty-and-later-clean pages leak to root and stay
there.

--Ying

>
>> In general, I don't like the idea of shared pool in root for all the
>> dirty pages.
>>
>> Imagining a system which has nothing running under root and every
>> application runs within sub-cgroup. It is easy to track and limit each
>> cgroup's memory usage, but not the pages being moved to root. We have
>> been experiencing difficulties of tracking pages being re-parented to
>> root, and this will make it even harder.
>
> So you want to push memcg allocations to the hardware limits. This is
> a worthwhile target for cloud servers that run a number of well
> contained jobs.
>
> I guess it can be achieved reasonably well with the global shared
> dirty pool.  Let's discuss the two major cases.
>
> 1) no change of behavior
>
> For example, when the system memory is divided equally among 10 cgroups
> each running 1 dd. In this case, the dirty pages will be contained
> within the memcg LRUs. Page reclaim rarely encounters any dirty pages.
> There is no moving to the global LRU, so no side effect at all.

>
> 2) small memcg squeezing other memcg(s)
>
> When system memory is divided between 1 small memcg A and 1 large memcg B,
> each running a dd task. In this case the dirty pages from A will be
> moved to the global LRU, and global page reclaims will be triggered.
>
> In the end it will be balanced around
>
> - global LRU: 10% memory (which are A's dirty pages)
> - memcg B: 90% memory
> - memcg A: a tiny ignorable fraction of memory
>
> Now job B uses 10% less memory than w/o the global dirty pool scheme.
> I guess this is bad for some type of jobs.
>
> However my question is, will the typical demand be more flexible?
> Something like the "minimal" and "recommended" setup: "this job
> requires at least XXX memory and better at YYY memory", rather than
> some fixed size memory allocation.
>
> The minimal requirement should be trivially satisfied by adding a
> memcg watermark that protects the memcg LRU from being reclaimed
> when dropped under it.
>
> Then the cloud server could be configured to
>
>        sum(memcg.limit_in_bytes) / memtotal = 100%
>        sum(memcg.minimal_size)   / memtotal < 100% - dirty_ratio
>
> Which makes a simple and flexibly partitioned system.
>
> Thanks,
> Fengguang
>
>> > - 20% dirty ratio is mostly an overkill for large memory systems.
>> >  It's often enough to hold 10-30s worth of dirty data for them, which
>> >  is 1-3GB for one 100MB/s disk. This is the reason vm.dirty_bytes is
>> >  introduced: someone wants to do some <1% dirty ratio.
>> >
>> > Thanks,
>> > Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-13 15:43             ` Jan Kara
@ 2012-02-14 10:03               ` Wu Fengguang
  2012-02-14 13:29                 ` Jan Kara
  0 siblings, 1 reply; 33+ messages in thread
From: Wu Fengguang @ 2012-02-14 10:03 UTC (permalink / raw)
  To: Jan Kara
  Cc: Rik van Riel, Greg Thelen, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

[-- Attachment #1: Type: text/plain, Size: 32079 bytes --]

On Mon, Feb 13, 2012 at 04:43:13PM +0100, Jan Kara wrote:
> On Sun 12-02-12 11:10:29, Wu Fengguang wrote:

> > 4) test case
> > 
> > Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):
> > 
> > 	mkdir /cgroup/x
> > 	echo 100M > /cgroup/x/memory.limit_in_bytes
> > 	echo $$ > /cgroup/x/tasks
> > 
> > 	for i in `seq 2`
> > 	do
> > 		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
> > 	done
> > 
> > Before patch, the dd tasks are quickly OOM killed.
> > After patch, they run well with reasonably good performance and overheads:
> > 
> > 1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
> > 1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s
>   I wonder what happens if you run:
>        mkdir /cgroup/x
>        echo 100M > /cgroup/x/memory.limit_in_bytes
>        echo $$ > /cgroup/x/tasks
> 
>        for (( i = 0; i < 2; i++ )); do
>          mkdir /fs/d$i
>          for (( j = 0; j < 5000; j++ )); do 
>            dd if=/dev/zero of=/fs/d$i/f$j bs=1k count=50
>          done &
>        done

That's a very good case, thanks!
 
>   Because for small files the writearound logic won't help much...

Right, it also means the native background work cannot be more I/O
efficient than the pageout works, except for the overhead of the extra
work items.

>   Also the number of work items queued might become interesting.

It turns out that the 1024 mempool reservations are not exhausted at
all (the patch below has a trace_printk on allocation failure and it
never triggered).

Here are the representative iostat lines on XFS (full "iostat -kx 1 20" log attached):

avg-cpu:  %user   %nice %system %iowait  %steal   %idle                                                                     
           0.80    0.00    6.03    0.03    0.00   93.14                                                                     
                                                                                                                            
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util                   
sda               0.00   205.00    0.00  163.00     0.00 16900.00   207.36     4.09   21.63   1.88  30.70                   

The attached dirtied/written progress graph looks interesting.
Although the iostat disk utilization is low, the "dirtied" progress
line is pretty straight and there is not a single congestion_wait event
in the trace log, which makes me wonder whether there are some unknown
blocking issues in the way.

> Another common case to test - run the 'slapadd' command in each cgroup to
> create a big LDAP database. That does pretty much random IO on a big mmapped
> DB file.

I've not used this. Will it need some configuration and a data feed?
fio looks handier to me for emulating mmap random IO.
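
For reference, the access pattern in question is roughly the one below (a
stand-alone userspace sketch; the file name and sizes are made up, and fio's
mmap ioengine with randwrite should generate a similar load):

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t size = 1UL << 30;		/* a 1GB "DB" file */
	int fd = open("/fs/db.img", O_RDWR | O_CREAT, 0644);
	char *map;
	long i;

	if (fd < 0 || ftruncate(fd, size))
		return 1;
	map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED)
		return 1;

	for (i = 0; i < (1L << 20); i++) {
		/* pick a random 4k page and dirty a small record in it */
		size_t off = ((size_t)random() % (size >> 12)) << 12;
		memset(map + off, (int)i, 512);
	}
	munmap(map, size);
	close(fd);
	return 0;
}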

> > +/*
> > + * schedule writeback on a range of inode pages.
> > + */
> > +static struct wb_writeback_work *
> > +bdi_flush_inode_range(struct backing_dev_info *bdi,
> > +		      struct inode *inode,
> > +		      pgoff_t offset,
> > +		      pgoff_t len,
> > +		      bool wait)
> > +{
> > +	struct wb_writeback_work *work;
> > +
> > +	if (!igrab(inode))
> > +		return ERR_PTR(-ENOENT);
>   One technical note here: If the inode is deleted while it is queued, this
> reference will keep it alive until the flusher thread gets to it. Then, when
> the flusher thread puts its reference, the inode will get deleted in flusher
> thread context. I don't see an immediate problem in that, but it might be
> surprising sometimes. Another problem I see is that if you try to
> unmount the filesystem while the work item is queued, you'll get EBUSY for
> no apparent reason (for userspace).

Yeah, we need to make umount work.

And I find that the pageout works seem to have some problems with ext4.
For example, this can be easily triggered with 10 dd tasks running
inside a memcg limited to 100MB:

[18006.858109] INFO: task jbd2/sda1-8:51294 blocked for more than 120 seconds.
[18006.866425] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[18006.876096] jbd2/sda1-8     D 0000000000000000  5464 51294      2 0x00000000
[18006.884729]  ffff88040b097c70 0000000000000046 ffff880823032310 ffff88040b096000
[18006.894356]  00000000001d2f00 00000000001d2f00 ffff8808230322a0 00000000001d2f00
[18006.904000]  ffff88040b097fd8 00000000001d2f00 ffff88040b097fd8 00000000001d2f00
[18006.913652] Call Trace:
[18006.916901]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
[18006.924134]  [<ffffffff81232aab>] ? jbd2_journal_commit_transaction+0x1d0/0x1281
[18006.933324]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
[18006.939879]  [<ffffffff810b0ddd>] ? lock_release_holdtime+0xa3/0xac
[18006.947410]  [<ffffffff81232aab>] ? jbd2_journal_commit_transaction+0x1d0/0x1281
[18006.956607]  [<ffffffff81a57904>] schedule+0x5a/0x5c
[18006.962677]  [<ffffffff81232ab0>] jbd2_journal_commit_transaction+0x1d5/0x1281
[18006.971683]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
[18006.978933]  [<ffffffff810738ce>] ? try_to_del_timer_sync+0xba/0xc8
[18006.986452]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
[18006.992999]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
[18006.999542]  [<ffffffff810738ce>] ? try_to_del_timer_sync+0xba/0xc8
[18007.007062]  [<ffffffff81073a6f>] ? del_timer_sync+0xbb/0xce
[18007.013898]  [<ffffffff810739b4>] ? process_timeout+0x10/0x10
[18007.020835]  [<ffffffff81237bc1>] kjournald2+0xcf/0x242
[18007.027187]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
[18007.033733]  [<ffffffff81237af2>] ? commit_timeout+0x10/0x10
[18007.040574]  [<ffffffff81086384>] kthread+0x95/0x9d
[18007.046542]  [<ffffffff81a61134>] kernel_thread_helper+0x4/0x10
[18007.053675]  [<ffffffff81a591b4>] ? retint_restore_args+0x13/0x13
[18007.061003]  [<ffffffff810862ef>] ? __init_kthread_worker+0x5b/0x5b
[18007.068521]  [<ffffffff81a61130>] ? gs_change+0x13/0x13
[18007.074878] no locks held by jbd2/sda1-8/51294.

Sometimes I also catch dd/ext4lazyinit/flush all stalling in start_this_handle:

[17985.439567] dd              D 0000000000000007  3616 61440      1 0x00000004
[17985.448088]  ffff88080d71b9b8 0000000000000046 ffff88081ec80070 ffff88080d71a000
[17985.457545]  00000000001d2f00 00000000001d2f00 ffff88081ec80000 00000000001d2f00
[17985.467168]  ffff88080d71bfd8 00000000001d2f00 ffff88080d71bfd8 00000000001d2f00
[17985.476647] Call Trace:
[17985.479843]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
[17985.487025]  [<ffffffff81230b9d>] ? start_this_handle+0x357/0x4ed
[17985.494313]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
[17985.500815]  [<ffffffff810b0ddd>] ? lock_release_holdtime+0xa3/0xac
[17985.508287]  [<ffffffff81230b9d>] ? start_this_handle+0x357/0x4ed
[17985.515575]  [<ffffffff81a57904>] schedule+0x5a/0x5c
[17985.521588]  [<ffffffff81230c39>] start_this_handle+0x3f3/0x4ed
[17985.528669]  [<ffffffff81147820>] ? kmem_cache_free+0xfa/0x13a
[17985.545142]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
[17985.551650]  [<ffffffff81230f0e>] jbd2__journal_start+0xb0/0xf6
[17985.558732]  [<ffffffff811f7ad7>] ? ext4_dirty_inode+0x1d/0x4c
[17985.565716]  [<ffffffff81230f67>] jbd2_journal_start+0x13/0x15
[17985.572703]  [<ffffffff8120e3e9>] ext4_journal_start_sb+0x13f/0x157
[17985.580172]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
[17985.586680]  [<ffffffff811f7ad7>] ext4_dirty_inode+0x1d/0x4c
[17985.593472]  [<ffffffff81176827>] __mark_inode_dirty+0x2e/0x1cc
[17985.600552]  [<ffffffff81168e84>] file_update_time+0xe4/0x106
[17985.607441]  [<ffffffff811079f6>] __generic_file_aio_write+0x254/0x364
[17985.615202]  [<ffffffff81a565da>] ? mutex_lock_nested+0x2e4/0x2f3
[17985.622488]  [<ffffffff81107b50>] ? generic_file_aio_write+0x4a/0xc1
[17985.630057]  [<ffffffff81107b6c>] generic_file_aio_write+0x66/0xc1
[17985.637442]  [<ffffffff811ef72b>] ext4_file_write+0x1f9/0x251
[17985.644330]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
[17985.650835]  [<ffffffff8118809e>] ? fsnotify+0x222/0x27b
[17985.657238]  [<ffffffff81153612>] do_sync_write+0xce/0x10b
[17985.663844]  [<ffffffff8118809e>] ? fsnotify+0x222/0x27b
[17985.670243]  [<ffffffff81187ef8>] ? fsnotify+0x7c/0x27b
[17985.676561]  [<ffffffff81153dbe>] vfs_write+0xb8/0x157
[17985.682767]  [<ffffffff81154075>] sys_write+0x4d/0x77
[17985.688878]  [<ffffffff81a5fce9>] system_call_fastpath+0x16/0x1b

and jbd2 in

[17983.623657] jbd2/sda1-8     D 0000000000000000  5464 51294      2 0x00000000
[17983.632173]  ffff88040b097c70 0000000000000046 ffff880823032310 ffff88040b096000
[17983.641640]  00000000001d2f00 00000000001d2f00 ffff8808230322a0 00000000001d2f00
[17983.651119]  ffff88040b097fd8 00000000001d2f00 ffff88040b097fd8 00000000001d2f00
[17983.660603] Call Trace:
[17983.663808]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
[17983.670997]  [<ffffffff81232aab>] ? jbd2_journal_commit_transaction+0x1d0/0x1281
[17983.680124]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
[17983.686638]  [<ffffffff810b0ddd>] ? lock_release_holdtime+0xa3/0xac
[17983.694108]  [<ffffffff81232aab>] ? jbd2_journal_commit_transaction+0x1d0/0x1281
[17983.703243]  [<ffffffff81a57904>] schedule+0x5a/0x5c
[17983.709262]  [<ffffffff81232ab0>] jbd2_journal_commit_transaction+0x1d5/0x1281
[17983.718195]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
[17983.725392]  [<ffffffff810738ce>] ? try_to_del_timer_sync+0xba/0xc8
[17983.732867]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
[17983.739374]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
[17983.745864]  [<ffffffff810738ce>] ? try_to_del_timer_sync+0xba/0xc8
[17983.753343]  [<ffffffff81073a6f>] ? del_timer_sync+0xbb/0xce
[17983.760137]  [<ffffffff810739b4>] ? process_timeout+0x10/0x10
[17983.767041]  [<ffffffff81237bc1>] kjournald2+0xcf/0x242
[17983.773361]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
[17983.779863]  [<ffffffff81237af2>] ? commit_timeout+0x10/0x10
[17983.786665]  [<ffffffff81086384>] kthread+0x95/0x9d
[17983.792585]  [<ffffffff81a61134>] kernel_thread_helper+0x4/0x10
[17983.799670]  [<ffffffff81a591b4>] ? retint_restore_args+0x13/0x13
[17983.806948]  [<ffffffff810862ef>] ? __init_kthread_worker+0x5b/0x5b

Here is the updated patch used in the new tests. It moves
congestion_wait() out of the page lock and makes flush_inode_page() no
longer wait for memory allocation (that wait looks unnecessary).
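
The ordering change is roughly the following (a sketch only, not the actual
hunk; the surrounding reclaim-scan details are omitted and the function name
is made up):

#include <linux/page-flags.h>
#include <linux/pagemap.h>
#include <linux/backing-dev.h>

/* Sketch: seeing PG_reclaim again means the scan is outrunning the I/O. */
static void pageout_throttle_sketch(struct page *page)
{
	bool nap = PageReclaim(page);	/* checked while the page is locked */

	unlock_page(page);		/* never sleep with the page lock held */
	if (nap)
		congestion_wait(BLK_RW_ASYNC, HZ / 10);
}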

Thanks,
Fengguang
---
Subject: writeback: introduce the pageout work
Date: Thu Jul 29 14:41:19 CST 2010

This relays file pageout IOs to the flusher threads.

The ultimate target is to gracefully handle the LRU lists full of
dirty/writeback pages.

1) I/O efficiency

The flusher will piggyback the nearby ~10ms worth of dirty pages for I/O.

This takes advantage of the time/spatial locality in most workloads: the
nearby pages of one file are typically populated into the LRU at the same
time, hence are likely to be close to each other in the LRU list. Writing
them in one shot helps clean more pages effectively for page reclaim.

2) OOM avoidance and scan rate control

Typically we do the LRU scan without rate control and quickly get enough
clean pages when the LRU lists are not full of dirty pages.

Or we can still get a number of freshly cleaned pages (moved to the LRU tail
by end_page_writeback()) when the queued pageout I/O is completed within
tens of milliseconds.

However, if the LRU list is small and full of dirty pages, it can be fully
scanned very quickly and we can go OOM before the flusher manages to clean
enough pages.

A simple yet reliable scheme is employed to avoid OOM and to keep the scan
rate in sync with the I/O rate:

	if (PageReclaim(page))
		congestion_wait(HZ/10);

PG_reclaim plays the key role. When a dirty page is encountered, we
queue I/O for it, set PG_reclaim and put it back at the LRU head.
So if a PG_reclaim page is encountered again, it means the dirty page
has not yet been cleaned by the flusher after a full zone scan. It
indicates we are scanning faster than the I/O can complete and should
take a nap.

The runtime behavior on a fully dirtied small LRU list would be:
It will start with a quick scan of the list, queuing all pages for I/O.
Then the scan will be slowed down by the PG_reclaim pages *adaptively*
to match the I/O bandwidth.

3) writeback work coordinations

To avoid memory allocations in the page reclaim path, a mempool for struct
wb_writeback_work is created.

The wakeup_flusher_threads() call is removed because it can easily delay the
more targeted pageout works and even exhaust the mempool reservations.
It's also found to be I/O inefficient, because it frequently submits
writeback works with small ->nr_pages.

Background/periodic works will quit automatically, so that the pages under
reclaim get cleaned ASAP. However, for now the sync work can still block
us for a long time.

Jan Kara: limit the search scope. Note that the limited search depth and
work pool size are not a big problem: 1000 IOs in flight are typically more
than enough to saturate the disk. And the overhead of searching the work
list didn't even show up in the perf report.

4) test case

Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):

	mkdir /cgroup/x
	echo 100M > /cgroup/x/memory.limit_in_bytes
	echo $$ > /cgroup/x/tasks

	for i in `seq 2`
	do
		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
	done

Before the patch, the dd tasks are quickly OOM killed.
After the patch, they run well, with reasonably good performance and overheads:

1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s

iostat -kx 1

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00  178.00     0.00 89568.00  1006.38    74.35  417.71   4.80  85.40
sda               0.00     2.00    0.00  191.00     0.00 94428.00   988.77    53.34  219.03   4.34  82.90
sda               0.00    20.00    0.00  196.00     0.00 97712.00   997.06    71.11  337.45   4.77  93.50
sda               0.00     5.00    0.00  175.00     0.00 84648.00   967.41    54.03  316.44   5.06  88.60
sda               0.00     0.00    0.00  186.00     0.00 92432.00   993.89    56.22  267.54   5.38 100.00
sda               0.00     1.00    0.00  183.00     0.00 90156.00   985.31    37.99  325.55   4.33  79.20
sda               0.00     0.00    0.00  175.00     0.00 88692.00  1013.62    48.70  218.43   4.69  82.10
sda               0.00     0.00    0.00  196.00     0.00 97528.00   995.18    43.38  236.87   5.10 100.00
sda               0.00     0.00    0.00  179.00     0.00 88648.00   990.48    45.83  285.43   5.59 100.00
sda               0.00     0.00    0.00  178.00     0.00 88500.00   994.38    28.28  158.89   4.99  88.80
sda               0.00     0.00    0.00  194.00     0.00 95852.00   988.16    32.58  167.39   5.15 100.00
sda               0.00     2.00    0.00  215.00     0.00 105996.00   986.01    41.72  201.43   4.65 100.00
sda               0.00     4.00    0.00  173.00     0.00 84332.00   974.94    50.48  260.23   5.76  99.60
sda               0.00     0.00    0.00  182.00     0.00 90312.00   992.44    36.83  212.07   5.49 100.00
sda               0.00     8.00    0.00  195.00     0.00 95940.50   984.01    50.18  221.06   5.13 100.00
sda               0.00     1.00    0.00  220.00     0.00 108852.00   989.56    40.99  202.68   4.55 100.00
sda               0.00     2.00    0.00  161.00     0.00 80384.00   998.56    37.19  268.49   6.21 100.00
sda               0.00     4.00    0.00  182.00     0.00 90830.00   998.13    50.58  239.77   5.49 100.00
sda               0.00     0.00    0.00  197.00     0.00 94877.00   963.22    36.68  196.79   5.08 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00   15.08   33.92    0.00   50.75
           0.25    0.00   14.54   35.09    0.00   50.13
           0.50    0.00   13.57   32.41    0.00   53.52
           0.50    0.00   11.28   36.84    0.00   51.38
           0.50    0.00   15.75   32.00    0.00   51.75
           0.50    0.00   10.50   34.00    0.00   55.00
           0.50    0.00   17.63   27.46    0.00   54.41
           0.50    0.00   15.08   30.90    0.00   53.52
           0.50    0.00   11.28   32.83    0.00   55.39
           0.75    0.00   16.79   26.82    0.00   55.64
           0.50    0.00   16.08   29.15    0.00   54.27
           0.50    0.00   13.50   30.50    0.00   55.50
           0.50    0.00   14.32   35.18    0.00   50.00
           0.50    0.00   12.06   33.92    0.00   53.52
           0.50    0.00   17.29   30.58    0.00   51.63
           0.50    0.00   15.08   29.65    0.00   54.77
           0.50    0.00   12.53   29.32    0.00   57.64
           0.50    0.00   15.29   31.83    0.00   52.38

The global dd numbers for comparison:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   143.09  684.48   5.29 100.00
sda               0.00     0.00    0.00  208.00     0.00 105480.00  1014.23   143.06  733.29   4.81 100.00
sda               0.00     0.00    0.00  161.00     0.00 81924.00  1017.69   141.71  757.79   6.21 100.00
sda               0.00     0.00    0.00  217.00     0.00 109580.00  1009.95   143.09  749.55   4.61 100.10
sda               0.00     0.00    0.00  187.00     0.00 94728.00  1013.13   144.31  773.67   5.35 100.00
sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   144.14  742.00   5.29 100.00
sda               0.00     0.00    0.00  177.00     0.00 90032.00  1017.31   143.32  656.59   5.65 100.00
sda               0.00     0.00    0.00  215.00     0.00 108640.00  1010.60   142.90  817.54   4.65 100.00
sda               0.00     2.00    0.00  166.00     0.00 83858.00  1010.34   143.64  808.61   6.02 100.00
sda               0.00     0.00    0.00  186.00     0.00 92813.00   997.99   141.18  736.95   5.38 100.00
sda               0.00     0.00    0.00  206.00     0.00 104456.00  1014.14   146.27  729.33   4.85 100.00
sda               0.00     0.00    0.00  213.00     0.00 107024.00  1004.92   143.25  705.70   4.69 100.00
sda               0.00     0.00    0.00  188.00     0.00 95748.00  1018.60   141.82  764.78   5.32 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.51    0.00   11.22   52.30    0.00   35.97
           0.25    0.00   10.15   52.54    0.00   37.06
           0.25    0.00    5.01   56.64    0.00   38.10
           0.51    0.00   15.15   43.94    0.00   40.40
           0.25    0.00   12.12   48.23    0.00   39.39
           0.51    0.00   11.20   53.94    0.00   34.35
           0.26    0.00    9.72   51.41    0.00   38.62
           0.76    0.00    9.62   50.63    0.00   38.99
           0.51    0.00   10.46   53.32    0.00   35.71
           0.51    0.00    9.41   51.91    0.00   38.17
           0.25    0.00   10.69   49.62    0.00   39.44
           0.51    0.00   12.21   52.67    0.00   34.61
           0.51    0.00   11.45   53.18    0.00   34.86

XXX: commit NFS unstable pages via write_inode()
XXX: the added congestion_wait() may be undesirable in some situations

CC: Jan Kara <jack@suse.cz>
CC: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
CC: Greg Thelen <gthelen@google.com>
CC: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c                |  169 ++++++++++++++++++++++++++++-
 include/linux/writeback.h        |    4 
 include/trace/events/writeback.h |   12 +-
 mm/vmscan.c                      |   35 ++++--
 4 files changed, 202 insertions(+), 18 deletions(-)

- move congestion_wait() out of the page lock: it's blocking btrfs lock_delalloc_pages()

--- linux.orig/mm/vmscan.c	2012-02-12 21:27:28.000000000 +0800
+++ linux/mm/vmscan.c	2012-02-13 12:14:20.000000000 +0800
@@ -767,7 +767,8 @@ static unsigned long shrink_page_list(st
 				      struct scan_control *sc,
 				      int priority,
 				      unsigned long *ret_nr_dirty,
-				      unsigned long *ret_nr_writeback)
+				      unsigned long *ret_nr_writeback,
+				      unsigned long *ret_nr_pgreclaim)
 {
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
@@ -776,6 +777,7 @@ static unsigned long shrink_page_list(st
 	unsigned long nr_congested = 0;
 	unsigned long nr_reclaimed = 0;
 	unsigned long nr_writeback = 0;
+	unsigned long nr_pgreclaim = 0;
 
 	cond_resched();
 
@@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
 
 		if (PageWriteback(page)) {
 			nr_writeback++;
+			if (PageReclaim(page))
+				nr_pgreclaim++;
+			else
+				SetPageReclaim(page);
 			/*
 			 * Synchronous reclaim cannot queue pages for
 			 * writeback due to the possibility of stack overflow
@@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
 			nr_dirty++;
 
 			/*
-			 * Only kswapd can writeback filesystem pages to
-			 * avoid risk of stack overflow but do not writeback
-			 * unless under significant pressure.
+			 * run into the visited page again: we are scanning
+			 * faster than the flusher can writeout dirty pages
 			 */
-			if (page_is_file_cache(page) &&
-					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
+			if (page_is_file_cache(page) && PageReclaim(page)) {
+				nr_pgreclaim++;
+				goto keep_locked;
+			}
+			if (page_is_file_cache(page) && mapping &&
+			    flush_inode_page(mapping, page, false) >= 0) {
 				/*
 				 * Immediately reclaim when written back.
 				 * Similar in principal to deactivate_page()
@@ -1028,6 +1037,7 @@ keep_lumpy:
 	count_vm_events(PGACTIVATE, pgactivate);
 	*ret_nr_dirty += nr_dirty;
 	*ret_nr_writeback += nr_writeback;
+	*ret_nr_pgreclaim += nr_pgreclaim;
 	return nr_reclaimed;
 }
 
@@ -1087,8 +1097,10 @@ int __isolate_lru_page(struct page *page
 	 */
 	if (mode & (ISOLATE_CLEAN|ISOLATE_ASYNC_MIGRATE)) {
 		/* All the caller can do on PageWriteback is block */
-		if (PageWriteback(page))
+		if (PageWriteback(page)) {
+			SetPageReclaim(page);
 			return ret;
+		}
 
 		if (PageDirty(page)) {
 			struct address_space *mapping;
@@ -1509,6 +1521,7 @@ shrink_inactive_list(unsigned long nr_to
 	unsigned long nr_file;
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
+	unsigned long nr_pgreclaim = 0;
 	isolate_mode_t reclaim_mode = ISOLATE_INACTIVE;
 	struct zone *zone = mz->zone;
 
@@ -1559,13 +1572,13 @@ shrink_inactive_list(unsigned long nr_to
 	spin_unlock_irq(&zone->lru_lock);
 
 	nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
-						&nr_dirty, &nr_writeback);
+				&nr_dirty, &nr_writeback, &nr_pgreclaim);
 
 	/* Check if we should syncronously wait for writeback */
 	if (should_reclaim_stall(nr_taken, nr_reclaimed, priority, sc)) {
 		set_reclaim_mode(priority, sc, true);
 		nr_reclaimed += shrink_page_list(&page_list, mz, sc,
-					priority, &nr_dirty, &nr_writeback);
+			priority, &nr_dirty, &nr_writeback, &nr_pgreclaim);
 	}
 
 	spin_lock_irq(&zone->lru_lock);
@@ -1608,6 +1621,8 @@ shrink_inactive_list(unsigned long nr_to
 	 */
 	if (nr_writeback && nr_writeback >= (nr_taken >> (DEF_PRIORITY-priority)))
 		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
+	if (nr_pgreclaim)
+		congestion_wait(BLK_RW_ASYNC, HZ/10);
 
 	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
 		zone_idx(zone),
@@ -2382,8 +2397,6 @@ static unsigned long do_try_to_free_page
 		 */
 		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
 		if (total_scanned > writeback_threshold) {
-			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
-						WB_REASON_TRY_TO_FREE_PAGES);
 			sc->may_writepage = 1;
 		}
 
--- linux.orig/fs/fs-writeback.c	2012-02-12 21:27:28.000000000 +0800
+++ linux/fs/fs-writeback.c	2012-02-13 12:15:50.000000000 +0800
@@ -41,6 +41,8 @@ struct wb_writeback_work {
 	long nr_pages;
 	struct super_block *sb;
 	unsigned long *older_than_this;
+	struct inode *inode;
+	pgoff_t offset;
 	enum writeback_sync_modes sync_mode;
 	unsigned int tagged_writepages:1;
 	unsigned int for_kupdate:1;
@@ -65,6 +67,27 @@ struct wb_writeback_work {
  */
 int nr_pdflush_threads;
 
+static mempool_t *wb_work_mempool;
+
+static void *wb_work_alloc(gfp_t gfp_mask, void *pool_data)
+{
+	/*
+	 * bdi_flush_inode_range() may be called on page reclaim
+	 */
+	if (current->flags & PF_MEMALLOC)
+		return NULL;
+
+	return kmalloc(sizeof(struct wb_writeback_work), gfp_mask);
+}
+
+static __init int wb_work_init(void)
+{
+	wb_work_mempool = mempool_create(1024,
+					 wb_work_alloc, mempool_kfree, NULL);
+	return wb_work_mempool ? 0 : -ENOMEM;
+}
+fs_initcall(wb_work_init);
+
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @bdi: the device's backing_dev_info structure.
@@ -129,7 +152,7 @@ __bdi_start_writeback(struct backing_dev
 	 * This is WB_SYNC_NONE writeback, so if allocation fails just
 	 * wakeup the thread for old dirty data writeback
 	 */
-	work = kzalloc(sizeof(*work), GFP_ATOMIC);
+	work = mempool_alloc(wb_work_mempool, GFP_NOWAIT);
 	if (!work) {
 		if (bdi->wb.task) {
 			trace_writeback_nowork(bdi);
@@ -138,6 +161,7 @@ __bdi_start_writeback(struct backing_dev
 		return;
 	}
 
+	memset(work, 0, sizeof(*work));
 	work->sync_mode	= WB_SYNC_NONE;
 	work->nr_pages	= nr_pages;
 	work->range_cyclic = range_cyclic;
@@ -186,6 +210,125 @@ void bdi_start_background_writeback(stru
 	spin_unlock_bh(&bdi->wb_lock);
 }
 
+static bool extend_writeback_range(struct wb_writeback_work *work,
+				   pgoff_t offset,
+				   unsigned long write_around_pages)
+{
+	pgoff_t end = work->offset + work->nr_pages;
+
+	if (offset >= work->offset && offset < end)
+		return true;
+
+	/*
+	 * for sequential workloads with good locality, include up to 8 times
+	 * more data in one chunk
+	 */
+	if (work->nr_pages >= 8 * write_around_pages)
+		return false;
+
+	/* the unsigned comparison helps eliminate one compare */
+	if (work->offset - offset < write_around_pages) {
+		work->nr_pages += write_around_pages;
+		work->offset -= write_around_pages;
+		return true;
+	}
+
+	if (offset - end < write_around_pages) {
+		work->nr_pages += write_around_pages;
+		return true;
+	}
+
+	return false;
+}
+
+/*
+ * schedule writeback on a range of inode pages.
+ */
+static struct wb_writeback_work *
+bdi_flush_inode_range(struct backing_dev_info *bdi,
+		      struct inode *inode,
+		      pgoff_t offset,
+		      pgoff_t len,
+		      bool wait)
+{
+	struct wb_writeback_work *work;
+
+	if (!igrab(inode))
+		return ERR_PTR(-ENOENT);
+
+	work = mempool_alloc(wb_work_mempool, wait ? GFP_NOIO : GFP_NOWAIT);
+	if (!work) {
+		trace_printk("wb_work_mempool alloc fail\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	memset(work, 0, sizeof(*work));
+	work->sync_mode		= WB_SYNC_NONE;
+	work->inode		= inode;
+	work->offset		= offset;
+	work->nr_pages		= len;
+	work->reason		= WB_REASON_PAGEOUT;
+
+	bdi_queue_work(bdi, work);
+
+	return work;
+}
+
+/*
+ * Called by page reclaim code to flush the dirty page ASAP. Do write-around to
+ * improve IO throughput. The nearby pages will have good chance to reside in
+ * the same LRU list that vmscan is working on, and even close to each other
+ * inside the LRU list in the common case of sequential read/write.
+ *
+ * ret > 0: success, found/reused a previous writeback work
+ * ret = 0: success, allocated/queued a new writeback work
+ * ret < 0: failed
+ */
+long flush_inode_page(struct address_space *mapping,
+		      struct page *page,
+		      bool wait)
+{
+	struct backing_dev_info *bdi = mapping->backing_dev_info;
+	struct inode *inode = mapping->host;
+	struct wb_writeback_work *work;
+	unsigned long write_around_pages;
+	pgoff_t offset = page->index;
+	int i;
+	long ret = 0;
+
+	if (unlikely(!inode))
+		return -ENOENT;
+
+	/*
+	 * piggy back 8-15ms worth of data
+	 */
+	write_around_pages = bdi->avg_write_bandwidth + MIN_WRITEBACK_PAGES;
+	write_around_pages = rounddown_pow_of_two(write_around_pages) >> 6;
+
+	i = 1;
+	spin_lock_bh(&bdi->wb_lock);
+	list_for_each_entry_reverse(work, &bdi->work_list, list) {
+		if (work->inode != inode)
+			continue;
+		if (extend_writeback_range(work, offset, write_around_pages)) {
+			ret = i;
+			break;
+		}
+		if (i++ > 100)	/* limit search depth */
+			break;
+	}
+	spin_unlock_bh(&bdi->wb_lock);
+
+	if (!ret) {
+		offset = round_down(offset, write_around_pages);
+		work = bdi_flush_inode_range(bdi, inode,
+					     offset, write_around_pages, wait);
+		if (IS_ERR(work))
+			ret = PTR_ERR(work);
+	}
+	return ret;
+}
+
 /*
  * Remove the inode from the writeback list it is on.
  */
@@ -833,6 +976,23 @@ static unsigned long get_nr_dirty_pages(
 		get_nr_dirty_inodes();
 }
 
+static long wb_flush_inode(struct bdi_writeback *wb,
+			   struct wb_writeback_work *work)
+{
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_NONE,
+		.nr_to_write = LONG_MAX,
+		.range_start = work->offset << PAGE_CACHE_SHIFT,
+		.range_end = (work->offset + work->nr_pages - 1)
+						<< PAGE_CACHE_SHIFT,
+	};
+
+	do_writepages(work->inode->i_mapping, &wbc);
+	iput(work->inode);
+
+	return LONG_MAX - wbc.nr_to_write;
+}
+
 static long wb_check_background_flush(struct bdi_writeback *wb)
 {
 	if (over_bground_thresh(wb->bdi)) {
@@ -905,7 +1065,10 @@ long wb_do_writeback(struct bdi_writebac
 
 		trace_writeback_exec(bdi, work);
 
-		wrote += wb_writeback(wb, work);
+		if (work->inode)
+			wrote += wb_flush_inode(wb, work);
+		else
+			wrote += wb_writeback(wb, work);
 
 		/*
 		 * Notify the caller of completion if this is a synchronous
@@ -914,7 +1077,7 @@ long wb_do_writeback(struct bdi_writebac
 		if (work->done)
 			complete(work->done);
 		else
-			kfree(work);
+			mempool_free(work, wb_work_mempool);
 	}
 
 	/*
--- linux.orig/include/trace/events/writeback.h	2012-02-12 21:27:33.000000000 +0800
+++ linux/include/trace/events/writeback.h	2012-02-12 21:27:34.000000000 +0800
@@ -23,7 +23,7 @@
 
 #define WB_WORK_REASON							\
 		{WB_REASON_BACKGROUND,		"background"},		\
-		{WB_REASON_TRY_TO_FREE_PAGES,	"try_to_free_pages"},	\
+		{WB_REASON_PAGEOUT,		"pageout"},		\
 		{WB_REASON_SYNC,		"sync"},		\
 		{WB_REASON_PERIODIC,		"periodic"},		\
 		{WB_REASON_LAPTOP_TIMER,	"laptop_timer"},	\
@@ -45,6 +45,8 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		__field(int, range_cyclic)
 		__field(int, for_background)
 		__field(int, reason)
+		__field(unsigned long, ino)
+		__field(unsigned long, offset)
 	),
 	TP_fast_assign(
 		strncpy(__entry->name, dev_name(bdi->dev), 32);
@@ -55,9 +57,11 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		__entry->range_cyclic = work->range_cyclic;
 		__entry->for_background	= work->for_background;
 		__entry->reason = work->reason;
+		__entry->ino = work->inode ? work->inode->i_ino : 0;
+		__entry->offset = work->offset;
 	),
 	TP_printk("bdi %s: sb_dev %d:%d nr_pages=%ld sync_mode=%d "
-		  "kupdate=%d range_cyclic=%d background=%d reason=%s",
+		  "kupdate=%d range_cyclic=%d background=%d reason=%s ino=%lu offset=%lu",
 		  __entry->name,
 		  MAJOR(__entry->sb_dev), MINOR(__entry->sb_dev),
 		  __entry->nr_pages,
@@ -65,7 +69,9 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		  __entry->for_kupdate,
 		  __entry->range_cyclic,
 		  __entry->for_background,
-		  __print_symbolic(__entry->reason, WB_WORK_REASON)
+		  __print_symbolic(__entry->reason, WB_WORK_REASON),
+		  __entry->ino,
+		  __entry->offset
 	)
 );
 #define DEFINE_WRITEBACK_WORK_EVENT(name) \
--- linux.orig/include/linux/writeback.h	2012-02-12 21:27:28.000000000 +0800
+++ linux/include/linux/writeback.h	2012-02-12 21:27:34.000000000 +0800
@@ -40,7 +40,7 @@ enum writeback_sync_modes {
  */
 enum wb_reason {
 	WB_REASON_BACKGROUND,
-	WB_REASON_TRY_TO_FREE_PAGES,
+	WB_REASON_PAGEOUT,
 	WB_REASON_SYNC,
 	WB_REASON_PERIODIC,
 	WB_REASON_LAPTOP_TIMER,
@@ -94,6 +94,8 @@ long writeback_inodes_wb(struct bdi_writ
 				enum wb_reason reason);
 long wb_do_writeback(struct bdi_writeback *wb, int force_wait);
 void wakeup_flusher_threads(long nr_pages, enum wb_reason reason);
+long flush_inode_page(struct address_space *mapping, struct page *page,
+		      bool wait);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)

[-- Attachment #2: global_dirtied_written.png --]
[-- Type: image/png, Size: 37386 bytes --]

[-- Attachment #3: iostat --]
[-- Type: text/plain, Size: 6586 bytes --]

Linux 3.3.0-rc3-flush-page+ (snb) 	02/14/2012 	_x86_64_	(32 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.03    0.00    0.05    0.38    0.00   99.54

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.20    33.00    0.05    2.91     0.16  1025.66   694.73     1.50  508.37   6.10   1.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.90    0.00    5.72    0.00    0.00   93.38

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    1.00    0.00     8.00     0.00    16.00     0.00    1.00   1.00   0.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.90    0.00    5.82    0.00    0.00   93.28

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.90    0.00    5.81    0.00    0.00   93.29

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.83    0.00    5.84    0.00    0.00   93.33

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.83    0.00    5.94    0.00    0.00   93.23

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   370.00    0.00   71.00     0.00 12480.00   351.55    11.17   71.76   3.56  25.30

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.80    0.00    5.84    0.00    0.00   93.35

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    65.00    0.00   51.00     0.00 13104.00   513.88     2.85  170.22   4.00  20.40

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.87    0.00    5.93    0.00    0.00   93.20

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   379.00    0.00  134.00     0.00 30004.00   447.82    15.43  116.99   4.11  55.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.90    0.00    5.91    0.00    0.00   93.19

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   133.00    0.00  114.00     0.00 12792.00   224.42     3.20   28.12   2.30  26.20

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.83    0.00    5.96    0.00    0.00   93.21

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   367.00    0.00  114.00     0.00 24960.00   437.89    12.55  110.12   4.35  49.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.80    0.00    6.03    0.03    0.00   93.14

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   205.00    0.00  163.00     0.00 16900.00   207.36     4.09   21.63   1.88  30.70

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.87    0.00    5.98    0.00    0.00   93.16

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   219.00    0.00  182.00     0.00 19292.00   212.00     4.42   21.41   1.97  35.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.93    0.00    5.87    0.00    0.00   93.20

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   201.00    0.00  135.00     0.00 21216.00   314.31     6.39   55.44   2.97  40.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.93    0.00    5.99    0.00    0.00   93.08

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   329.00    0.00  166.00     0.00 24336.00   293.20    10.42   58.39   3.19  53.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.80    0.00    5.96    0.00    0.00   93.24

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   258.00    0.00  181.00     0.00 16848.00   186.17     4.01   17.64   1.36  24.70

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.90    0.00    5.92    0.00    0.00   93.18

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   166.00    0.00   88.00     0.00 20488.00   465.64     6.09   86.72   4.68  41.20

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.86    0.00    6.05    0.00    0.00   93.08

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   566.00    0.00  194.00     0.00 34528.00   355.96    20.50   78.32   3.16  61.30

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.86    0.00    5.79    0.00    0.00   93.34

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00   12.00     0.00  5616.00   936.00     0.77  506.67   9.58  11.50

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.90    0.00    5.93    0.00    0.00   93.18

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   259.00    0.00  173.00     0.00 22464.00   259.70     4.79   27.67   2.09  36.10

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.74    0.00    6.16    0.00    0.00   93.10

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00   190.00    0.00  163.00     0.00 18304.00   224.59     3.67   22.50   1.78  29.00


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-11 12:44       ` reclaim the LRU lists full of dirty/writeback pages Wu Fengguang
  2012-02-11 14:55         ` Rik van Riel
@ 2012-02-14 10:19         ` Mel Gorman
  2012-02-14 13:18           ` Wu Fengguang
  1 sibling, 1 reply; 33+ messages in thread
From: Mel Gorman @ 2012-02-14 10:19 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Ying Han, hannes, KAMEZAWA Hiroyuki, Rik van Riel,
	Minchan Kim

On Sat, Feb 11, 2012 at 08:44:45PM +0800, Wu Fengguang wrote:
> <SNIP>
> --- linux.orig/mm/vmscan.c	2012-02-03 21:42:21.000000000 +0800
> +++ linux/mm/vmscan.c	2012-02-11 17:28:54.000000000 +0800
> @@ -813,6 +813,8 @@ static unsigned long shrink_page_list(st
>  
>  		if (PageWriteback(page)) {
>  			nr_writeback++;
> +			if (PageReclaim(page))
> +				congestion_wait(BLK_RW_ASYNC, HZ/10);
>  			/*
>  			 * Synchronous reclaim cannot queue pages for
>  			 * writeback due to the possibility of stack overflow

I didn't look closely at the rest of the patch, I'm just focusing on the
congestion_wait part. You called this out yourself but this is in fact
really really bad. If this is in place and a user copies a large amount of
data to slow storage like a USB stick, the system will stall severely. A
parallel streaming reader will certainly have major issues as it will enter
page reclaim, find a bunch of dirty USB-backed pages at the end of the LRU
(20% of memory potentially) and stall for HZ/10 on each one of them. How
badly each process is affected will vary.

For the OOM problem, a more reasonable stopgap might be to identify when
a process scanning a memcg at high priority has encountered nothing but
PageReclaim pages with no forward progress, and to congestion_wait() if that
situation occurs. A preferable way would be to wait until the flusher
wakes up a waiter on PageReclaim pages to be written out because we want
to keep moving away from congestion_wait() if at all possible.

Another possibility would be to relook at LRU_IMMEDIATE but right now it
requires a page flag and I haven't devised a way around that. Besides,
it would only address the problem of PageReclaim pages being encountered,
it would not handle the case where a memcg was filled with PageReclaim pages.


-- 
Mel Gorman
SUSE Labs


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-14 10:19         ` Mel Gorman
@ 2012-02-14 13:18           ` Wu Fengguang
  2012-02-14 13:35             ` Wu Fengguang
                               ` (2 more replies)
  0 siblings, 3 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-14 13:18 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Ying Han, hannes, KAMEZAWA Hiroyuki, Rik van Riel,
	Minchan Kim

On Tue, Feb 14, 2012 at 10:19:31AM +0000, Mel Gorman wrote:
> On Sat, Feb 11, 2012 at 08:44:45PM +0800, Wu Fengguang wrote:
> > <SNIP>
> > --- linux.orig/mm/vmscan.c	2012-02-03 21:42:21.000000000 +0800
> > +++ linux/mm/vmscan.c	2012-02-11 17:28:54.000000000 +0800
> > @@ -813,6 +813,8 @@ static unsigned long shrink_page_list(st
> >  
> >  		if (PageWriteback(page)) {
> >  			nr_writeback++;
> > +			if (PageReclaim(page))
> > +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> >  			/*
> >  			 * Synchronous reclaim cannot queue pages for
> >  			 * writeback due to the possibility of stack overflow
> 
> I didn't look closely at the rest of the patch, I'm just focusing on the
> congestion_wait part. You called this out yourself but this is in fact
> really really bad. If this is in place and a user copies a large amount of
> data to slow storage like a USB stick, the system will stall severely. A
> parallel streaming reader will certainly have major issues as it will enter
> page reclaim, find a bunch of dirty USB-backed pages at the end of the LRU
> (20% of memory potentially) and stall for HZ/10 on each one of them. How
> badly each process is affected will vary.

Couldn't agree more on the principle... I just wanted to demonstrate the
idea first :-)
 
> For the OOM problem, a more reasonable stopgap might be to identify when
> a process is scanning a memcg at high priority and encountered all
> PageReclaim with no forward progress and to congestion_wait() if that
> situation occurs. A preferable way would be to wait until the flusher
> wakes up a waiter on PageReclaim pages to be written out because we want
> to keep moving away from congestion_wait() if at all possible.

Good points! Below are the more serious page reclaim changes.

Dirty/writeback pages may often sit close to each other in the LRU
list, so the local test during a 32-page scan may still trigger
reclaim waits unnecessarily. Some global information on the percentage
of dirty/writeback pages in the LRU list may help; a possible check is
sketched below. Anyway, the added tests should still be much better
than no protection.

A global wait queue and reclaim_wait() are introduced. The waiters will
be woken up when pages are rotated by end_page_writeback() or the LRU drain.

I have to say its effectiveness depends on the filesystem... ext4
and btrfs complete IO in a steady stream, so reclaim_wait() works pretty
well:
              dd-14560 [017] ....  1360.894605: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
              dd-14560 [017] ....  1360.904456: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=8000
              dd-14560 [017] ....  1360.908293: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
              dd-14560 [017] ....  1360.923960: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
              dd-14560 [017] ....  1360.927810: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
              dd-14560 [017] ....  1360.931656: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
              dd-14560 [017] ....  1360.943503: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
              dd-14560 [017] ....  1360.953289: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=7000
              dd-14560 [017] ....  1360.957177: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
              dd-14560 [017] ....  1360.972949: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000

However, XFS does IO completions in very large batches (there may be
only a few big IO completions per second), so reclaim_wait() mostly
ends up waiting for the full HZ/10 timeout:

              dd-4177  [008] ....   866.367661: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [010] ....   866.567583: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [012] ....   866.767458: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [013] ....   866.867419: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [008] ....   867.167266: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [010] ....   867.367168: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [012] ....   867.818950: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [013] ....   867.918905: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [013] ....   867.971657: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
              dd-4177  [013] ....   867.971812: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=0
              dd-4177  [008] ....   868.355700: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
              dd-4177  [010] ....   868.700515: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000

> Another possibility would be to relook at LRU_IMMEDIATE but right now it
> requires a page flag and I haven't devised a way around that. Besides,
> it would only address the problem of PageReclaim pages being encountered,
> it would not handle the case where a memcg was filled with PageReclaim pages.

I also considered things like LRU_IMMEDIATE, however I've got no clear idea yet.
Since the simple "wait on PG_reclaim" approach appears to work for this
memcg dd case, it effectively keeps me from thinking any further ;-)

For the single dd inside the memcg, ext4 is now working pretty well, with
the least CPU overhead:

(running from another test box, so not directly comparable with old tests)

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.03    0.00    0.85    5.35    0.00   93.77

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00  112.00     0.00 57348.00  1024.07    81.66 1045.21   8.93 100.00

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.00    0.00    0.69    4.07    0.00   95.24

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00   142.00    0.00  112.00     0.00 56832.00  1014.86   127.94  790.04   8.93 100.00

And xfs is a bit less fluent:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.00    0.00    3.79    2.54    0.00   93.68

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00  108.00     0.00 54644.00  1011.93    48.13 1044.83   8.44  91.20

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.00    0.00    3.38    3.88    0.00   92.74

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00  105.00     0.00 53156.00  1012.50   128.50  451.90   9.25  97.10

btrfs also looks good:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.00    0.00    8.05    3.85    0.00   88.10

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00  108.00     0.00 53248.00   986.07    88.11  643.99   9.26 100.00

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.00    0.00    4.04    2.51    0.00   93.45

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00  112.00     0.00 57344.00  1024.00    91.58  998.41   8.93 100.00


Thanks,
Fengguang
---

--- linux.orig/include/linux/backing-dev.h	2012-02-14 19:43:06.000000000 +0800
+++ linux/include/linux/backing-dev.h	2012-02-14 19:49:26.000000000 +0800
@@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
 void set_bdi_congested(struct backing_dev_info *bdi, int sync);
 long congestion_wait(int sync, long timeout);
 long wait_iff_congested(struct zone *zone, int sync, long timeout);
+long reclaim_wait(long timeout);
+void reclaim_rotated(void);
 
 static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
 {
--- linux.orig/mm/backing-dev.c	2012-02-14 19:26:15.000000000 +0800
+++ linux/mm/backing-dev.c	2012-02-14 20:09:45.000000000 +0800
@@ -873,3 +873,38 @@ out:
 	return ret;
 }
 EXPORT_SYMBOL(wait_iff_congested);
+
+static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
+
+/**
+ * reclaim_wait - wait for some pages being rotated to the LRU tail
+ * @timeout: timeout in jiffies
+ *
+ * Wait until @timeout, or when some (typically PG_reclaim under writeback)
+ * pages rotated to the LRU so that page reclaim can make progress.
+ */
+long reclaim_wait(long timeout)
+{
+	long ret;
+	unsigned long start = jiffies;
+	DEFINE_WAIT(wait);
+
+	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
+	ret = io_schedule_timeout(timeout);
+	finish_wait(&reclaim_wqh, &wait);
+
+	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
+				     jiffies_to_usecs(jiffies - start));
+
+	return ret;
+}
+EXPORT_SYMBOL(reclaim_wait);
+
+void reclaim_rotated()
+{
+	wait_queue_head_t *wqh = &reclaim_wqh;
+
+	if (waitqueue_active(wqh))
+		wake_up(wqh);
+}
+
--- linux.orig/mm/swap.c	2012-02-14 19:40:10.000000000 +0800
+++ linux/mm/swap.c	2012-02-14 19:45:13.000000000 +0800
@@ -253,6 +253,7 @@ static void pagevec_move_tail(struct pag
 
 	pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, &pgmoved);
 	__count_vm_events(PGROTATED, pgmoved);
+	reclaim_rotated();
 }
 
 /*
--- linux.orig/mm/vmscan.c	2012-02-14 17:53:27.000000000 +0800
+++ linux/mm/vmscan.c	2012-02-14 19:44:11.000000000 +0800
@@ -767,7 +767,8 @@ static unsigned long shrink_page_list(st
 				      struct scan_control *sc,
 				      int priority,
 				      unsigned long *ret_nr_dirty,
-				      unsigned long *ret_nr_writeback)
+				      unsigned long *ret_nr_writeback,
+				      unsigned long *ret_nr_pgreclaim)
 {
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
@@ -776,6 +777,7 @@ static unsigned long shrink_page_list(st
 	unsigned long nr_congested = 0;
 	unsigned long nr_reclaimed = 0;
 	unsigned long nr_writeback = 0;
+	unsigned long nr_pgreclaim = 0;
 
 	cond_resched();
 
@@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
 
 		if (PageWriteback(page)) {
 			nr_writeback++;
+			if (PageReclaim(page))
+				nr_pgreclaim++;
+			else
+				SetPageReclaim(page);
 			/*
 			 * Synchronous reclaim cannot queue pages for
 			 * writeback due to the possibility of stack overflow
@@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
 			nr_dirty++;
 
 			/*
-			 * Only kswapd can writeback filesystem pages to
-			 * avoid risk of stack overflow but do not writeback
-			 * unless under significant pressure.
+			 * run into the visited page again: we are scanning
+			 * faster than the flusher can writeout dirty pages
 			 */
-			if (page_is_file_cache(page) &&
-					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
+			if (page_is_file_cache(page) && PageReclaim(page)) {
+				nr_pgreclaim++;
+				goto keep_locked;
+			}
+			if (page_is_file_cache(page) && mapping &&
+			    flush_inode_page(mapping, page, false) >= 0) {
 				/*
 				 * Immediately reclaim when written back.
 				 * Similar in principal to deactivate_page()
@@ -1028,6 +1037,7 @@ keep_lumpy:
 	count_vm_events(PGACTIVATE, pgactivate);
 	*ret_nr_dirty += nr_dirty;
 	*ret_nr_writeback += nr_writeback;
+	*ret_nr_pgreclaim += nr_pgreclaim;
 	return nr_reclaimed;
 }
 
@@ -1087,8 +1097,10 @@ int __isolate_lru_page(struct page *page
 	 */
 	if (mode & (ISOLATE_CLEAN|ISOLATE_ASYNC_MIGRATE)) {
 		/* All the caller can do on PageWriteback is block */
-		if (PageWriteback(page))
+		if (PageWriteback(page)) {
+			SetPageReclaim(page);
 			return ret;
+		}
 
 		if (PageDirty(page)) {
 			struct address_space *mapping;
@@ -1509,6 +1521,7 @@ shrink_inactive_list(unsigned long nr_to
 	unsigned long nr_file;
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
+	unsigned long nr_pgreclaim = 0;
 	isolate_mode_t reclaim_mode = ISOLATE_INACTIVE;
 	struct zone *zone = mz->zone;
 
@@ -1559,13 +1572,13 @@ shrink_inactive_list(unsigned long nr_to
 	spin_unlock_irq(&zone->lru_lock);
 
 	nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
-						&nr_dirty, &nr_writeback);
+				&nr_dirty, &nr_writeback, &nr_pgreclaim);
 
 	/* Check if we should syncronously wait for writeback */
 	if (should_reclaim_stall(nr_taken, nr_reclaimed, priority, sc)) {
 		set_reclaim_mode(priority, sc, true);
 		nr_reclaimed += shrink_page_list(&page_list, mz, sc,
-					priority, &nr_dirty, &nr_writeback);
+			priority, &nr_dirty, &nr_writeback, &nr_pgreclaim);
 	}
 
 	spin_lock_irq(&zone->lru_lock);
@@ -1608,6 +1621,8 @@ shrink_inactive_list(unsigned long nr_to
 	 */
 	if (nr_writeback && nr_writeback >= (nr_taken >> (DEF_PRIORITY-priority)))
 		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
+	if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)))
+		reclaim_wait(HZ/10);
 
 	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
 		zone_idx(zone),
@@ -2382,8 +2397,6 @@ static unsigned long do_try_to_free_page
 		 */
 		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
 		if (total_scanned > writeback_threshold) {
-			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
-						WB_REASON_TRY_TO_FREE_PAGES);
 			sc->may_writepage = 1;
 		}
 


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-14 10:03               ` Wu Fengguang
@ 2012-02-14 13:29                 ` Jan Kara
  2012-02-16  4:00                   ` Wu Fengguang
  0 siblings, 1 reply; 33+ messages in thread
From: Jan Kara @ 2012-02-14 13:29 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, Rik van Riel, Greg Thelen, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

On Tue 14-02-12 18:03:48, Wu Fengguang wrote:
> On Mon, Feb 13, 2012 at 04:43:13PM +0100, Jan Kara wrote:
> > On Sun 12-02-12 11:10:29, Wu Fengguang wrote:
> 
> > > 4) test case
> > > 
> > > Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):
> > > 
> > > 	mkdir /cgroup/x
> > > 	echo 100M > /cgroup/x/memory.limit_in_bytes
> > > 	echo $$ > /cgroup/x/tasks
> > > 
> > > 	for i in `seq 2`
> > > 	do
> > > 		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
> > > 	done
> > > 
> > > Before patch, the dd tasks are quickly OOM killed.
> > > After patch, they run well with reasonably good performance and overheads:
> > > 
> > > 1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
> > > 1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s
> >   I wonder what happens if you run:
> >        mkdir /cgroup/x
> >        echo 100M > /cgroup/x/memory.limit_in_bytes
> >        echo $$ > /cgroup/x/tasks
> > 
> >        for (( i = 0; i < 2; i++ )); do
> >          mkdir /fs/d$i
> >          for (( j = 0; j < 5000; j++ )); do 
> >            dd if=/dev/zero of=/fs/d$i/f$j bs=1k count=50
> >          done &
> >        done
> 
> That's a very good case, thanks!
>  
> >   Because for small files the writearound logic won't help much...
> 
> Right, it also means the native background work cannot be more I/O
> efficient than the pageout works, except for the overheads of more
> work items..
  Yes, that's true.

> >   Also the number of work items queued might become interesting.
> 
> It turns out that the 1024 mempool reservations are not exhausted at
> all (the below patch has a trace_printk on alloc failure and it didn't
> trigger at all).
> 
> Here is the representative iostat lines on XFS (full "iostat -kx 1 20" log attached):
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle                                                                     
>            0.80    0.00    6.03    0.03    0.00   93.14                                                                     
>                                                                                                                             
> Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util                   
> sda               0.00   205.00    0.00  163.00     0.00 16900.00   207.36     4.09   21.63   1.88  30.70                   
> 
> The attached dirtied/written progress graph looks interesting.
> Although the iostat disk utilization is low, the "dirtied" progress
> line is pretty straight and there is no single congestion_wait event
> in the trace log. Which makes me wonder if there are some unknown
> blocking issues in the way.
  Interesting. I'd also expect we should block in the reclaim path. How fast
can the dd threads progress when there is no cgroup involved?
 
> > Another common case to test - run 'slapadd' command in each cgroup to
> > create big LDAP database. That does pretty much random IO on a big mmaped
> > DB file.
> 
> I've not used this. Will it need some configuration and data feed?
> fio looks more handy to me for emulating mmap random IO.
  Yes, fio can generate random mmap IO. It's just that this is a real life
workload. So it is not completely random, it happens on several files and
is also interleaved with other memory allocations from DB. I can send you
the config files and data feed if you are interested.

> > > +/*
> > > + * schedule writeback on a range of inode pages.
> > > + */
> > > +static struct wb_writeback_work *
> > > +bdi_flush_inode_range(struct backing_dev_info *bdi,
> > > +		      struct inode *inode,
> > > +		      pgoff_t offset,
> > > +		      pgoff_t len,
> > > +		      bool wait)
> > > +{
> > > +	struct wb_writeback_work *work;
> > > +
> > > +	if (!igrab(inode))
> > > +		return ERR_PTR(-ENOENT);
> >   One technical note here: If the inode is deleted while it is queued, this
> > reference will keep it living until flusher thread gets to it. Then when
> > flusher thread puts its reference, the inode will get deleted in flusher
> > thread context. I don't see an immediate problem in that but it might be
> > surprising sometimes. Another problem I see is that if you try to
> > unmount the filesystem while the work item is queued, you'll get EBUSY for
> > no apparent reason (for userspace).
> 
> Yeah, we need to make umount work.
  The positive thing is that if the inode is reaped while the work item is
queued, we know everything that needed to be done has been done. So we don't
really need to pin the inode.

> And I find the pageout works seem to have some problems with ext4.
> For example, this can be easily triggered with 10 dd tasks running
> inside the 100MB limited memcg:
  So the journal thread is getting stuck while committing a transaction, most
likely waiting for some dd thread to stop a transaction so that the commit can
proceed. The processes waiting in start_this_handle() are just a secondary
effect resulting from the first problem. It might be interesting to get
stack traces of all blocked processes when the journal thread is stuck.


> [18006.858109] INFO: task jbd2/sda1-8:51294 blocked for more than 120 seconds.
> [18006.866425] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [18006.876096] jbd2/sda1-8     D 0000000000000000  5464 51294      2 0x00000000
> [18006.884729]  ffff88040b097c70 0000000000000046 ffff880823032310 ffff88040b096000
> [18006.894356]  00000000001d2f00 00000000001d2f00 ffff8808230322a0 00000000001d2f00
> [18006.904000]  ffff88040b097fd8 00000000001d2f00 ffff88040b097fd8 00000000001d2f00
> [18006.913652] Call Trace:
> [18006.916901]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
> [18006.924134]  [<ffffffff81232aab>] ? jbd2_journal_commit_transaction+0x1d0/0x1281
> [18006.933324]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
> [18006.939879]  [<ffffffff810b0ddd>] ? lock_release_holdtime+0xa3/0xac
> [18006.947410]  [<ffffffff81232aab>] ? jbd2_journal_commit_transaction+0x1d0/0x1281
> [18006.956607]  [<ffffffff81a57904>] schedule+0x5a/0x5c
> [18006.962677]  [<ffffffff81232ab0>] jbd2_journal_commit_transaction+0x1d5/0x1281
> [18006.971683]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
> [18006.978933]  [<ffffffff810738ce>] ? try_to_del_timer_sync+0xba/0xc8
> [18006.986452]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
> [18006.992999]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
> [18006.999542]  [<ffffffff810738ce>] ? try_to_del_timer_sync+0xba/0xc8
> [18007.007062]  [<ffffffff81073a6f>] ? del_timer_sync+0xbb/0xce
> [18007.013898]  [<ffffffff810739b4>] ? process_timeout+0x10/0x10
> [18007.020835]  [<ffffffff81237bc1>] kjournald2+0xcf/0x242
> [18007.027187]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
> [18007.033733]  [<ffffffff81237af2>] ? commit_timeout+0x10/0x10
> [18007.040574]  [<ffffffff81086384>] kthread+0x95/0x9d
> [18007.046542]  [<ffffffff81a61134>] kernel_thread_helper+0x4/0x10
> [18007.053675]  [<ffffffff81a591b4>] ? retint_restore_args+0x13/0x13
> [18007.061003]  [<ffffffff810862ef>] ? __init_kthread_worker+0x5b/0x5b
> [18007.068521]  [<ffffffff81a61130>] ? gs_change+0x13/0x13
> [18007.074878] no locks held by jbd2/sda1-8/51294.
> 
> Sometimes I also catch dd/ext4lazyinit/flush all stalling in start_this_handle:
> 
> [17985.439567] dd              D 0000000000000007  3616 61440      1 0x00000004
> [17985.448088]  ffff88080d71b9b8 0000000000000046 ffff88081ec80070 ffff88080d71a000
> [17985.457545]  00000000001d2f00 00000000001d2f00 ffff88081ec80000 00000000001d2f00
> [17985.467168]  ffff88080d71bfd8 00000000001d2f00 ffff88080d71bfd8 00000000001d2f00
> [17985.476647] Call Trace:
> [17985.479843]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
> [17985.487025]  [<ffffffff81230b9d>] ? start_this_handle+0x357/0x4ed
> [17985.494313]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
> [17985.500815]  [<ffffffff810b0ddd>] ? lock_release_holdtime+0xa3/0xac
> [17985.508287]  [<ffffffff81230b9d>] ? start_this_handle+0x357/0x4ed
> [17985.515575]  [<ffffffff81a57904>] schedule+0x5a/0x5c
> [17985.521588]  [<ffffffff81230c39>] start_this_handle+0x3f3/0x4ed
> [17985.528669]  [<ffffffff81147820>] ? kmem_cache_free+0xfa/0x13a
> [17985.545142]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
> [17985.551650]  [<ffffffff81230f0e>] jbd2__journal_start+0xb0/0xf6
> [17985.558732]  [<ffffffff811f7ad7>] ? ext4_dirty_inode+0x1d/0x4c
> [17985.565716]  [<ffffffff81230f67>] jbd2_journal_start+0x13/0x15
> [17985.572703]  [<ffffffff8120e3e9>] ext4_journal_start_sb+0x13f/0x157
> [17985.580172]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
> [17985.586680]  [<ffffffff811f7ad7>] ext4_dirty_inode+0x1d/0x4c
> [17985.593472]  [<ffffffff81176827>] __mark_inode_dirty+0x2e/0x1cc
> [17985.600552]  [<ffffffff81168e84>] file_update_time+0xe4/0x106
> [17985.607441]  [<ffffffff811079f6>] __generic_file_aio_write+0x254/0x364
> [17985.615202]  [<ffffffff81a565da>] ? mutex_lock_nested+0x2e4/0x2f3
> [17985.622488]  [<ffffffff81107b50>] ? generic_file_aio_write+0x4a/0xc1
> [17985.630057]  [<ffffffff81107b6c>] generic_file_aio_write+0x66/0xc1
> [17985.637442]  [<ffffffff811ef72b>] ext4_file_write+0x1f9/0x251
> [17985.644330]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
> [17985.650835]  [<ffffffff8118809e>] ? fsnotify+0x222/0x27b
> [17985.657238]  [<ffffffff81153612>] do_sync_write+0xce/0x10b
> [17985.663844]  [<ffffffff8118809e>] ? fsnotify+0x222/0x27b
> [17985.670243]  [<ffffffff81187ef8>] ? fsnotify+0x7c/0x27b
> [17985.676561]  [<ffffffff81153dbe>] vfs_write+0xb8/0x157
> [17985.682767]  [<ffffffff81154075>] sys_write+0x4d/0x77
> [17985.688878]  [<ffffffff81a5fce9>] system_call_fastpath+0x16/0x1b
> 
> and jbd2 in
> 
> [17983.623657] jbd2/sda1-8     D 0000000000000000  5464 51294      2 0x00000000
> [17983.632173]  ffff88040b097c70 0000000000000046 ffff880823032310 ffff88040b096000
> [17983.641640]  00000000001d2f00 00000000001d2f00 ffff8808230322a0 00000000001d2f00
> [17983.651119]  ffff88040b097fd8 00000000001d2f00 ffff88040b097fd8 00000000001d2f00
> [17983.660603] Call Trace:
> [17983.663808]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
> [17983.670997]  [<ffffffff81232aab>] ? jbd2_journal_commit_transaction+0x1d0/0x1281
> [17983.680124]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
> [17983.686638]  [<ffffffff810b0ddd>] ? lock_release_holdtime+0xa3/0xac
> [17983.694108]  [<ffffffff81232aab>] ? jbd2_journal_commit_transaction+0x1d0/0x1281
> [17983.703243]  [<ffffffff81a57904>] schedule+0x5a/0x5c
> [17983.709262]  [<ffffffff81232ab0>] jbd2_journal_commit_transaction+0x1d5/0x1281
> [17983.718195]  [<ffffffff8103d4af>] ? native_sched_clock+0x29/0x70
> [17983.725392]  [<ffffffff810738ce>] ? try_to_del_timer_sync+0xba/0xc8
> [17983.732867]  [<ffffffff8109660d>] ? local_clock+0x41/0x5a
> [17983.739374]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
> [17983.745864]  [<ffffffff810738ce>] ? try_to_del_timer_sync+0xba/0xc8
> [17983.753343]  [<ffffffff81073a6f>] ? del_timer_sync+0xbb/0xce
> [17983.760137]  [<ffffffff810739b4>] ? process_timeout+0x10/0x10
> [17983.767041]  [<ffffffff81237bc1>] kjournald2+0xcf/0x242
> [17983.773361]  [<ffffffff8108683a>] ? wake_up_bit+0x2a/0x2a
> [17983.779863]  [<ffffffff81237af2>] ? commit_timeout+0x10/0x10
> [17983.786665]  [<ffffffff81086384>] kthread+0x95/0x9d
> [17983.792585]  [<ffffffff81a61134>] kernel_thread_helper+0x4/0x10
> [17983.799670]  [<ffffffff81a591b4>] ? retint_restore_args+0x13/0x13
> [17983.806948]  [<ffffffff810862ef>] ? __init_kthread_worker+0x5b/0x5b
> 
> Here is the updated patch used in the new tests. It moves
> congestion_wait() out of the page lock and make flush_inode_page() no
> longer wait for memory allocation (looks unnecessary).

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-14 13:18           ` Wu Fengguang
@ 2012-02-14 13:35             ` Wu Fengguang
  2012-02-14 15:51             ` Mel Gorman
  2012-02-16  0:00             ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-14 13:35 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Ying Han, hannes, KAMEZAWA Hiroyuki, Rik van Riel,
	Minchan Kim

[-- Attachment #1: Type: text/plain, Size: 432 bytes --]

> For the single dd inside memcg, ext4 is now working pretty well, with
> least CPU overheads:

Attached are the dd progress graphs for btrfs, ext4 and xfs in turn, with
btrfs showing the smoothest and fastest progress.

Also attached is the "10 memcg each running 1 dd" graph for btrfs. It's
unacceptably bumpy because no effort is made yet to round-robin between the
works from the different memcg LRU lists.

Thanks,
Fengguang


[-- Attachment #2: balance_dirty_pages-task-bw.png --]
[-- Type: image/png, Size: 36661 bytes --]

[-- Attachment #3: balance_dirty_pages-task-bw.png --]
[-- Type: image/png, Size: 37858 bytes --]

[-- Attachment #4: balance_dirty_pages-task-bw.png --]
[-- Type: image/png, Size: 38995 bytes --]

[-- Attachment #5: balance_dirty_pages-task-bw.png --]
[-- Type: image/png, Size: 45482 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-14 13:18           ` Wu Fengguang
  2012-02-14 13:35             ` Wu Fengguang
@ 2012-02-14 15:51             ` Mel Gorman
  2012-02-16  9:50               ` Wu Fengguang
  2012-02-16  0:00             ` KAMEZAWA Hiroyuki
  2 siblings, 1 reply; 33+ messages in thread
From: Mel Gorman @ 2012-02-14 15:51 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Ying Han, hannes, KAMEZAWA Hiroyuki, Rik van Riel,
	Minchan Kim

On Tue, Feb 14, 2012 at 09:18:12PM +0800, Wu Fengguang wrote:
> > For the OOM problem, a more reasonable stopgap might be to identify when
> > a process is scanning a memcg at high priority and encountered all
> > PageReclaim with no forward progress and to congestion_wait() if that
> > situation occurs. A preferable way would be to wait until the flusher
> > wakes up a waiter on PageReclaim pages to be written out because we want
> > to keep moving way from congestion_wait() if at all possible.
> 
> Good points! Below is the more serious page reclaim changes.
> 
> The dirty/writeback pages may often come close to each other in the
> LRU list, so the local test during a 32-page scan may still trigger
> reclaim waits unnecessarily.

Yes, this is particularly the case when writing back to USB. It is not
unusual that all dirty pages under writeback are backed by USB and at the
end of the LRU. Right now what happens is that reclaimers see higher CPU
usage as they scan over these pages uselessly. If the wrong choice is
made on how to throttle, we'll see yet more variants of the "system
responsiveness drops when writing to USB".

> Some global information on the percent
> of dirty/writeback pages in the LRU list may help. Anyway the added
> tests should still be much better than no protection.
> 

You can tell how many dirty pages and writeback pages are in the zone
already.

> A global wait queue and reclaim_wait() is introduced. The waiters will
> be wakeup when pages are rotated by end_page_writeback() or lru drain.
> 
> I have to say its effectiveness depends on the filesystem... ext4
> and btrfs do fluent IO completions, so reclaim_wait() works pretty
> well:
>               dd-14560 [017] ....  1360.894605: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
>               dd-14560 [017] ....  1360.904456: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=8000
>               dd-14560 [017] ....  1360.908293: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
>               dd-14560 [017] ....  1360.923960: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
>               dd-14560 [017] ....  1360.927810: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
>               dd-14560 [017] ....  1360.931656: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
>               dd-14560 [017] ....  1360.943503: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
>               dd-14560 [017] ....  1360.953289: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=7000
>               dd-14560 [017] ....  1360.957177: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
>               dd-14560 [017] ....  1360.972949: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
> 
> However XFS does IO completions in very large batches (there may be
> only several big IO completions in one second). So reclaim_wait()
> mostly end up waiting to the full HZ/10 timeout:
> 
>               dd-4177  [008] ....   866.367661: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [010] ....   866.567583: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [012] ....   866.767458: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [013] ....   866.867419: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [008] ....   867.167266: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [010] ....   867.367168: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [012] ....   867.818950: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [013] ....   867.918905: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [013] ....   867.971657: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
>               dd-4177  [013] ....   867.971812: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=0
>               dd-4177  [008] ....   868.355700: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>               dd-4177  [010] ....   868.700515: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> 

And where people will get hit by regressions in this area is writing to
vfat and, in rarer cases, ntfs on a USB stick.

> > Another possibility would be to relook at LRU_IMMEDIATE but right now it
> > requires a page flag and I haven't devised a way around that. Besides,
> > it would only address the problem of PageREclaim pages being encountered,
> > it would not handle the case where a memcg was filled with PageReclaim pages.
> 
> I also considered things like LRU_IMMEDIATE, however got no clear idea yet.
> Since the simple "wait on PG_reclaim" approach appears to work for this
> memcg dd case, it effectively disables me to think any further ;-)
> 

Test with interactive use while writing heavily to a USB stick.

> For the single dd inside memcg, ext4 is now working pretty well, with
> least CPU overheads:
> 
> (running from another test box, so not directly comparable with old tests)
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.03    0.00    0.85    5.35    0.00   93.77
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00  112.00     0.00 57348.00  1024.07    81.66 1045.21   8.93 100.00
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.00    0.00    0.69    4.07    0.00   95.24
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00   142.00    0.00  112.00     0.00 56832.00  1014.86   127.94  790.04   8.93 100.00
> 
> And xfs a bit less fluent:
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.00    0.00    3.79    2.54    0.00   93.68
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00  108.00     0.00 54644.00  1011.93    48.13 1044.83   8.44  91.20
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.00    0.00    3.38    3.88    0.00   92.74
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00  105.00     0.00 53156.00  1012.50   128.50  451.90   9.25  97.10
> 
> btrfs also looks good:
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.00    0.00    8.05    3.85    0.00   88.10
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00  108.00     0.00 53248.00   986.07    88.11  643.99   9.26 100.00
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.00    0.00    4.04    2.51    0.00   93.45
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00  112.00     0.00 57344.00  1024.00    91.58  998.41   8.93 100.00
> 
> ---
> 
> --- linux.orig/include/linux/backing-dev.h	2012-02-14 19:43:06.000000000 +0800
> +++ linux/include/linux/backing-dev.h	2012-02-14 19:49:26.000000000 +0800
> @@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
>  void set_bdi_congested(struct backing_dev_info *bdi, int sync);
>  long congestion_wait(int sync, long timeout);
>  long wait_iff_congested(struct zone *zone, int sync, long timeout);
> +long reclaim_wait(long timeout);
> +void reclaim_rotated(void);
>  
>  static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
>  {
> --- linux.orig/mm/backing-dev.c	2012-02-14 19:26:15.000000000 +0800
> +++ linux/mm/backing-dev.c	2012-02-14 20:09:45.000000000 +0800
> @@ -873,3 +873,38 @@ out:
>  	return ret;
>  }
>  EXPORT_SYMBOL(wait_iff_congested);
> +
> +static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
> +
> +/**
> + * reclaim_wait - wait for some pages being rotated to the LRU tail
> + * @timeout: timeout in jiffies
> + *
> + * Wait until @timeout, or when some (typically PG_reclaim under writeback)
> + * pages rotated to the LRU so that page reclaim can make progress.
> + */
> +long reclaim_wait(long timeout)
> +{
> +	long ret;
> +	unsigned long start = jiffies;
> +	DEFINE_WAIT(wait);
> +
> +	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
> +	ret = io_schedule_timeout(timeout);
> +	finish_wait(&reclaim_wqh, &wait);
> +
> +	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
> +				     jiffies_to_usecs(jiffies - start));
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(reclaim_wait);
> +
> +void reclaim_rotated()
> +{
> +	wait_queue_head_t *wqh = &reclaim_wqh;
> +
> +	if (waitqueue_active(wqh))
> +		wake_up(wqh);
> +}
> +
> --- linux.orig/mm/swap.c	2012-02-14 19:40:10.000000000 +0800
> +++ linux/mm/swap.c	2012-02-14 19:45:13.000000000 +0800
> @@ -253,6 +253,7 @@ static void pagevec_move_tail(struct pag
>  
>  	pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, &pgmoved);
>  	__count_vm_events(PGROTATED, pgmoved);
> +	reclaim_rotated();
>  }
>  
>  /*
> --- linux.orig/mm/vmscan.c	2012-02-14 17:53:27.000000000 +0800
> +++ linux/mm/vmscan.c	2012-02-14 19:44:11.000000000 +0800
> @@ -767,7 +767,8 @@ static unsigned long shrink_page_list(st
>  				      struct scan_control *sc,
>  				      int priority,
>  				      unsigned long *ret_nr_dirty,
> -				      unsigned long *ret_nr_writeback)
> +				      unsigned long *ret_nr_writeback,
> +				      unsigned long *ret_nr_pgreclaim)
>  {
>  	LIST_HEAD(ret_pages);
>  	LIST_HEAD(free_pages);
> @@ -776,6 +777,7 @@ static unsigned long shrink_page_list(st
>  	unsigned long nr_congested = 0;
>  	unsigned long nr_reclaimed = 0;
>  	unsigned long nr_writeback = 0;
> +	unsigned long nr_pgreclaim = 0;
>  
>  	cond_resched();
>  
> @@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
>  
>  		if (PageWriteback(page)) {
>  			nr_writeback++;
> +			if (PageReclaim(page))
> +				nr_pgreclaim++;
> +			else
> +				SetPageReclaim(page);
>  			/*

This check is unexpected. We already SetPageReclaim when queuing pages for
IO from reclaim context, and when dirty pages that cannot be queued for IO
are encountered during the LRU scan. How often is it that nr_pgreclaim !=
nr_writeback, and by how much do they differ?

>  			 * Synchronous reclaim cannot queue pages for
>  			 * writeback due to the possibility of stack overflow
> @@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
>  			nr_dirty++;
>  
>  			/*
> -			 * Only kswapd can writeback filesystem pages to
> -			 * avoid risk of stack overflow but do not writeback
> -			 * unless under significant pressure.
> +			 * run into the visited page again: we are scanning
> +			 * faster than the flusher can writeout dirty pages
>  			 */

which in itself is not an abnormal condition. We get into this situation
when writing to USB. Dirty throttling stops too much memory getting dirtied
but that does not mean we should throttle instead of reclaiming clean pages.

That's why I worry that if this is aimed at fixing a memcg problem, it
will have the impact of making interactive performance on normal systems
worse.

> -			if (page_is_file_cache(page) &&
> -					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
> +			if (page_is_file_cache(page) && PageReclaim(page)) {
> +				nr_pgreclaim++;
> +				goto keep_locked;
> +			}
> +			if (page_is_file_cache(page) && mapping &&
> +			    flush_inode_page(mapping, page, false) >= 0) {
>  				/*
>  				 * Immediately reclaim when written back.
>  				 * Similar in principal to deactivate_page()
> @@ -1028,6 +1037,7 @@ keep_lumpy:
>  	count_vm_events(PGACTIVATE, pgactivate);
>  	*ret_nr_dirty += nr_dirty;
>  	*ret_nr_writeback += nr_writeback;
> +	*ret_nr_pgreclaim += nr_pgreclaim;
>  	return nr_reclaimed;
>  }
>  
> @@ -1087,8 +1097,10 @@ int __isolate_lru_page(struct page *page
>  	 */
>  	if (mode & (ISOLATE_CLEAN|ISOLATE_ASYNC_MIGRATE)) {
>  		/* All the caller can do on PageWriteback is block */
> -		if (PageWriteback(page))
> +		if (PageWriteback(page)) {
> +			SetPageReclaim(page);
>  			return ret;
> +		}
>  

This hunk means that if async compaction (common for THP) encounters a page
under writeback, it will still skip it but mark it for immediate reclaim
after IO completes. This will have the impact that compaction causes an
abnormally high number of pages to be reclaimed.

>  		if (PageDirty(page)) {
>  			struct address_space *mapping;
> @@ -1509,6 +1521,7 @@ shrink_inactive_list(unsigned long nr_to
>  	unsigned long nr_file;
>  	unsigned long nr_dirty = 0;
>  	unsigned long nr_writeback = 0;
> +	unsigned long nr_pgreclaim = 0;
>  	isolate_mode_t reclaim_mode = ISOLATE_INACTIVE;
>  	struct zone *zone = mz->zone;
>  
> @@ -1559,13 +1572,13 @@ shrink_inactive_list(unsigned long nr_to
>  	spin_unlock_irq(&zone->lru_lock);
>  
>  	nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
> -						&nr_dirty, &nr_writeback);
> +				&nr_dirty, &nr_writeback, &nr_pgreclaim);
>  
>  	/* Check if we should syncronously wait for writeback */
>  	if (should_reclaim_stall(nr_taken, nr_reclaimed, priority, sc)) {
>  		set_reclaim_mode(priority, sc, true);
>  		nr_reclaimed += shrink_page_list(&page_list, mz, sc,
> -					priority, &nr_dirty, &nr_writeback);
> +			priority, &nr_dirty, &nr_writeback, &nr_pgreclaim);
>  	}
>  
>  	spin_lock_irq(&zone->lru_lock);
> @@ -1608,6 +1621,8 @@ shrink_inactive_list(unsigned long nr_to
>  	 */
>  	if (nr_writeback && nr_writeback >= (nr_taken >> (DEF_PRIORITY-priority)))
>  		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
> +	if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)))
> +		reclaim_wait(HZ/10);
>  

We risk going to sleep too easily when USB-backed pages are at the end of
the LRU list. Note that the nr_writeback check only goes to sleep if it
detects that the underlying storage is also congested. In contrast, it
will take very few PageReclaim pages at the end of the LRU to cause the
process to sleep when it instead should find clean pages to discard.

If the intention is to avoid memcg going OOM prematurely, the
nr_pgreclaim value needs to be treated at a higher level that records
how many PageReclaim pages were encountered. If no progress was made
because all the pages were PageReclaim, then throttle and return 1 to
the page allocator where it will retry the allocation without going OOM
after some pages have been cleaned and reclaimed.
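
For illustration, a minimal sketch of what that higher-level handling might
look like (the sc->nr_pgreclaim field and its placement at the end of
do_try_to_free_pages() are assumptions for the sketch, not code from this
thread):

	/* at the end of do_try_to_free_pages(), after the priority loop: */
	if (!sc->nr_reclaimed && sc->nr_pgreclaim &&
	    sc->nr_pgreclaim >= sc->nr_scanned) {
		/*
		 * Everything scanned was already queued for writeback:
		 * wait for the flusher instead of declaring OOM, and
		 * report token progress so the allocator retries.
		 */
		reclaim_wait(HZ/10);
		return 1;
	}
	return sc->nr_reclaimed;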

>  	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
>  		zone_idx(zone),
> @@ -2382,8 +2397,6 @@ static unsigned long do_try_to_free_page
>  		 */
>  		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
>  		if (total_scanned > writeback_threshold) {
> -			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
> -						WB_REASON_TRY_TO_FREE_PAGES);
>  			sc->may_writepage = 1;
>  		}
>  

-- 
Mel Gorman
SUSE Labs


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-14 13:18           ` Wu Fengguang
  2012-02-14 13:35             ` Wu Fengguang
  2012-02-14 15:51             ` Mel Gorman
@ 2012-02-16  0:00             ` KAMEZAWA Hiroyuki
  2012-02-16  3:04               ` Wu Fengguang
  2 siblings, 1 reply; 33+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-02-16  0:00 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Mel Gorman, Greg Thelen, Jan Kara, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Ying Han, hannes, Rik van Riel,
	Minchan Kim

On Tue, 14 Feb 2012 21:18:12 +0800
Wu Fengguang <fengguang.wu@intel.com> wrote:

> 
> --- linux.orig/include/linux/backing-dev.h	2012-02-14 19:43:06.000000000 +0800
> +++ linux/include/linux/backing-dev.h	2012-02-14 19:49:26.000000000 +0800
> @@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
>  void set_bdi_congested(struct backing_dev_info *bdi, int sync);
>  long congestion_wait(int sync, long timeout);
>  long wait_iff_congested(struct zone *zone, int sync, long timeout);
> +long reclaim_wait(long timeout);
> +void reclaim_rotated(void);
>  
>  static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
>  {
> --- linux.orig/mm/backing-dev.c	2012-02-14 19:26:15.000000000 +0800
> +++ linux/mm/backing-dev.c	2012-02-14 20:09:45.000000000 +0800
> @@ -873,3 +873,38 @@ out:
>  	return ret;
>  }
>  EXPORT_SYMBOL(wait_iff_congested);
> +
> +static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
> +
> +/**
> + * reclaim_wait - wait for some pages being rotated to the LRU tail
> + * @timeout: timeout in jiffies
> + *
> + * Wait until @timeout, or when some (typically PG_reclaim under writeback)
> + * pages rotated to the LRU so that page reclaim can make progress.
> + */
> +long reclaim_wait(long timeout)
> +{
> +	long ret;
> +	unsigned long start = jiffies;
> +	DEFINE_WAIT(wait);
> +
> +	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
> +	ret = io_schedule_timeout(timeout);
> +	finish_wait(&reclaim_wqh, &wait);
> +
> +	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
> +				     jiffies_to_usecs(jiffies - start));
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(reclaim_wait);
> +
> +void reclaim_rotated()
> +{
> +	wait_queue_head_t *wqh = &reclaim_wqh;
> +
> +	if (waitqueue_active(wqh))
> +		wake_up(wqh);
> +}
> +

Thank you.

I like this approach. A nitpick is that this may wake up all waiters
in the system whenever pages of any one memcg are rotated.

How about wait_event() + a condition checked via a bitmap (using per-memcg unique IDs)?


Thanks,
-Kame


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16  0:00             ` KAMEZAWA Hiroyuki
@ 2012-02-16  3:04               ` Wu Fengguang
  2012-02-16  3:52                 ` KAMEZAWA Hiroyuki
  0 siblings, 1 reply; 33+ messages in thread
From: Wu Fengguang @ 2012-02-16  3:04 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: Mel Gorman, Greg Thelen, Jan Kara, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Ying Han, hannes, Rik van Riel,
	Minchan Kim

On Thu, Feb 16, 2012 at 09:00:37AM +0900, KAMEZAWA Hiroyuki wrote:
> On Tue, 14 Feb 2012 21:18:12 +0800
> Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > 
> > --- linux.orig/include/linux/backing-dev.h	2012-02-14 19:43:06.000000000 +0800
> > +++ linux/include/linux/backing-dev.h	2012-02-14 19:49:26.000000000 +0800
> > @@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
> >  void set_bdi_congested(struct backing_dev_info *bdi, int sync);
> >  long congestion_wait(int sync, long timeout);
> >  long wait_iff_congested(struct zone *zone, int sync, long timeout);
> > +long reclaim_wait(long timeout);
> > +void reclaim_rotated(void);
> >  
> >  static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
> >  {
> > --- linux.orig/mm/backing-dev.c	2012-02-14 19:26:15.000000000 +0800
> > +++ linux/mm/backing-dev.c	2012-02-14 20:09:45.000000000 +0800
> > @@ -873,3 +873,38 @@ out:
> >  	return ret;
> >  }
> >  EXPORT_SYMBOL(wait_iff_congested);
> > +
> > +static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
> > +
> > +/**
> > + * reclaim_wait - wait for some pages being rotated to the LRU tail
> > + * @timeout: timeout in jiffies
> > + *
> > + * Wait until @timeout, or when some (typically PG_reclaim under writeback)
> > + * pages rotated to the LRU so that page reclaim can make progress.
> > + */
> > +long reclaim_wait(long timeout)
> > +{
> > +	long ret;
> > +	unsigned long start = jiffies;
> > +	DEFINE_WAIT(wait);
> > +
> > +	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
> > +	ret = io_schedule_timeout(timeout);
> > +	finish_wait(&reclaim_wqh, &wait);
> > +
> > +	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
> > +				     jiffies_to_usecs(jiffies - start));
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL(reclaim_wait);
> > +
> > +void reclaim_rotated()
> > +{
> > +	wait_queue_head_t *wqh = &reclaim_wqh;
> > +
> > +	if (waitqueue_active(wqh))
> > +		wake_up(wqh);
> > +}
> > +
> 
> Thank you.
> 
> I like this approach. A nitpick is that this may wake up all waiters 
> in the system when a memcg is rotated.

Thank you. It sure helps to start it simple :-)

> How about wait_event() + condition by bitmap (using per memcg unique IDs.) ?

I'm not sure how to manage the bitmap. The idea in my mind is to

- maintain a memcg->pages_rotated counter

- in reclaim_wait(), grab the current ->pages_rotated value before
  going to wait, compare it to the new value on every wakeup, and
  return to the caller when seeing a different ->pages_rotated value
  (this cannot stop waking up multiple tasks in the same memcg...);
  see the sketch below

Does that sound reasonable?
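
A rough sketch of that counter scheme (the ->pages_rotated field, assumed
here to be an atomic_long_t, and the function name are assumptions;
reclaim_wqh is the wait queue from the patch above):

	/* wait until this memcg has rotated some pages, or until timeout */
	long memcg_reclaim_wait(struct mem_cgroup *memcg, long timeout)
	{
		unsigned long seen = atomic_long_read(&memcg->pages_rotated);
		DEFINE_WAIT(wait);

		while (timeout > 0) {
			prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
			/* woken by reclaim_rotated(): did *our* memcg rotate? */
			if (atomic_long_read(&memcg->pages_rotated) != seen)
				break;
			timeout = io_schedule_timeout(timeout);
		}
		finish_wait(&reclaim_wqh, &wait);
		return timeout;
	}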

Thanks,
Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16  3:04               ` Wu Fengguang
@ 2012-02-16  3:52                 ` KAMEZAWA Hiroyuki
  2012-02-16  4:05                   ` Wu Fengguang
  0 siblings, 1 reply; 33+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-02-16  3:52 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Mel Gorman, Greg Thelen, Jan Kara, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Ying Han, hannes, Rik van Riel,
	Minchan Kim

On Thu, 16 Feb 2012 11:04:15 +0800
Wu Fengguang <fengguang.wu@intel.com> wrote:

> On Thu, Feb 16, 2012 at 09:00:37AM +0900, KAMEZAWA Hiroyuki wrote:
> > On Tue, 14 Feb 2012 21:18:12 +0800
> > Wu Fengguang <fengguang.wu@intel.com> wrote:
> > 
> > > 
> > > --- linux.orig/include/linux/backing-dev.h	2012-02-14 19:43:06.000000000 +0800
> > > +++ linux/include/linux/backing-dev.h	2012-02-14 19:49:26.000000000 +0800
> > > @@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
> > >  void set_bdi_congested(struct backing_dev_info *bdi, int sync);
> > >  long congestion_wait(int sync, long timeout);
> > >  long wait_iff_congested(struct zone *zone, int sync, long timeout);
> > > +long reclaim_wait(long timeout);
> > > +void reclaim_rotated(void);
> > >  
> > >  static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
> > >  {
> > > --- linux.orig/mm/backing-dev.c	2012-02-14 19:26:15.000000000 +0800
> > > +++ linux/mm/backing-dev.c	2012-02-14 20:09:45.000000000 +0800
> > > @@ -873,3 +873,38 @@ out:
> > >  	return ret;
> > >  }
> > >  EXPORT_SYMBOL(wait_iff_congested);
> > > +
> > > +static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
> > > +
> > > +/**
> > > + * reclaim_wait - wait for some pages being rotated to the LRU tail
> > > + * @timeout: timeout in jiffies
> > > + *
> > > + * Wait until @timeout, or when some (typically PG_reclaim under writeback)
> > > + * pages rotated to the LRU so that page reclaim can make progress.
> > > + */
> > > +long reclaim_wait(long timeout)
> > > +{
> > > +	long ret;
> > > +	unsigned long start = jiffies;
> > > +	DEFINE_WAIT(wait);
> > > +
> > > +	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
> > > +	ret = io_schedule_timeout(timeout);
> > > +	finish_wait(&reclaim_wqh, &wait);
> > > +
> > > +	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
> > > +				     jiffies_to_usecs(jiffies - start));
> > > +
> > > +	return ret;
> > > +}
> > > +EXPORT_SYMBOL(reclaim_wait);
> > > +
> > > +void reclaim_rotated()
> > > +{
> > > +	wait_queue_head_t *wqh = &reclaim_wqh;
> > > +
> > > +	if (waitqueue_active(wqh))
> > > +		wake_up(wqh);
> > > +}
> > > +
> > 
> > Thank you.
> > 
> > I like this approach. A nitpick is that this may wake up all waiters 
> > in the system when a memcg is rotated.
> 
> Thank you. It sure helps to start it simple :-)
> 
> > How about wait_event() + condition by bitmap (using per memcg unique IDs.) ?
> 
> I'm not sure how to manage the bitmap. The idea in my mind is to
> 
> - maintain a memcg->pages_rotated counter
> 
> - in reclaim_wait(), grab the current ->pages_rotated value before
>   going to wait, compare it to the new value on every wakeup, and
>   return to the user when seeing a different ->pages_rotated value.
>   (this cannot stop waking up multiple tasks in the same memcg...) 
> 
> Does that sound reasonable?
> 

Maybe. But there may be a problem in looking up the memcg from the page at
every rotation. I think it's OK to start with a way that ignores the
per-memcg status. Sorry for the noise.

Thanks,
-Kame


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-14 13:29                 ` Jan Kara
@ 2012-02-16  4:00                   ` Wu Fengguang
  2012-02-16 12:44                     ` Jan Kara
  2012-02-17 16:41                     ` Wu Fengguang
  0 siblings, 2 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-16  4:00 UTC (permalink / raw)
  To: Jan Kara
  Cc: Rik van Riel, Greg Thelen, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

On Tue, Feb 14, 2012 at 02:29:50PM +0100, Jan Kara wrote:

> > >   I wonder what happens if you run:
> > >        mkdir /cgroup/x
> > >        echo 100M > /cgroup/x/memory.limit_in_bytes
> > >        echo $$ > /cgroup/x/tasks
> > > 
> > >        for (( i = 0; i < 2; i++ )); do
> > >          mkdir /fs/d$i
> > >          for (( j = 0; j < 5000; j++ )); do 
> > >            dd if=/dev/zero of=/fs/d$i/f$j bs=1k count=50
> > >          done &
> > >        done
> > 
> > That's a very good case, thanks!
> >  
> > >   Because for small files the writearound logic won't help much...
> > 
> > Right, it also means the native background work cannot be more I/O
> > efficient than the pageout works, except for the overheads of more
> > work items..
>   Yes, that's true.
> 
> > >   Also the number of work items queued might become interesting.
> > 
> > It turns out that the 1024 mempool reservations are not exhausted at
> > all (the below patch as a trace_printk on alloc failure and it didn't
> > trigger at all).
> > 
> > Here is the representative iostat lines on XFS (full "iostat -kx 1 20" log attached):
> > 
> > avg-cpu:  %user   %nice %system %iowait  %steal   %idle                                                                     
> >            0.80    0.00    6.03    0.03    0.00   93.14                                                                     
> >                                                                                                                             
> > Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util                   
> > sda               0.00   205.00    0.00  163.00     0.00 16900.00   207.36     4.09   21.63   1.88  30.70                   
> > 
> > The attached dirtied/written progress graph looks interesting.
> > Although the iostat disk utilization is low, the "dirtied" progress
> > line is pretty straight and there is no single congestion_wait event
> > in the trace log. Which makes me wonder if there are some unknown
> > blocking issues in the way.
>   Interesting. I'd also expect we should block in reclaim path. How fast
> can dd threads progress when there is no cgroup involved?

I tried running the dd tasks in global context with

        echo $((100<<20)) > /proc/sys/vm/dirty_bytes

and got mostly the same results on XFS:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.85    0.00    8.88    0.00    0.00   90.26

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00   50.00     0.00 23036.00   921.44     9.59  738.02   7.38  36.90

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.95    0.00    8.95    0.00    0.00   90.11

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00   854.00    0.00   99.00     0.00 19552.00   394.99    34.14   87.98   3.82  37.80

Interestingly, ext4 shows comparable throughput, but reports near 100%
disk utilization:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.76    0.00    9.02    0.00    0.00   90.23

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00  317.00     0.00 20956.00   132.21    28.57   82.71   3.16 100.10

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.82    0.00    8.95    0.00    0.00   90.23

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00  402.00     0.00 24388.00   121.33    21.09   58.55   2.42  97.40

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.82    0.00    8.99    0.00    0.00   90.19

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00  409.00     0.00 21996.00   107.56    15.25   36.74   2.30  94.10

And btrfs shows

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.76    0.00   23.59    0.00    0.00   75.65

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00   801.00    0.00  141.00     0.00 48984.00   694.81    41.08  291.36   6.11  86.20

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.72    0.00   12.65    0.00    0.00   86.62

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00   792.00    0.00   69.00     0.00 15288.00   443.13    22.74   69.35   4.09  28.20

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.83    0.00   23.11    0.00    0.00   76.06

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00     0.00    0.00   73.00     0.00 33280.00   911.78    22.09  548.58   8.10  59.10

> > > Another common case to test - run 'slapadd' command in each cgroup to
> > > create big LDAP database. That does pretty much random IO on a big mmaped
> > > DB file.
> > 
> > I've not used this. Will it need some configuration and data feed?
> > fio looks more handy to me for emulating mmap random IO.
>   Yes, fio can generate random mmap IO. It's just that this is a real life
> workload. So it is not completely random, it happens on several files and
> is also interleaved with other memory allocations from DB. I can send you
> the config files and data feed if you are interested.

I'm very interested, thank you!

> > > > +/*
> > > > + * schedule writeback on a range of inode pages.
> > > > + */
> > > > +static struct wb_writeback_work *
> > > > +bdi_flush_inode_range(struct backing_dev_info *bdi,
> > > > +		      struct inode *inode,
> > > > +		      pgoff_t offset,
> > > > +		      pgoff_t len,
> > > > +		      bool wait)
> > > > +{
> > > > +	struct wb_writeback_work *work;
> > > > +
> > > > +	if (!igrab(inode))
> > > > +		return ERR_PTR(-ENOENT);
> > >   One technical note here: If the inode is deleted while it is queued, this
> > > reference will keep it living until flusher thread gets to it. Then when
> > > flusher thread puts its reference, the inode will get deleted in flusher
> > > thread context. I don't see an immediate problem in that but it might be
> > > surprising sometimes. Another problem I see is that if you try to
> > > unmount the filesystem while the work item is queued, you'll get EBUSY for
> > > no apparent reason (for userspace).
> > 
> > Yeah, we need to make umount work.
>   The positive thing is that if the inode is reaped while the work item is
> queue, we know all that needed to be done is done. So we don't really need
> to pin the inode.

But I do need to make sure the *inode pointer does not point to some
invalid memory at work exec time. Is this possible without raising
->i_count?

> > And I find the pageout works seem to have some problems with ext4.
> > For example, this can be easily triggered with 10 dd tasks running
> > inside the 100MB limited memcg:
>   So journal thread is getting stuck while committing transaction. Most
> likely waiting for some dd thread to stop a transaction so that commit can
> proceed. The processes waiting in start_this_handle() are just secondary
> effect resulting from the first problem. It might be interesting to get
> stack traces of all bloked processes when the journal thread is stuck.

For completeness of discussion, citing your conclusion on my private
data feed:

: We enter memcg reclaim from grab_cache_page_write_begin() and are
: waiting in congestion_wait(). Because grab_cache_page_write_begin() is
: called with transaction started, this blocks transaction from
: committing and subsequently blocks all other activity on the
: filesystem. The fact is this isn't new with your patches, just your
: changes or the fact that we are running in a memory constrained cgroup
: make this more visible.

Thanks,
Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16  3:52                 ` KAMEZAWA Hiroyuki
@ 2012-02-16  4:05                   ` Wu Fengguang
  0 siblings, 0 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-16  4:05 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: Mel Gorman, Greg Thelen, Jan Kara, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Ying Han, hannes, Rik van Riel,
	Minchan Kim

On Thu, Feb 16, 2012 at 12:52:21PM +0900, KAMEZAWA Hiroyuki wrote:
> On Thu, 16 Feb 2012 11:04:15 +0800
> Wu Fengguang <fengguang.wu@intel.com> wrote:
> 
> > On Thu, Feb 16, 2012 at 09:00:37AM +0900, KAMEZAWA Hiroyuki wrote:
> > > On Tue, 14 Feb 2012 21:18:12 +0800
> > > Wu Fengguang <fengguang.wu@intel.com> wrote:
> > > 
> > > > 
> > > > --- linux.orig/include/linux/backing-dev.h	2012-02-14 19:43:06.000000000 +0800
> > > > +++ linux/include/linux/backing-dev.h	2012-02-14 19:49:26.000000000 +0800
> > > > @@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
> > > >  void set_bdi_congested(struct backing_dev_info *bdi, int sync);
> > > >  long congestion_wait(int sync, long timeout);
> > > >  long wait_iff_congested(struct zone *zone, int sync, long timeout);
> > > > +long reclaim_wait(long timeout);
> > > > +void reclaim_rotated(void);
> > > >  
> > > >  static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
> > > >  {
> > > > --- linux.orig/mm/backing-dev.c	2012-02-14 19:26:15.000000000 +0800
> > > > +++ linux/mm/backing-dev.c	2012-02-14 20:09:45.000000000 +0800
> > > > @@ -873,3 +873,38 @@ out:
> > > >  	return ret;
> > > >  }
> > > >  EXPORT_SYMBOL(wait_iff_congested);
> > > > +
> > > > +static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
> > > > +
> > > > +/**
> > > > + * reclaim_wait - wait for some pages being rotated to the LRU tail
> > > > + * @timeout: timeout in jiffies
> > > > + *
> > > > + * Wait until @timeout, or when some (typically PG_reclaim under writeback)
> > > > + * pages rotated to the LRU so that page reclaim can make progress.
> > > > + */
> > > > +long reclaim_wait(long timeout)
> > > > +{
> > > > +	long ret;
> > > > +	unsigned long start = jiffies;
> > > > +	DEFINE_WAIT(wait);
> > > > +
> > > > +	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
> > > > +	ret = io_schedule_timeout(timeout);
> > > > +	finish_wait(&reclaim_wqh, &wait);
> > > > +
> > > > +	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
> > > > +				     jiffies_to_usecs(jiffies - start));
> > > > +
> > > > +	return ret;
> > > > +}
> > > > +EXPORT_SYMBOL(reclaim_wait);
> > > > +
> > > > +void reclaim_rotated()
> > > > +{
> > > > +	wait_queue_head_t *wqh = &reclaim_wqh;
> > > > +
> > > > +	if (waitqueue_active(wqh))
> > > > +		wake_up(wqh);
> > > > +}
> > > > +
> > > 
> > > Thank you.
> > > 
> > > I like this approach. A nitpick is that this may wake up all waiters 
> > > in the system when a memcg is rotated.
> > 
> > Thank you. It sure helps to start it simple :-)
> > 
> > > How about wait_event() + condition by bitmap (using per memcg unique IDs.) ?
> > 
> > I'm not sure how to manage the bitmap. The idea in my mind is to
> > 
> > - maintain a memcg->pages_rotated counter
> > 
> > - in reclaim_wait(), grab the current ->pages_rotated value before
> >   going to wait, compare it to the new value on every wakeup, and
> >   return to the user when seeing a different ->pages_rotated value.
> >   (this cannot stop waking up multiple tasks in the same memcg...) 
> > 
> > Does that sound reasonable?
> > 
> 
> Maybe. But there may be problem in looking up memcg from page at every
> rotation. I think it's ok to start with a way ignoring per-memcg status.
> Sorry for noise.

If such a need arises, we could take a sampled approach: for each rotated
pagevec, only dereference one of its pages and wake up the corresponding
memcg task(s). That should work well enough in practice even for random
write patterns.
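
Something along these lines, as a sketch only (mem_cgroup_from_page() and
memcg_reclaim_rotated() are assumed helper names, not existing functions):

	static void pagevec_move_tail(struct pagevec *pvec)
	{
		int pgmoved = 0;
		struct mem_cgroup *memcg = NULL;

		/* sample one page while the pagevec still holds references */
		if (pagevec_count(pvec))
			memcg = mem_cgroup_from_page(pvec->pages[0]);

		pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, &pgmoved);
		__count_vm_events(PGROTATED, pgmoved);

		/* wake only the waiters of the sampled memcg */
		if (memcg)
			memcg_reclaim_rotated(memcg);
	}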

Thanks,
Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-14 15:51             ` Mel Gorman
@ 2012-02-16  9:50               ` Wu Fengguang
  2012-02-16 17:31                 ` Mel Gorman
  0 siblings, 1 reply; 33+ messages in thread
From: Wu Fengguang @ 2012-02-16  9:50 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Ying Han, hannes, KAMEZAWA Hiroyuki, Rik van Riel,
	Minchan Kim

On Tue, Feb 14, 2012 at 03:51:24PM +0000, Mel Gorman wrote:
> On Tue, Feb 14, 2012 at 09:18:12PM +0800, Wu Fengguang wrote:
> > > For the OOM problem, a more reasonable stopgap might be to identify when
> > > a process is scanning a memcg at high priority and encountered all
> > > PageReclaim with no forward progress and to congestion_wait() if that
> > > situation occurs. A preferable way would be to wait until the flusher
> > > wakes up a waiter on PageReclaim pages to be written out because we want
> > > to keep moving way from congestion_wait() if at all possible.
> > 
> > Good points! Below is the more serious page reclaim changes.
> > 
> > The dirty/writeback pages may often come close to each other in the
> > LRU list, so the local test during a 32-page scan may still trigger
> > reclaim waits unnecessarily.
> 
> Yes, this is particularly the case when writing back to USB. It is not
> unusual that all dirty pages under writeback are backed by USB and at the
> end of the LRU. Right now what happens is that reclaimers see higher CPU
> usage as they scan over these pages uselessly. If the wrong choice is
> made on how to throttle, we'll see yet more variants of the "system
> responsiveness drops when writing to USB".

Yes, USB is an important case to support.  I'd imagine the heavy USB
writes typically happen on desktops and run *outside* of any memcg.
So they'll typically take <= 20% of the memory in the zone. As long as we
start the PG_reclaim throttling only when above the 20% dirty
threshold (i.e. gated on zone_dirty_ok()), the USB case should be safe.

> > Some global information on the percent
> > of dirty/writeback pages in the LRU list may help. Anyway the added
> > tests should still be much better than no protection.
> > 
> 
> You can tell how many dirty pages and writeback pages are in the zone
> already.

Right. I changed the test to

+       if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
+           (!global_reclaim(sc) || !zone_dirty_ok(zone)))
+               reclaim_wait(HZ/10);

And I'd prefer to use a higher threshold than the default 20% for the
above zone_dirty_ok() test, so that when Johannes' zone dirty
balancing does the job fine, PG_reclaim based page reclaim throttling
won't happen at all.
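
As an illustration only, a raised-threshold variant of that test might look
like the sketch below (it assumes zone_dirty_limit() were made available
outside page-writeback.c; the 1.5x factor is just an example value):

	/* like zone_dirty_ok(), but tolerate 50% more dirty/writeback pages */
	static bool zone_dirty_throttle_ok(struct zone *zone)
	{
		unsigned long limit = zone_dirty_limit(zone) * 3 / 2;

		return zone_page_state(zone, NR_FILE_DIRTY) +
		       zone_page_state(zone, NR_UNSTABLE_NFS) +
		       zone_page_state(zone, NR_WRITEBACK) <= limit;
	}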

> > A global wait queue and reclaim_wait() is introduced. The waiters will
> > be wakeup when pages are rotated by end_page_writeback() or lru drain.
> > 
> > I have to say its effectiveness depends on the filesystem... ext4
> > and btrfs do fluent IO completions, so reclaim_wait() works pretty
> > well:
> >               dd-14560 [017] ....  1360.894605: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
> >               dd-14560 [017] ....  1360.904456: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=8000
> >               dd-14560 [017] ....  1360.908293: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> >               dd-14560 [017] ....  1360.923960: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
> >               dd-14560 [017] ....  1360.927810: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> >               dd-14560 [017] ....  1360.931656: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> >               dd-14560 [017] ....  1360.943503: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
> >               dd-14560 [017] ....  1360.953289: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=7000
> >               dd-14560 [017] ....  1360.957177: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> >               dd-14560 [017] ....  1360.972949: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
> > 
> > However XFS does IO completions in very large batches (there may be
> > only several big IO completions in one second). So reclaim_wait()
> > mostly end up waiting to the full HZ/10 timeout:
> > 
> >               dd-4177  [008] ....   866.367661: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [010] ....   866.567583: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [012] ....   866.767458: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [013] ....   866.867419: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [008] ....   867.167266: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [010] ....   867.367168: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [012] ....   867.818950: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [013] ....   867.918905: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [013] ....   867.971657: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
> >               dd-4177  [013] ....   867.971812: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=0
> >               dd-4177  [008] ....   868.355700: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >               dd-4177  [010] ....   868.700515: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > 
> 
> And where people will get hit by regressions in this area is writing to
> vfat and in more rare cases ntfs on USB stick.

vfat IO completions seem to lie somewhere between ext4 and xfs:

           <...>-46385 [010] .... 143570.714470: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
           <...>-46385 [008] .... 143570.752391: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=12000
           <...>-46385 [008] .... 143570.937327: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
           <...>-46385 [010] .... 143571.160252: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
           <...>-46385 [011] .... 143571.286197: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
           <...>-46385 [008] .... 143571.329644: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
           <...>-46385 [008] .... 143571.475433: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=18000
           <...>-46385 [008] .... 143571.653461: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
           <...>-46385 [008] .... 143571.839949: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=56000
           <...>-46385 [010] .... 143572.060816: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
           <...>-46385 [011] .... 143572.185754: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
           <...>-46385 [008] .... 143572.212522: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=1000
           <...>-46385 [008] .... 143572.217825: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
           <...>-46385 [008] .... 143572.312395: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=91000
           <...>-46385 [008] .... 143572.315122: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=1000
           <...>-46385 [009] .... 143572.433630: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
           <...>-46385 [010] .... 143572.534569: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
 
and has lower throughput

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.03    0.00    3.88    2.22    0.00   93.86

        Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
        sda               0.00    34.00   20.00   82.00    10.00 34137.50   669.56     8.09   79.34   5.97  60.90

I'm yet to get a USB stick for the vfat-on-USB test.

> > > Another possibility would be to relook at LRU_IMMEDIATE but right now it
> > > requires a page flag and I haven't devised a way around that. Besides,
> > > it would only address the problem of PageREclaim pages being encountered,
> > > it would not handle the case where a memcg was filled with PageReclaim pages.
> > 
> > I also considered things like LRU_IMMEDIATE, however got no clear idea yet.
> > Since the simple "wait on PG_reclaim" approach appears to work for this
> > memcg dd case, it effectively disables me to think any further ;-)
> > 
> 
> Test with interactive use while writing heavily to a USB stick.

Sure.

> > @@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
> >  
> >  		if (PageWriteback(page)) {
> >  			nr_writeback++;
> > +			if (PageReclaim(page))
> > +				nr_pgreclaim++;
> > +			else
> > +				SetPageReclaim(page);
> >  			/*
> 
> This check is unexpected. We already SetPageReclaim when queuing pages for
> IO from reclaim context and if dirty pages are encountered during the LRU
> scan that cannot be queued for IO. How often is it that nr_pgreclaim !=
> nr_writeback and by how much do they differ?

Quite often, I suspect. The pageout writeback works do 1-8MB writearound,
which may start I/O a bit before the covered pages are encountered by
page reclaim. ext4 forces a 128MB write chunk size, which further
increases the opportunities.

> >  			 * Synchronous reclaim cannot queue pages for
> >  			 * writeback due to the possibility of stack overflow
> > @@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
> >  			nr_dirty++;
> >  
> >  			/*
> > -			 * Only kswapd can writeback filesystem pages to
> > -			 * avoid risk of stack overflow but do not writeback
> > -			 * unless under significant pressure.
> > +			 * run into the visited page again: we are scanning
> > +			 * faster than the flusher can writeout dirty pages
> >  			 */
> 
> which in itself is not an abnormal condition. We get into this situation
> when writing to USB. Dirty throttling stops too much memory getting dirtied
> but that does not mean we should throttle instead of reclaiming clean pages.
> 
> That's why I worry that if this is aimed at fixing a memcg problem, it
> will have the impact of making interactive performance on normal systems
> worse.

You are right. This patch only addresses the pageout I/O efficiency
and dirty throttling problems for a fully dirtied LRU. Next step, I'll
think about the interactive performance problem for a less dirtied LRU.

> > @@ -1087,8 +1097,10 @@ int __isolate_lru_page(struct page *page
> >  	 */
> >  	if (mode & (ISOLATE_CLEAN|ISOLATE_ASYNC_MIGRATE)) {
> >  		/* All the caller can do on PageWriteback is block */
> > -		if (PageWriteback(page))
> > +		if (PageWriteback(page)) {
> > +			SetPageReclaim(page);
> >  			return ret;
> > +		}
> >  
> 
> This hunk means that if async compaction (common for THP) encounters a page
> under writeback, it will still skip it but mark it for immediate reclaim
> after IO completes. This will have the impact that compaction causes an
> abnormally high number of pages to be reclaimed.

Sorry, I overlooked that; will drop it.  isolate_migratepages() walks
by PFN, and an opportunistic peek at the writeback pages should not
get them rotated (and disturb their LRU order) on I/O completion.

> > @@ -1608,6 +1621,8 @@ shrink_inactive_list(unsigned long nr_to
> >  	 */
> >  	if (nr_writeback && nr_writeback >= (nr_taken >> (DEF_PRIORITY-priority)))
> >  		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
> > +	if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)))
> > +		reclaim_wait(HZ/10);
> >  
> 
> We risk going to sleep too easily when USB-backed pages are at the end of
> the LRU list. Note that the nr_writeback check only goes to sleep if it
> detects that the underlying storage is also congested. In contrast, it
> will take very few PageReclaim pages at teh end of the LRU to cause the
> process to sleep when it instead should find clean pages to discard.

Right.

> If the intention is to avoid memcg going OOM prematurely, the
> nr_pgreclaim value needs to be treated at a higher level that records
> how many PageReclaim pages were encountered. If no progress was made
> because all the pages were PageReclaim, then throttle and return 1 to
> the page allocator where it will retry the allocation without going OOM
> after some pages have been cleaned and reclaimed.
 
Agreed in general, but I changed to this test for now, which is made a
bit more globally aware with the use of zone_dirty_ok().

memcg is ignored because it has no dirty accounting (Greg has the patch,
though). And even zone_dirty_ok() may be inaccurate for global reclaim,
if some memcgs are skipped by global reclaim due to the memcg soft limit.

But anyway, it's a handy hack for now. I'm looking into some more
radical changes to put most dirty/writeback pages into a standalone
LRU list (in addition to your LRU_IMMEDIATE, which I think is a good
idea) for addressing the clustered way they tend to lie in the
inactive LRU list.

+       if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
+           (!global_reclaim(sc) || !zone_dirty_ok(zone)))
+               reclaim_wait(HZ/10);

Thanks,
Fengguang
---
Subject: writeback: introduce the pageout work
Date: Thu Jul 29 14:41:19 CST 2010

This relays file pageout IOs to the flusher threads.

The ultimate target is to gracefully handle the LRU lists full of
dirty/writeback pages.

1) I/O efficiency

The flusher will piggyback the nearby ~10ms worth of dirty pages for I/O.

This takes advantage of the temporal/spatial locality in most workloads: the
nearby pages of one file are typically populated into the LRU at the same
time, and hence will likely be close to each other in the LRU list. Writing
them out in one shot helps clean more pages effectively for page reclaim.
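
(For illustration, a sketch of how the ~10ms chunk could be sized; the
helper name and clamp values are assumptions, though the 1-8MB range matches
the writearound size mentioned earlier in the thread:)

	/* size one pageout work to roughly 10ms worth of the bdi's bandwidth */
	static long pageout_work_pages(struct backing_dev_info *bdi)
	{
		long pages = bdi->avg_write_bandwidth / 100;	/* ~10ms */

		/* keep the writearound window within 1MB..8MB (4k pages) */
		return clamp(pages, 256L, 2048L);
	}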

2) OOM avoidance and scan rate control

Typically we do the LRU scan without rate control and quickly get enough
clean pages when the LRU lists are not full of dirty pages.

Or we can still get a number of freshly cleaned pages (moved to the LRU
tail by end_page_writeback()) when the queued pageout I/O completes within
tens of milliseconds.

However if the LRU list is small and full of dirty pages, it can be
fully scanned very quickly and we can go OOM before the flusher manages
to clean enough pages.

A simple yet reliable scheme is employed to avoid OOM and keep scan rate
in sync with the I/O rate:

	if (PageReclaim(page))
		congestion_wait(HZ/10);

PG_reclaim plays the key role. When a dirty page is encountered, we
queue I/O for it, set PG_reclaim and put it back to the LRU head.
So if a PG_reclaim page is encountered again, it means the dirty page
has not yet been cleaned by the flusher after a full zone scan. It
indicates we are scanning faster than I/O and should take a nap.

The runtime behavior on a fully dirtied small LRU list would be:
It will start with a quick scan of the list, queuing all pages for I/O.
Then the scan will be slowed down by the PG_reclaim pages *adaptively*
to match the I/O bandwidth.
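
In terms of the patch below, the vmscan/writeback interaction roughly looks
like this (a simplified sketch; locking, the dirty-page path, the
zone_dirty_ok() clause and the other bookkeeping are omitted):

	/* vmscan: shrink_page_list() */
	if (PageWriteback(page)) {
		if (PageReclaim(page))
			nr_pgreclaim++;		/* seen again: flusher is behind */
		else
			SetPageReclaim(page);	/* rotate to LRU tail at I/O completion */
	}

	/* vmscan: shrink_inactive_list() */
	if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY - priority)))
		reclaim_wait(HZ/10);		/* sleep until woken or timeout */

	/* mm/swap.c: pagevec_move_tail(), reached from end_page_writeback()
	 * via rotate_reclaimable_page() when a PG_reclaim page finishes I/O */
	reclaim_rotated();			/* wake the sleepers in reclaim_wait() */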

3) writeback work coordinations

To avoid memory allocations at page reclaim, a mempool for struct
wb_writeback_work is created.

wakeup_flusher_threads() is removed because it can easily delay the more
targeted pageout works and even exhaust the mempool reservations.  It was
also found to be I/O inefficient, as it frequently submits writeback works
with small ->nr_pages.

Background/periodic works will quit automatically, so as to clean the
pages under reclaim ASAP. However for now the sync work can still block
us for a long time.

Jan Kara: limit the search scope. Note that the limited search and work
pool is not a big problem: 1000 IOs in flight are typically more than
enough to saturate the disk. And the overheads of searching in the work
list didn't even show up in the perf report.

4) test case

Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):

	mkdir /cgroup/x
	echo 100M > /cgroup/x/memory.limit_in_bytes
	echo $$ > /cgroup/x/tasks

	for i in `seq 2`
	do
		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
	done
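
While the test runs, the stalls introduced by this patch can be observed
through the new writeback_reclaim_wait tracepoint (a usage sketch, assuming
debugfs is mounted at /sys/kernel/debug):

	echo 1 > /sys/kernel/debug/tracing/events/writeback/writeback_reclaim_wait/enable
	cat /sys/kernel/debug/tracing/trace_pipe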

Before patch, the dd tasks are quickly OOM killed.
After patch, they run well with reasonably good performance and overheads:

1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s

iostat -kx 1

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00  178.00     0.00 89568.00  1006.38    74.35  417.71   4.80  85.40
sda               0.00     2.00    0.00  191.00     0.00 94428.00   988.77    53.34  219.03   4.34  82.90
sda               0.00    20.00    0.00  196.00     0.00 97712.00   997.06    71.11  337.45   4.77  93.50
sda               0.00     5.00    0.00  175.00     0.00 84648.00   967.41    54.03  316.44   5.06  88.60
sda               0.00     0.00    0.00  186.00     0.00 92432.00   993.89    56.22  267.54   5.38 100.00
sda               0.00     1.00    0.00  183.00     0.00 90156.00   985.31    37.99  325.55   4.33  79.20
sda               0.00     0.00    0.00  175.00     0.00 88692.00  1013.62    48.70  218.43   4.69  82.10
sda               0.00     0.00    0.00  196.00     0.00 97528.00   995.18    43.38  236.87   5.10 100.00
sda               0.00     0.00    0.00  179.00     0.00 88648.00   990.48    45.83  285.43   5.59 100.00
sda               0.00     0.00    0.00  178.00     0.00 88500.00   994.38    28.28  158.89   4.99  88.80
sda               0.00     0.00    0.00  194.00     0.00 95852.00   988.16    32.58  167.39   5.15 100.00
sda               0.00     2.00    0.00  215.00     0.00 105996.00   986.01    41.72  201.43   4.65 100.00
sda               0.00     4.00    0.00  173.00     0.00 84332.00   974.94    50.48  260.23   5.76  99.60
sda               0.00     0.00    0.00  182.00     0.00 90312.00   992.44    36.83  212.07   5.49 100.00
sda               0.00     8.00    0.00  195.00     0.00 95940.50   984.01    50.18  221.06   5.13 100.00
sda               0.00     1.00    0.00  220.00     0.00 108852.00   989.56    40.99  202.68   4.55 100.00
sda               0.00     2.00    0.00  161.00     0.00 80384.00   998.56    37.19  268.49   6.21 100.00
sda               0.00     4.00    0.00  182.00     0.00 90830.00   998.13    50.58  239.77   5.49 100.00
sda               0.00     0.00    0.00  197.00     0.00 94877.00   963.22    36.68  196.79   5.08 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00   15.08   33.92    0.00   50.75
           0.25    0.00   14.54   35.09    0.00   50.13
           0.50    0.00   13.57   32.41    0.00   53.52
           0.50    0.00   11.28   36.84    0.00   51.38
           0.50    0.00   15.75   32.00    0.00   51.75
           0.50    0.00   10.50   34.00    0.00   55.00
           0.50    0.00   17.63   27.46    0.00   54.41
           0.50    0.00   15.08   30.90    0.00   53.52
           0.50    0.00   11.28   32.83    0.00   55.39
           0.75    0.00   16.79   26.82    0.00   55.64
           0.50    0.00   16.08   29.15    0.00   54.27
           0.50    0.00   13.50   30.50    0.00   55.50
           0.50    0.00   14.32   35.18    0.00   50.00
           0.50    0.00   12.06   33.92    0.00   53.52
           0.50    0.00   17.29   30.58    0.00   51.63
           0.50    0.00   15.08   29.65    0.00   54.77
           0.50    0.00   12.53   29.32    0.00   57.64
           0.50    0.00   15.29   31.83    0.00   52.38

The global dd numbers for comparison:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   143.09  684.48   5.29 100.00
sda               0.00     0.00    0.00  208.00     0.00 105480.00  1014.23   143.06  733.29   4.81 100.00
sda               0.00     0.00    0.00  161.00     0.00 81924.00  1017.69   141.71  757.79   6.21 100.00
sda               0.00     0.00    0.00  217.00     0.00 109580.00  1009.95   143.09  749.55   4.61 100.10
sda               0.00     0.00    0.00  187.00     0.00 94728.00  1013.13   144.31  773.67   5.35 100.00
sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   144.14  742.00   5.29 100.00
sda               0.00     0.00    0.00  177.00     0.00 90032.00  1017.31   143.32  656.59   5.65 100.00
sda               0.00     0.00    0.00  215.00     0.00 108640.00  1010.60   142.90  817.54   4.65 100.00
sda               0.00     2.00    0.00  166.00     0.00 83858.00  1010.34   143.64  808.61   6.02 100.00
sda               0.00     0.00    0.00  186.00     0.00 92813.00   997.99   141.18  736.95   5.38 100.00
sda               0.00     0.00    0.00  206.00     0.00 104456.00  1014.14   146.27  729.33   4.85 100.00
sda               0.00     0.00    0.00  213.00     0.00 107024.00  1004.92   143.25  705.70   4.69 100.00
sda               0.00     0.00    0.00  188.00     0.00 95748.00  1018.60   141.82  764.78   5.32 100.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.51    0.00   11.22   52.30    0.00   35.97
           0.25    0.00   10.15   52.54    0.00   37.06
           0.25    0.00    5.01   56.64    0.00   38.10
           0.51    0.00   15.15   43.94    0.00   40.40
           0.25    0.00   12.12   48.23    0.00   39.39
           0.51    0.00   11.20   53.94    0.00   34.35
           0.26    0.00    9.72   51.41    0.00   38.62
           0.76    0.00    9.62   50.63    0.00   38.99
           0.51    0.00   10.46   53.32    0.00   35.71
           0.51    0.00    9.41   51.91    0.00   38.17
           0.25    0.00   10.69   49.62    0.00   39.44
           0.51    0.00   12.21   52.67    0.00   34.61
           0.51    0.00   11.45   53.18    0.00   34.86

XXX: commit NFS unstable pages via write_inode()
XXX: the added congestion_wait() may be undesirable in some situations

CC: Jan Kara <jack@suse.cz>
CC: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
CC: Greg Thelen <gthelen@google.com>
CC: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c                |  169 ++++++++++++++++++++++++++++-
 include/linux/backing-dev.h      |    2 
 include/linux/writeback.h        |    4 
 include/trace/events/writeback.h |   19 ++-
 mm/backing-dev.c                 |   35 ++++++
 mm/swap.c                        |    1 
 mm/vmscan.c                      |   32 +++--
 7 files changed, 245 insertions(+), 17 deletions(-)

- move congestion_wait() out of the page lock: it's blocking btrfs lock_delalloc_pages()

--- linux.orig/include/linux/backing-dev.h	2012-02-14 20:11:21.000000000 +0800
+++ linux/include/linux/backing-dev.h	2012-02-15 12:34:24.000000000 +0800
@@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
 void set_bdi_congested(struct backing_dev_info *bdi, int sync);
 long congestion_wait(int sync, long timeout);
 long wait_iff_congested(struct zone *zone, int sync, long timeout);
+long reclaim_wait(long timeout);
+void reclaim_rotated(void);
 
 static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
 {
--- linux.orig/mm/backing-dev.c	2012-02-14 20:11:21.000000000 +0800
+++ linux/mm/backing-dev.c	2012-02-15 12:34:19.000000000 +0800
@@ -873,3 +873,38 @@ out:
 	return ret;
 }
 EXPORT_SYMBOL(wait_iff_congested);
+
+static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
+
+/**
+ * reclaim_wait - wait for some pages being rotated to the LRU tail
+ * @timeout: timeout in jiffies
+ *
+ * Wait until @timeout, or when some (typically PG_reclaim under writeback)
+ * pages rotated to the LRU so that page reclaim can make progress.
+ */
+long reclaim_wait(long timeout)
+{
+	long ret;
+	unsigned long start = jiffies;
+	DEFINE_WAIT(wait);
+
+	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
+	ret = io_schedule_timeout(timeout);
+	finish_wait(&reclaim_wqh, &wait);
+
+	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
+				     jiffies_to_usecs(jiffies - start));
+
+	return ret;
+}
+EXPORT_SYMBOL(reclaim_wait);
+
+void reclaim_rotated()
+{
+	wait_queue_head_t *wqh = &reclaim_wqh;
+
+	if (waitqueue_active(wqh))
+		wake_up(wqh);
+}
+
--- linux.orig/mm/swap.c	2012-02-14 20:11:21.000000000 +0800
+++ linux/mm/swap.c	2012-02-15 12:27:35.000000000 +0800
@@ -253,6 +253,7 @@ static void pagevec_move_tail(struct pag
 
 	pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, &pgmoved);
 	__count_vm_events(PGROTATED, pgmoved);
+	reclaim_rotated();
 }
 
 /*
--- linux.orig/mm/vmscan.c	2012-02-14 20:11:21.000000000 +0800
+++ linux/mm/vmscan.c	2012-02-16 17:23:17.000000000 +0800
@@ -767,7 +767,8 @@ static unsigned long shrink_page_list(st
 				      struct scan_control *sc,
 				      int priority,
 				      unsigned long *ret_nr_dirty,
-				      unsigned long *ret_nr_writeback)
+				      unsigned long *ret_nr_writeback,
+				      unsigned long *ret_nr_pgreclaim)
 {
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
@@ -776,6 +777,7 @@ static unsigned long shrink_page_list(st
 	unsigned long nr_congested = 0;
 	unsigned long nr_reclaimed = 0;
 	unsigned long nr_writeback = 0;
+	unsigned long nr_pgreclaim = 0;
 
 	cond_resched();
 
@@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
 
 		if (PageWriteback(page)) {
 			nr_writeback++;
+			if (PageReclaim(page))
+				nr_pgreclaim++;
+			else
+				SetPageReclaim(page);
 			/*
 			 * Synchronous reclaim cannot queue pages for
 			 * writeback due to the possibility of stack overflow
@@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
 			nr_dirty++;
 
 			/*
-			 * Only kswapd can writeback filesystem pages to
-			 * avoid risk of stack overflow but do not writeback
-			 * unless under significant pressure.
+			 * run into the visited page again: we are scanning
+			 * faster than the flusher can writeout dirty pages
 			 */
-			if (page_is_file_cache(page) &&
-					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
+			if (page_is_file_cache(page) && PageReclaim(page)) {
+				nr_pgreclaim++;
+				goto keep_locked;
+			}
+			if (page_is_file_cache(page) && mapping &&
+			    flush_inode_page(mapping, page, false) >= 0) {
 				/*
 				 * Immediately reclaim when written back.
 				 * Similar in principal to deactivate_page()
@@ -1028,6 +1037,7 @@ keep_lumpy:
 	count_vm_events(PGACTIVATE, pgactivate);
 	*ret_nr_dirty += nr_dirty;
 	*ret_nr_writeback += nr_writeback;
+	*ret_nr_pgreclaim += nr_pgreclaim;
 	return nr_reclaimed;
 }
 
@@ -1509,6 +1519,7 @@ shrink_inactive_list(unsigned long nr_to
 	unsigned long nr_file;
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
+	unsigned long nr_pgreclaim = 0;
 	isolate_mode_t reclaim_mode = ISOLATE_INACTIVE;
 	struct zone *zone = mz->zone;
 
@@ -1559,13 +1570,13 @@ shrink_inactive_list(unsigned long nr_to
 	spin_unlock_irq(&zone->lru_lock);
 
 	nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
-						&nr_dirty, &nr_writeback);
+				&nr_dirty, &nr_writeback, &nr_pgreclaim);
 
 	/* Check if we should syncronously wait for writeback */
 	if (should_reclaim_stall(nr_taken, nr_reclaimed, priority, sc)) {
 		set_reclaim_mode(priority, sc, true);
 		nr_reclaimed += shrink_page_list(&page_list, mz, sc,
-					priority, &nr_dirty, &nr_writeback);
+			priority, &nr_dirty, &nr_writeback, &nr_pgreclaim);
 	}
 
 	spin_lock_irq(&zone->lru_lock);
@@ -1608,6 +1619,9 @@ shrink_inactive_list(unsigned long nr_to
 	 */
 	if (nr_writeback && nr_writeback >= (nr_taken >> (DEF_PRIORITY-priority)))
 		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
+	if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
+	    (!global_reclaim(sc) || !zone_dirty_ok(zone)))
+		reclaim_wait(HZ/10);
 
 	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
 		zone_idx(zone),
@@ -2382,8 +2396,6 @@ static unsigned long do_try_to_free_page
 		 */
 		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
 		if (total_scanned > writeback_threshold) {
-			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
-						WB_REASON_TRY_TO_FREE_PAGES);
 			sc->may_writepage = 1;
 		}
 
--- linux.orig/fs/fs-writeback.c	2012-02-14 20:11:21.000000000 +0800
+++ linux/fs/fs-writeback.c	2012-02-15 12:27:35.000000000 +0800
@@ -41,6 +41,8 @@ struct wb_writeback_work {
 	long nr_pages;
 	struct super_block *sb;
 	unsigned long *older_than_this;
+	struct inode *inode;
+	pgoff_t offset;
 	enum writeback_sync_modes sync_mode;
 	unsigned int tagged_writepages:1;
 	unsigned int for_kupdate:1;
@@ -65,6 +67,27 @@ struct wb_writeback_work {
  */
 int nr_pdflush_threads;
 
+static mempool_t *wb_work_mempool;
+
+static void *wb_work_alloc(gfp_t gfp_mask, void *pool_data)
+{
+	/*
+	 * bdi_flush_inode_range() may be called on page reclaim
+	 */
+	if (current->flags & PF_MEMALLOC)
+		return NULL;
+
+	return kmalloc(sizeof(struct wb_writeback_work), gfp_mask);
+}
+
+static __init int wb_work_init(void)
+{
+	wb_work_mempool = mempool_create(1024,
+					 wb_work_alloc, mempool_kfree, NULL);
+	return wb_work_mempool ? 0 : -ENOMEM;
+}
+fs_initcall(wb_work_init);
+
 /**
  * writeback_in_progress - determine whether there is writeback in progress
  * @bdi: the device's backing_dev_info structure.
@@ -129,7 +152,7 @@ __bdi_start_writeback(struct backing_dev
 	 * This is WB_SYNC_NONE writeback, so if allocation fails just
 	 * wakeup the thread for old dirty data writeback
 	 */
-	work = kzalloc(sizeof(*work), GFP_ATOMIC);
+	work = mempool_alloc(wb_work_mempool, GFP_NOWAIT);
 	if (!work) {
 		if (bdi->wb.task) {
 			trace_writeback_nowork(bdi);
@@ -138,6 +161,7 @@ __bdi_start_writeback(struct backing_dev
 		return;
 	}
 
+	memset(work, 0, sizeof(*work));
 	work->sync_mode	= WB_SYNC_NONE;
 	work->nr_pages	= nr_pages;
 	work->range_cyclic = range_cyclic;
@@ -186,6 +210,125 @@ void bdi_start_background_writeback(stru
 	spin_unlock_bh(&bdi->wb_lock);
 }
 
+static bool extend_writeback_range(struct wb_writeback_work *work,
+				   pgoff_t offset,
+				   unsigned long write_around_pages)
+{
+	pgoff_t end = work->offset + work->nr_pages;
+
+	if (offset >= work->offset && offset < end)
+		return true;
+
+	/*
+	 * for sequential workloads with good locality, include up to 8 times
+	 * more data in one chunk
+	 */
+	if (work->nr_pages >= 8 * write_around_pages)
+		return false;
+
+	/* the unsigned comparison helps eliminate one compare */
+	if (work->offset - offset < write_around_pages) {
+		work->nr_pages += write_around_pages;
+		work->offset -= write_around_pages;
+		return true;
+	}
+
+	if (offset - end < write_around_pages) {
+		work->nr_pages += write_around_pages;
+		return true;
+	}
+
+	return false;
+}
+
+/*
+ * schedule writeback on a range of inode pages.
+ */
+static struct wb_writeback_work *
+bdi_flush_inode_range(struct backing_dev_info *bdi,
+		      struct inode *inode,
+		      pgoff_t offset,
+		      pgoff_t len,
+		      bool wait)
+{
+	struct wb_writeback_work *work;
+
+	if (!igrab(inode))
+		return ERR_PTR(-ENOENT);
+
+	work = mempool_alloc(wb_work_mempool, wait ? GFP_NOIO : GFP_NOWAIT);
+	if (!work) {
+		trace_printk("wb_work_mempool alloc fail\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	memset(work, 0, sizeof(*work));
+	work->sync_mode		= WB_SYNC_NONE;
+	work->inode		= inode;
+	work->offset		= offset;
+	work->nr_pages		= len;
+	work->reason		= WB_REASON_PAGEOUT;
+
+	bdi_queue_work(bdi, work);
+
+	return work;
+}
+
+/*
+ * Called by page reclaim code to flush the dirty page ASAP. Do write-around to
+ * improve IO throughput. The nearby pages will have good chance to reside in
+ * the same LRU list that vmscan is working on, and even close to each other
+ * inside the LRU list in the common case of sequential read/write.
+ *
+ * ret > 0: success, found/reused a previous writeback work
+ * ret = 0: success, allocated/queued a new writeback work
+ * ret < 0: failed
+ */
+long flush_inode_page(struct address_space *mapping,
+		      struct page *page,
+		      bool wait)
+{
+	struct backing_dev_info *bdi = mapping->backing_dev_info;
+	struct inode *inode = mapping->host;
+	struct wb_writeback_work *work;
+	unsigned long write_around_pages;
+	pgoff_t offset = page->index;
+	int i;
+	long ret = 0;
+
+	if (unlikely(!inode))
+		return -ENOENT;
+
+	/*
+	 * piggy back 8-15ms worth of data
+	 */
+	write_around_pages = bdi->avg_write_bandwidth + MIN_WRITEBACK_PAGES;
+	write_around_pages = rounddown_pow_of_two(write_around_pages) >> 6;
+
+	i = 1;
+	spin_lock_bh(&bdi->wb_lock);
+	list_for_each_entry_reverse(work, &bdi->work_list, list) {
+		if (work->inode != inode)
+			continue;
+		if (extend_writeback_range(work, offset, write_around_pages)) {
+			ret = i;
+			break;
+		}
+		if (i++ > 100)	/* limit search depth */
+			break;
+	}
+	spin_unlock_bh(&bdi->wb_lock);
+
+	if (!ret) {
+		offset = round_down(offset, write_around_pages);
+		work = bdi_flush_inode_range(bdi, inode,
+					     offset, write_around_pages, wait);
+		if (IS_ERR(work))
+			ret = PTR_ERR(work);
+	}
+	return ret;
+}
+
 /*
  * Remove the inode from the writeback list it is on.
  */
@@ -833,6 +976,23 @@ static unsigned long get_nr_dirty_pages(
 		get_nr_dirty_inodes();
 }
 
+static long wb_flush_inode(struct bdi_writeback *wb,
+			   struct wb_writeback_work *work)
+{
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_NONE,
+		.nr_to_write = LONG_MAX,
+		.range_start = work->offset << PAGE_CACHE_SHIFT,
+		.range_end = (work->offset + work->nr_pages - 1)
+						<< PAGE_CACHE_SHIFT,
+	};
+
+	do_writepages(work->inode->i_mapping, &wbc);
+	iput(work->inode);
+
+	return LONG_MAX - wbc.nr_to_write;
+}
+
 static long wb_check_background_flush(struct bdi_writeback *wb)
 {
 	if (over_bground_thresh(wb->bdi)) {
@@ -905,7 +1065,10 @@ long wb_do_writeback(struct bdi_writebac
 
 		trace_writeback_exec(bdi, work);
 
-		wrote += wb_writeback(wb, work);
+		if (work->inode)
+			wrote += wb_flush_inode(wb, work);
+		else
+			wrote += wb_writeback(wb, work);
 
 		/*
 		 * Notify the caller of completion if this is a synchronous
@@ -914,7 +1077,7 @@ long wb_do_writeback(struct bdi_writebac
 		if (work->done)
 			complete(work->done);
 		else
-			kfree(work);
+			mempool_free(work, wb_work_mempool);
 	}
 
 	/*
--- linux.orig/include/trace/events/writeback.h	2012-02-14 20:11:22.000000000 +0800
+++ linux/include/trace/events/writeback.h	2012-02-15 12:27:35.000000000 +0800
@@ -23,7 +23,7 @@
 
 #define WB_WORK_REASON							\
 		{WB_REASON_BACKGROUND,		"background"},		\
-		{WB_REASON_TRY_TO_FREE_PAGES,	"try_to_free_pages"},	\
+		{WB_REASON_PAGEOUT,		"pageout"},		\
 		{WB_REASON_SYNC,		"sync"},		\
 		{WB_REASON_PERIODIC,		"periodic"},		\
 		{WB_REASON_LAPTOP_TIMER,	"laptop_timer"},	\
@@ -45,6 +45,8 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		__field(int, range_cyclic)
 		__field(int, for_background)
 		__field(int, reason)
+		__field(unsigned long, ino)
+		__field(unsigned long, offset)
 	),
 	TP_fast_assign(
 		strncpy(__entry->name, dev_name(bdi->dev), 32);
@@ -55,9 +57,11 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		__entry->range_cyclic = work->range_cyclic;
 		__entry->for_background	= work->for_background;
 		__entry->reason = work->reason;
+		__entry->ino = work->inode ? work->inode->i_ino : 0;
+		__entry->offset = work->offset;
 	),
 	TP_printk("bdi %s: sb_dev %d:%d nr_pages=%ld sync_mode=%d "
-		  "kupdate=%d range_cyclic=%d background=%d reason=%s",
+		  "kupdate=%d range_cyclic=%d background=%d reason=%s ino=%lu offset=%lu",
 		  __entry->name,
 		  MAJOR(__entry->sb_dev), MINOR(__entry->sb_dev),
 		  __entry->nr_pages,
@@ -65,7 +69,9 @@ DECLARE_EVENT_CLASS(writeback_work_class
 		  __entry->for_kupdate,
 		  __entry->range_cyclic,
 		  __entry->for_background,
-		  __print_symbolic(__entry->reason, WB_WORK_REASON)
+		  __print_symbolic(__entry->reason, WB_WORK_REASON),
+		  __entry->ino,
+		  __entry->offset
 	)
 );
 #define DEFINE_WRITEBACK_WORK_EVENT(name) \
@@ -437,6 +443,13 @@ DEFINE_EVENT(writeback_congest_waited_te
 	TP_ARGS(usec_timeout, usec_delayed)
 );
 
+DEFINE_EVENT(writeback_congest_waited_template, writeback_reclaim_wait,
+
+	TP_PROTO(unsigned int usec_timeout, unsigned int usec_delayed),
+
+	TP_ARGS(usec_timeout, usec_delayed)
+);
+
 DECLARE_EVENT_CLASS(writeback_single_inode_template,
 
 	TP_PROTO(struct inode *inode,
--- linux.orig/include/linux/writeback.h	2012-02-14 20:11:21.000000000 +0800
+++ linux/include/linux/writeback.h	2012-02-15 12:27:35.000000000 +0800
@@ -40,7 +40,7 @@ enum writeback_sync_modes {
  */
 enum wb_reason {
 	WB_REASON_BACKGROUND,
-	WB_REASON_TRY_TO_FREE_PAGES,
+	WB_REASON_PAGEOUT,
 	WB_REASON_SYNC,
 	WB_REASON_PERIODIC,
 	WB_REASON_LAPTOP_TIMER,
@@ -94,6 +94,8 @@ long writeback_inodes_wb(struct bdi_writ
 				enum wb_reason reason);
 long wb_do_writeback(struct bdi_writeback *wb, int force_wait);
 void wakeup_flusher_threads(long nr_pages, enum wb_reason reason);
+long flush_inode_page(struct address_space *mapping, struct page *page,
+		      bool wait);
 
 /* writeback.h requires fs.h; it, too, is not included from here. */
 static inline void wait_on_inode(struct inode *inode)

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16  4:00                   ` Wu Fengguang
@ 2012-02-16 12:44                     ` Jan Kara
  2012-02-16 13:32                       ` Wu Fengguang
  2012-02-17 16:41                     ` Wu Fengguang
  1 sibling, 1 reply; 33+ messages in thread
From: Jan Kara @ 2012-02-16 12:44 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, Rik van Riel, Greg Thelen, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

On Thu 16-02-12 12:00:19, Wu Fengguang wrote:
> On Tue, Feb 14, 2012 at 02:29:50PM +0100, Jan Kara wrote:
> > > >   I wonder what happens if you run:
> > > >        mkdir /cgroup/x
> > > >        echo 100M > /cgroup/x/memory.limit_in_bytes
> > > >        echo $$ > /cgroup/x/tasks
> > > > 
> > > >        for (( i = 0; i < 2; i++ )); do
> > > >          mkdir /fs/d$i
> > > >          for (( j = 0; j < 5000; j++ )); do 
> > > >            dd if=/dev/zero of=/fs/d$i/f$j bs=1k count=50
> > > >          done &
> > > >        done
> > > 
> > > That's a very good case, thanks!
> > >  
> > > >   Because for small files the writearound logic won't help much...
> > > 
> > > Right, it also means the native background work cannot be more I/O
> > > efficient than the pageout works, except for the overheads of more
> > > work items..
> >   Yes, that's true.
> > 
> > > >   Also the number of work items queued might become interesting.
> > > 
> > > It turns out that the 1024 mempool reservations are not exhausted at
> > > all (the below patch as a trace_printk on alloc failure and it didn't
> > > trigger at all).
> > > 
> > > Here is the representative iostat lines on XFS (full "iostat -kx 1 20" log attached):
> > > 
> > > avg-cpu:  %user   %nice %system %iowait  %steal   %idle                                                                     
> > >            0.80    0.00    6.03    0.03    0.00   93.14                                                                     
> > >                                                                                                                             
> > > Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util                   
> > > sda               0.00   205.00    0.00  163.00     0.00 16900.00   207.36     4.09   21.63   1.88  30.70                   
> > > 
> > > The attached dirtied/written progress graph looks interesting.
> > > Although the iostat disk utilization is low, the "dirtied" progress
> > > line is pretty straight and there is not a single congestion_wait event
> > > in the trace log, which makes me wonder if there are some unknown
> > > blocking issues in the way.
> >   Interesting. I'd also expect we should block in reclaim path. How fast
> > can dd threads progress when there is no cgroup involved?
> 
> I tried running the dd tasks in global context with
> 
>         echo $((100<<20)) > /proc/sys/vm/dirty_bytes
> 
> and got mostly the same results on XFS:
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.85    0.00    8.88    0.00    0.00   90.26
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00   50.00     0.00 23036.00   921.44     9.59  738.02   7.38  36.90
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.95    0.00    8.95    0.00    0.00   90.11
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00   854.00    0.00   99.00     0.00 19552.00   394.99    34.14   87.98   3.82  37.80
  OK, so it seems that reclaiming pages in memcg reclaim acted as a natural
throttling similar to what balance_dirty_pages() does in the global case.


> Interestingly, ext4 shows comparable throughput, however is reporting
> near 100% disk utilization:
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.76    0.00    9.02    0.00    0.00   90.23
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00  317.00     0.00 20956.00   132.21    28.57   82.71   3.16 100.10
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.82    0.00    8.95    0.00    0.00   90.23
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00  402.00     0.00 24388.00   121.33    21.09   58.55   2.42  97.40
> 
>         avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                    0.82    0.00    8.99    0.00    0.00   90.19
> 
>         Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
>         sda               0.00     0.00    0.00  409.00     0.00 21996.00   107.56    15.25   36.74   2.30  94.10
  Average request size is smaller so maybe ext4 does more seeking.

> > > > Another common case to test - run 'slapadd' command in each cgroup to
> > > > create big LDAP database. That does pretty much random IO on a big mmaped
> > > > DB file.
> > > 
> > > I've not used this. Will it need some configuration and data feed?
> > > fio looks more handy to me for emulating mmap random IO.
> >   Yes, fio can generate random mmap IO. It's just that this is a real life
> > workload. So it is not completely random, it happens on several files and
> > is also interleaved with other memory allocations from DB. I can send you
> > the config files and data feed if you are interested.
> 
> I'm very interested, thank you!
  OK, I'll send it in private email...

> > > > > +/*
> > > > > + * schedule writeback on a range of inode pages.
> > > > > + */
> > > > > +static struct wb_writeback_work *
> > > > > +bdi_flush_inode_range(struct backing_dev_info *bdi,
> > > > > +		      struct inode *inode,
> > > > > +		      pgoff_t offset,
> > > > > +		      pgoff_t len,
> > > > > +		      bool wait)
> > > > > +{
> > > > > +	struct wb_writeback_work *work;
> > > > > +
> > > > > +	if (!igrab(inode))
> > > > > +		return ERR_PTR(-ENOENT);
> > > >   One technical note here: If the inode is deleted while it is queued, this
> > > > reference will keep it living until flusher thread gets to it. Then when
> > > > flusher thread puts its reference, the inode will get deleted in flusher
> > > > thread context. I don't see an immediate problem in that but it might be
> > > > surprising sometimes. Another problem I see is that if you try to
> > > > unmount the filesystem while the work item is queued, you'll get EBUSY for
> > > > no apparent reason (for userspace).
> > > 
> > > Yeah, we need to make umount work.
> >   The positive thing is that if the inode is reaped while the work item is
> > queued, we know all that needed to be done is done. So we don't really need
> > to pin the inode.
> 
> But I do need to make sure the *inode pointer does not point to some
> invalid memory at work exec time. Is this possible without raising
> ->i_count?
  I was thinking about it and what should work is that we have inode
reference in work item but in generic_shutdown_super() we go through
the worklist and drop all work items for superblock before calling
evict_inodes()...

								Honza

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16 12:44                     ` Jan Kara
@ 2012-02-16 13:32                       ` Wu Fengguang
  2012-02-16 14:06                         ` Wu Fengguang
  0 siblings, 1 reply; 33+ messages in thread
From: Wu Fengguang @ 2012-02-16 13:32 UTC (permalink / raw)
  To: Jan Kara
  Cc: Rik van Riel, Greg Thelen, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

On Thu, Feb 16, 2012 at 01:44:45PM +0100, Jan Kara wrote:
> On Thu 16-02-12 12:00:19, Wu Fengguang wrote:
> > On Tue, Feb 14, 2012 at 02:29:50PM +0100, Jan Kara wrote:

> > > > > > +/*
> > > > > > + * schedule writeback on a range of inode pages.
> > > > > > + */
> > > > > > +static struct wb_writeback_work *
> > > > > > +bdi_flush_inode_range(struct backing_dev_info *bdi,
> > > > > > +		      struct inode *inode,
> > > > > > +		      pgoff_t offset,
> > > > > > +		      pgoff_t len,
> > > > > > +		      bool wait)
> > > > > > +{
> > > > > > +	struct wb_writeback_work *work;
> > > > > > +
> > > > > > +	if (!igrab(inode))
> > > > > > +		return ERR_PTR(-ENOENT);
> > > > >   One technical note here: If the inode is deleted while it is queued, this
> > > > > reference will keep it living until flusher thread gets to it. Then when
> > > > > flusher thread puts its reference, the inode will get deleted in flusher
> > > > > thread context. I don't see an immediate problem in that but it might be
> > > > > surprising sometimes. Another problem I see is that if you try to
> > > > > unmount the filesystem while the work item is queued, you'll get EBUSY for
> > > > > no apparent reason (for userspace).
> > > > 
> > > > Yeah, we need to make umount work.
> > >   The positive thing is that if the inode is reaped while the work item is
> > > queued, we know all that needed to be done is done. So we don't really need
> > > to pin the inode.
> > 
> > But I do need to make sure the *inode pointer does not point to some
> > invalid memory at work exec time. Is this possible without raising
> > ->i_count?
>   I was thinking about it and what should work is that we have inode
> reference in work item but in generic_shutdown_super() we go through
> the worklist and drop all work items for superblock before calling
> evict_inodes()...

Good point!

This diff removes the works after the sync_filesystem(sb) call, after
which no more dirty pages are expected on that sb (otherwise the
umount would fail anyway), hence no more pageout works will be queued
for that sb.

+static void wb_free_work(struct wb_writeback_work *work)
+{
+	/*
+	 * Notify the caller of completion if this is a synchronous
+	 * work item, otherwise just free it.
+	 */
+	if (work->done)
+		complete(work->done);
+	else
+		mempool_free(work, wb_work_mempool);
+}
+
+/*
+ * Remove works for @sb; or if (@sb == NULL), remove all works on @bdi.
+ */
+void bdi_remove_works(struct backing_dev_info *bdi, struct super_block *sb)
+{
+	struct inode *inode = mapping->host;
+	struct wb_writeback_work *work;
+
+	spin_lock_bh(&bdi->wb_lock);
+	list_for_each_entry_safe(work, &bdi->work_list, list) {
+		if (work->inode && work->inode->i_sb == sb) {
+			iput(inode);
+		} else if (sb && work->sb != sb)
+			continue;
+
+		list_del_init(&work->list);
+		wb_free_work(work);
+	}
+	spin_unlock_bh(&bdi->wb_lock);
+}

--- linux.orig/fs/super.c	2012-02-16 21:08:09.000000000 +0800
+++ linux/fs/super.c	2012-02-16 21:22:19.000000000 +0800
@@ -389,6 +389,7 @@ void generic_shutdown_super(struct super
 
 		fsnotify_unmount_inodes(&sb->s_inodes);
 
+		bdi_remove_works(sb->s_bdi, sb);
 		evict_inodes(sb);
 
 		if (sop->put_super)

Thanks,
Fengguang

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16 13:32                       ` Wu Fengguang
@ 2012-02-16 14:06                         ` Wu Fengguang
  0 siblings, 0 replies; 33+ messages in thread
From: Wu Fengguang @ 2012-02-16 14:06 UTC (permalink / raw)
  To: Jan Kara
  Cc: Rik van Riel, Greg Thelen, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

On Thu, Feb 16, 2012 at 09:32:33PM +0800, Wu Fengguang wrote:
> On Thu, Feb 16, 2012 at 01:44:45PM +0100, Jan Kara wrote:
> > On Thu 16-02-12 12:00:19, Wu Fengguang wrote:
> > > On Tue, Feb 14, 2012 at 02:29:50PM +0100, Jan Kara wrote:
> 
> > > > > > > +/*
> > > > > > > + * schedule writeback on a range of inode pages.
> > > > > > > + */
> > > > > > > +static struct wb_writeback_work *
> > > > > > > +bdi_flush_inode_range(struct backing_dev_info *bdi,
> > > > > > > +		      struct inode *inode,
> > > > > > > +		      pgoff_t offset,
> > > > > > > +		      pgoff_t len,
> > > > > > > +		      bool wait)
> > > > > > > +{
> > > > > > > +	struct wb_writeback_work *work;
> > > > > > > +
> > > > > > > +	if (!igrab(inode))
> > > > > > > +		return ERR_PTR(-ENOENT);
> > > > > >   One technical note here: If the inode is deleted while it is queued, this
> > > > > > reference will keep it living until flusher thread gets to it. Then when
> > > > > > flusher thread puts its reference, the inode will get deleted in flusher
> > > > > > thread context. I don't see an immediate problem in that but it might be
> > > > > > surprising sometimes. Another problem I see is that if you try to
> > > > > > unmount the filesystem while the work item is queued, you'll get EBUSY for
> > > > > > no apparent reason (for userspace).
> > > > > 
> > > > > Yeah, we need to make umount work.
> > > >   The positive thing is that if the inode is reaped while the work item is
> > > > queued, we know all that needed to be done is done. So we don't really need
> > > > to pin the inode.
> > > 
> > > But I do need to make sure the *inode pointer does not point to some
> > > invalid memory at work exec time. Is this possible without raising
> > > ->i_count?
> >   I was thinking about it and what should work is that we have inode
> > reference in work item but in generic_shutdown_super() we go through
> > the worklist and drop all work items for superblock before calling
> > evict_inodes()...
> 
> Good point!
> 
> This diff removes the works after the sync_filesystem(sb) call, after
> which no more dirty pages are expected on that sb (otherwise the
> umount would fail anyway), hence no more pageout works will be queued
> for that sb.
> 
> +static void wb_free_work(struct wb_writeback_work *work)
> +{
> +	/*
> +	 * Notify the caller of completion if this is a synchronous
> +	 * work item, otherwise just free it.
> +	 */
> +	if (work->done)
> +		complete(work->done);
> +	else
> +		mempool_free(work, wb_work_mempool);
> +}
> +
> +/*
> + * Remove works for @sb; or if (@sb == NULL), remove all works on @bdi.
> + */
> +void bdi_remove_works(struct backing_dev_info *bdi, struct super_block *sb)
> +{
> +	struct inode *inode = mapping->host;
> +	struct wb_writeback_work *work;
> +
> +	spin_lock_bh(&bdi->wb_lock);
> +	list_for_each_entry_safe(work, &bdi->work_list, list) {
> +		if (work->inode && work->inode->i_sb == sb) {
> +			iput(inode);
> +		} else if (sb && work->sb != sb)
> +			continue;
> +
> +		list_del_init(&work->list);
> +		wb_free_work(work);
> +	}
> +	spin_unlock_bh(&bdi->wb_lock);
> +}

Sorry, this corrected function actually compiles:

+/* 
+ * Remove works for @sb; or if (@sb == NULL), remove all works on @bdi. 
+ */ 
+void bdi_remove_works(struct backing_dev_info *bdi, struct super_block *sb) 
+{ 
+       struct wb_writeback_work *work, *tmp; 
+       LIST_HEAD(works); 
+ 
+       spin_lock_bh(&bdi->wb_lock); 
+       list_for_each_entry_safe(work, tmp, &bdi->work_list, list) { 
+               if (sb) { 
+                       if (work->sb && work->sb != sb) 
+                               continue; 
+                       if (work->inode && work->inode->i_sb != sb) 
+                               continue; 
+               } 
+               list_move(&work->list, &works); 
+       } 
+       spin_unlock_bh(&bdi->wb_lock); 
+ 
+       while (!list_empty(&works)) { 
+               work = list_entry(works.next, 
+                                 struct wb_writeback_work, list); 
+               list_del_init(&work->list); 
+               if (work->inode) 
+                       iput(work->inode); 
+               wb_free_work(work); 
+       } 
+} 

> --- linux.orig/fs/super.c	2012-02-16 21:08:09.000000000 +0800
> +++ linux/fs/super.c	2012-02-16 21:22:19.000000000 +0800
> @@ -389,6 +389,7 @@ void generic_shutdown_super(struct super
>  
>  		fsnotify_unmount_inodes(&sb->s_inodes);
>  
> +		bdi_remove_works(sb->s_bdi, sb);
>  		evict_inodes(sb);
>  
>  		if (sop->put_super)
> 
> Thanks,
> Fengguang

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16  9:50               ` Wu Fengguang
@ 2012-02-16 17:31                 ` Mel Gorman
  2012-02-27 14:24                   ` Fengguang Wu
  0 siblings, 1 reply; 33+ messages in thread
From: Mel Gorman @ 2012-02-16 17:31 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Ying Han, hannes, KAMEZAWA Hiroyuki, Rik van Riel,
	Minchan Kim

On Thu, Feb 16, 2012 at 05:50:42PM +0800, Wu Fengguang wrote:
> On Tue, Feb 14, 2012 at 03:51:24PM +0000, Mel Gorman wrote:
> > On Tue, Feb 14, 2012 at 09:18:12PM +0800, Wu Fengguang wrote:
> > > > For the OOM problem, a more reasonable stopgap might be to identify when
> > > > a process is scanning a memcg at high priority and encountered all
> > > > PageReclaim with no forward progress and to congestion_wait() if that
> > > > situation occurs. A preferable way would be to wait until the flusher
> > > > wakes up a waiter on PageReclaim pages to be written out because we want
> > > > to keep moving way from congestion_wait() if at all possible.
> > > 
> > > Good points! Below is the more serious page reclaim changes.
> > > 
> > > The dirty/writeback pages may often come close to each other in the
> > > LRU list, so the local test during a 32-page scan may still trigger
> > > reclaim waits unnecessarily.
> > 
> > Yes, this is particularly the case when writing back to USB. It is not
> > unusual that all dirty pages under writeback are backed by USB and at the
> > end of the LRU. Right now what happens is that reclaimers see higher CPU
> > usage as they scan over these pages uselessly. If the wrong choice is
> > made on how to throttle, we'll see yet more variants of the "system
> > responsiveness drops when writing to USB".
> 
> Yes, USB is an important case to support.  I'd imagine the heavy USB
> writes typically happen in desktops and run *outside* of any memcg.

I would expect it's common that USB writes are outside a memcg.

> So they'll typically take <= 20% memory in the zone. As long as we
> start the PG_reclaim throttling only when above the 20% dirty
> threshold (ie. on zone_dirty_ok()), the USB case should be safe.
> 

It's not just the USB writer, it's the unrelated processes that are allocating
memory at the same time the writing happens. What we want to avoid is a
situation where something like firefox or evolution or even gnome-terminal
is performing a small read and gets either

a) starved for IO bandwidth and stalls (not the focus here, obviously)
b) enters page reclaim, finds PG_reclaim pages from the USB write and stalls

It's (b) we need to watch out for. I accept that this patch is heading
in the right direction and that the tracepoint can be used to identify
processes get throttled unfairly. Before merging, it'd be nice to hear of
such a test and include details in the changelog similar to the test case
in https://bugzilla.kernel.org/show_bug.cgi?id=31142 (a bug that lasted a
*long* time as it turned out, fixes merged for 3.3 with sync-light migration).

> > > Some global information on the percent
> > > of dirty/writeback pages in the LRU list may help. Anyway the added
> > > tests should still be much better than no protection.
> > > 
> > 
> > You can tell how many dirty pages and writeback pages are in the zone
> > already.
> 
> Right. I changed the test to
> 
> +       if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
> +           (!global_reclaim(sc) || !zone_dirty_ok(zone)))
> +               reclaim_wait(HZ/10);
> 
> And I'd prefer to use a higher threshold than the default 20% for the
> above zone_dirty_ok() test, so that when Johannes' zone dirty
> balancing does the job fine, PG_reclaim based page reclaim throttling
> won't happen at all.
> 

We'd also need to watch that we do not get throttled on small zones
like ZONE_DMA (which shouldn't happen but still). To detect this if
it happens, please consider including node and zone information in the
writeback_reclaim_wait tracepoint. The memcg people might want to be
able to see the memcg, which I guess could be made available as

node=[NID|memcg]
NID if zone >=0
memcg if zone == -1

Which is hacky but avoids creating two tracepoints.

> > > A global wait queue and reclaim_wait() is introduced. The waiters will
> > > be woken up when pages are rotated by end_page_writeback() or lru drain.
> > > 
> > > I have to say its effectiveness depends on the filesystem... ext4
> > > and btrfs do fluent IO completions, so reclaim_wait() works pretty
> > > well:
> > >               dd-14560 [017] ....  1360.894605: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
> > >               dd-14560 [017] ....  1360.904456: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=8000
> > >               dd-14560 [017] ....  1360.908293: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> > >               dd-14560 [017] ....  1360.923960: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
> > >               dd-14560 [017] ....  1360.927810: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> > >               dd-14560 [017] ....  1360.931656: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> > >               dd-14560 [017] ....  1360.943503: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
> > >               dd-14560 [017] ....  1360.953289: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=7000
> > >               dd-14560 [017] ....  1360.957177: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> > >               dd-14560 [017] ....  1360.972949: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
> > > 
> > > However XFS does IO completions in very large batches (there may be
> > > only several big IO completions in one second). So reclaim_wait()
> > > mostly ends up waiting for the full HZ/10 timeout:
> > > 
> > >               dd-4177  [008] ....   866.367661: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [010] ....   866.567583: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [012] ....   866.767458: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [013] ....   866.867419: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [008] ....   867.167266: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [010] ....   867.367168: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [012] ....   867.818950: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [013] ....   867.918905: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [013] ....   867.971657: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
> > >               dd-4177  [013] ....   867.971812: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=0
> > >               dd-4177  [008] ....   868.355700: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > >               dd-4177  [010] ....   868.700515: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > 
> > 
> > And where people will get hit by regressions in this area is writing to
> > vfat and in more rare cases ntfs on USB stick.
> 
> vfat IO completions seem to lie somewhere between ext4 and xfs:
> 
>            <...>-46385 [010] .... 143570.714470: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>            <...>-46385 [008] .... 143570.752391: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=12000
>            <...>-46385 [008] .... 143570.937327: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
>            <...>-46385 [010] .... 143571.160252: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>            <...>-46385 [011] .... 143571.286197: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>            <...>-46385 [008] .... 143571.329644: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
>            <...>-46385 [008] .... 143571.475433: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=18000
>            <...>-46385 [008] .... 143571.653461: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
>            <...>-46385 [008] .... 143571.839949: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=56000
>            <...>-46385 [010] .... 143572.060816: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>            <...>-46385 [011] .... 143572.185754: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>            <...>-46385 [008] .... 143572.212522: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=1000
>            <...>-46385 [008] .... 143572.217825: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
>            <...>-46385 [008] .... 143572.312395: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=91000
>            <...>-46385 [008] .... 143572.315122: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=1000
>            <...>-46385 [009] .... 143572.433630: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>            <...>-46385 [010] .... 143572.534569: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
>  

Ok. It's interesting to note that we are stalling a lot there - roughly
30ms every second. As long as it's the writer, that's fine. If it's
firefox, it will create bug reports :)

> > > <SNIP>
> > > @@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
> > >  
> > >  		if (PageWriteback(page)) {
> > >  			nr_writeback++;
> > > +			if (PageReclaim(page))
> > > +				nr_pgreclaim++;
> > > +			else
> > > +				SetPageReclaim(page);
> > >  			/*
> > 
> > This check is unexpected. We already SetPageReclaim when queuing pages for
> > IO from reclaim context and if dirty pages are encountered during the LRU
> > scan that cannot be queued for IO. How often is it that nr_pgreclaim !=
> > nr_writeback and by how much do they differ?
> 
> Quite often, I suspect. The pageout writeback works do 1-8MB of
> write-around, which may start I/O a bit earlier than the covered pages are
> encountered by page reclaim. ext4 forces a 128MB write chunk size, which
> further increases the odds.
> 

Ok, thanks for the clarification. Stick a wee comment on it please.

> > >  			 * Synchronous reclaim cannot queue pages for
> > >  			 * writeback due to the possibility of stack overflow
> > > @@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
> > >  			nr_dirty++;
> > >  
> > >  			/*
> > > -			 * Only kswapd can writeback filesystem pages to
> > > -			 * avoid risk of stack overflow but do not writeback
> > > -			 * unless under significant pressure.
> > > +			 * run into the visited page again: we are scanning
> > > +			 * faster than the flusher can writeout dirty pages
> > >  			 */
> > 
> > which in itself is not an abnormal condition. We get into this situation
> > when writing to USB. Dirty throttling stops too much memory getting dirtied
> > but that does not mean we should throttle instead of reclaiming clean pages.
> > 
> > That's why I worry that if this is aimed at fixing a memcg problem, it
> > will have the impact of making interactive performance on normal systems
> > worse.
> 
> You are right. This patch only addresses the pageout I/O efficiency
> and dirty throttling problems for a fully dirtied LRU. Next step, I'll
> think about the interactive performance problem for a less dirtied LRU.
> 

Ok, thanks.

> > <SNIP>
> >
> > If the intention is to avoid memcg going OOM prematurely, the
> > nr_pgreclaim value needs to be treated at a higher level that records
> > how many PageReclaim pages were encountered. If no progress was made
> > because all the pages were PageReclaim, then throttle and return 1 to
> > the page allocator where it will retry the allocation without going OOM
> > after some pages have been cleaned and reclaimed.
>  
> Agreed in general, but I changed to the test below for now, which is made
> a bit more globally aware with the use of zone_dirty_ok().
> 

Ok, sure. We know what to look out for and where unrelated regressions
might get introduced.

> memcg is ignored because it has no dirty accounting (Greg has the patch for
> that, though).  And even zone_dirty_ok() may be inaccurate for global
> reclaim, if some memcgs are skipped by global reclaim due to the memcg
> soft limit.
> 
> But anyway, it's a handy hack for now. I'm looking into some more
> radical changes to put most dirty/writeback pages into a standalone
> LRU list (in addition to your LRU_IMMEDIATE, which I think is a good
> idea) for addressing the clustered way they tend to lie in the
> inactive LRU list.
> 
> +       if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
> +           (!global_reclaim(sc) || !zone_dirty_ok(zone)))
> +               reclaim_wait(HZ/10);
> 

This should make it harder to get stalled. Your tracepoint should help
us catch if it happens unnecessarily.

> ---
> Subject: writeback: introduce the pageout work
> Date: Thu Jul 29 14:41:19 CST 2010
> 
> This relays file pageout IOs to the flusher threads.
> 
> The ultimate target is to gracefully handle the LRU lists full of
> dirty/writeback pages.
> 

It would be worth mentioning in the changelog that this is much more
important now that page reclaim generally does not writeout filesystem-backed
pages.

> 1) I/O efficiency
> 
> The flusher will piggy back the nearby ~10ms worth of dirty pages for I/O.
> 
> This takes advantage of the time/spatial locality in most workloads: the
> nearby pages of one file are typically populated into the LRU at the same
> time, hence will likely be close to each other in the LRU list. Writing
> them in one shot helps clean more pages effectively for page reclaim.
> 
> 2) OOM avoidance and scan rate control
> 
> Typically we do the LRU scan without rate control and quickly get enough
> clean pages as long as the LRU lists are not full of dirty pages.
> 
> Or we can still get a number of freshly cleaned pages (moved to LRU tail
> by end_page_writeback()) when the queued pageout I/O is completed within
> tens of milli-seconds.
> 
> However if the LRU list is small and full of dirty pages, it can be
> fully scanned very quickly and we can go OOM before the flusher manages
> to clean enough pages.
> 

It's worth pointing out here that generally this does not happen for global
reclaim which does dirty throttling but happens easily with memcg LRUs.

> A simple yet reliable scheme is employed to avoid OOM and keep scan rate
> in sync with the I/O rate:
> 
> 	if (PageReclaim(page))
> 		congestion_wait(HZ/10);
> 

This comment is stale now.

> PG_reclaim plays the key role. When a dirty page is encountered, we
> queue I/O for it,

This is misleading. The process that encounters the dirty page does
not queue the page for IO unless it is kswapd scanning at high priority
(currently anyway, your patch changes it). The process that finds the page
queues work for flusher threads that will queue the actual I/O for it at
some unknown time in the future.

> set PG_reclaim and put it back to the LRU head.
> So if a PG_reclaim page is encountered again, it means the dirty page
> has not yet been cleaned by the flusher after a full zone scan. It
> indicates we are scanning faster than I/O and should take a nap.
> 

This is also slightly misleading because the page can be encountered
after rescanning the inactive list, not necessarily a full zone scan but
it's a minor point.

> The runtime behavior on a fully dirtied small LRU list would be:
> It will start with a quick scan of the list, queuing all pages for I/O.
> Then the scan will be slowed down by the PG_reclaim pages *adaptively*
> to match the I/O bandwidth.
> 
> 3) writeback work coordinations
> 
> To avoid memory allocations at page reclaim, a mempool for struct
> wb_writeback_work is created.
> 
> wakeup_flusher_threads() is removed because it can easily delay the more
> targeted pageout works and even exhaust the mempool reservations.  It was
> also found to be I/O inefficient, as it frequently submits writeback works
> with small ->nr_pages.
> 
> Background/periodic works will quit automatically, so as to clean the
> pages under reclaim ASAP. However for now the sync work can still block
> us for long time.
> 
> Jan Kara: limit the search scope. Note that the limited search and work
> pool is not a big problem: 1000 IOs in flight are typically more than
> enough to saturate the disk. And the overheads of searching in the work
> list didn't even show up in the perf report.
> 
> 4) test case
> 
> Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):
> 
> 	mkdir /cgroup/x
> 	echo 100M > /cgroup/x/memory.limit_in_bytes
> 	echo $$ > /cgroup/x/tasks
> 
> 	for i in `seq 2`
> 	do
> 		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
> 	done
> 
> Before patch, the dd tasks are quickly OOM killed.
> After patch, they run well with reasonably good performance and overheads:
> 
> 1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
> 1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s
> 
> iostat -kx 1
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00    0.00  178.00     0.00 89568.00  1006.38    74.35  417.71   4.80  85.40
> sda               0.00     2.00    0.00  191.00     0.00 94428.00   988.77    53.34  219.03   4.34  82.90
> sda               0.00    20.00    0.00  196.00     0.00 97712.00   997.06    71.11  337.45   4.77  93.50
> sda               0.00     5.00    0.00  175.00     0.00 84648.00   967.41    54.03  316.44   5.06  88.60
> sda               0.00     0.00    0.00  186.00     0.00 92432.00   993.89    56.22  267.54   5.38 100.00
> sda               0.00     1.00    0.00  183.00     0.00 90156.00   985.31    37.99  325.55   4.33  79.20
> sda               0.00     0.00    0.00  175.00     0.00 88692.00  1013.62    48.70  218.43   4.69  82.10
> sda               0.00     0.00    0.00  196.00     0.00 97528.00   995.18    43.38  236.87   5.10 100.00
> sda               0.00     0.00    0.00  179.00     0.00 88648.00   990.48    45.83  285.43   5.59 100.00
> sda               0.00     0.00    0.00  178.00     0.00 88500.00   994.38    28.28  158.89   4.99  88.80
> sda               0.00     0.00    0.00  194.00     0.00 95852.00   988.16    32.58  167.39   5.15 100.00
> sda               0.00     2.00    0.00  215.00     0.00 105996.00   986.01    41.72  201.43   4.65 100.00
> sda               0.00     4.00    0.00  173.00     0.00 84332.00   974.94    50.48  260.23   5.76  99.60
> sda               0.00     0.00    0.00  182.00     0.00 90312.00   992.44    36.83  212.07   5.49 100.00
> sda               0.00     8.00    0.00  195.00     0.00 95940.50   984.01    50.18  221.06   5.13 100.00
> sda               0.00     1.00    0.00  220.00     0.00 108852.00   989.56    40.99  202.68   4.55 100.00
> sda               0.00     2.00    0.00  161.00     0.00 80384.00   998.56    37.19  268.49   6.21 100.00
> sda               0.00     4.00    0.00  182.00     0.00 90830.00   998.13    50.58  239.77   5.49 100.00
> sda               0.00     0.00    0.00  197.00     0.00 94877.00   963.22    36.68  196.79   5.08 100.00
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.25    0.00   15.08   33.92    0.00   50.75
>            0.25    0.00   14.54   35.09    0.00   50.13
>            0.50    0.00   13.57   32.41    0.00   53.52
>            0.50    0.00   11.28   36.84    0.00   51.38
>            0.50    0.00   15.75   32.00    0.00   51.75
>            0.50    0.00   10.50   34.00    0.00   55.00
>            0.50    0.00   17.63   27.46    0.00   54.41
>            0.50    0.00   15.08   30.90    0.00   53.52
>            0.50    0.00   11.28   32.83    0.00   55.39
>            0.75    0.00   16.79   26.82    0.00   55.64
>            0.50    0.00   16.08   29.15    0.00   54.27
>            0.50    0.00   13.50   30.50    0.00   55.50
>            0.50    0.00   14.32   35.18    0.00   50.00
>            0.50    0.00   12.06   33.92    0.00   53.52
>            0.50    0.00   17.29   30.58    0.00   51.63
>            0.50    0.00   15.08   29.65    0.00   54.77
>            0.50    0.00   12.53   29.32    0.00   57.64
>            0.50    0.00   15.29   31.83    0.00   52.38
> 
> The global dd numbers for comparison:
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   143.09  684.48   5.29 100.00
> sda               0.00     0.00    0.00  208.00     0.00 105480.00  1014.23   143.06  733.29   4.81 100.00
> sda               0.00     0.00    0.00  161.00     0.00 81924.00  1017.69   141.71  757.79   6.21 100.00
> sda               0.00     0.00    0.00  217.00     0.00 109580.00  1009.95   143.09  749.55   4.61 100.10
> sda               0.00     0.00    0.00  187.00     0.00 94728.00  1013.13   144.31  773.67   5.35 100.00
> sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   144.14  742.00   5.29 100.00
> sda               0.00     0.00    0.00  177.00     0.00 90032.00  1017.31   143.32  656.59   5.65 100.00
> sda               0.00     0.00    0.00  215.00     0.00 108640.00  1010.60   142.90  817.54   4.65 100.00
> sda               0.00     2.00    0.00  166.00     0.00 83858.00  1010.34   143.64  808.61   6.02 100.00
> sda               0.00     0.00    0.00  186.00     0.00 92813.00   997.99   141.18  736.95   5.38 100.00
> sda               0.00     0.00    0.00  206.00     0.00 104456.00  1014.14   146.27  729.33   4.85 100.00
> sda               0.00     0.00    0.00  213.00     0.00 107024.00  1004.92   143.25  705.70   4.69 100.00
> sda               0.00     0.00    0.00  188.00     0.00 95748.00  1018.60   141.82  764.78   5.32 100.00
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.51    0.00   11.22   52.30    0.00   35.97
>            0.25    0.00   10.15   52.54    0.00   37.06
>            0.25    0.00    5.01   56.64    0.00   38.10
>            0.51    0.00   15.15   43.94    0.00   40.40
>            0.25    0.00   12.12   48.23    0.00   39.39
>            0.51    0.00   11.20   53.94    0.00   34.35
>            0.26    0.00    9.72   51.41    0.00   38.62
>            0.76    0.00    9.62   50.63    0.00   38.99
>            0.51    0.00   10.46   53.32    0.00   35.71
>            0.51    0.00    9.41   51.91    0.00   38.17
>            0.25    0.00   10.69   49.62    0.00   39.44
>            0.51    0.00   12.21   52.67    0.00   34.61
>            0.51    0.00   11.45   53.18    0.00   34.86
> 
> XXX: commit NFS unstable pages via write_inode()
> XXX: the added congestion_wait() may be undesirable in some situations
> 

This second XXX may also not be redundant.

> CC: Jan Kara <jack@suse.cz>
> CC: Mel Gorman <mgorman@suse.de>
> Acked-by: Rik van Riel <riel@redhat.com>
> CC: Greg Thelen <gthelen@google.com>
> CC: Minchan Kim <minchan.kim@gmail.com>
> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> ---
>  fs/fs-writeback.c                |  169 ++++++++++++++++++++++++++++-
>  include/linux/backing-dev.h      |    2 
>  include/linux/writeback.h        |    4 
>  include/trace/events/writeback.h |   19 ++-
>  mm/backing-dev.c                 |   35 ++++++
>  mm/swap.c                        |    1 
>  mm/vmscan.c                      |   32 +++--
>  7 files changed, 245 insertions(+), 17 deletions(-)
> 
> - move congestion_wait() out of the page lock: it's blocking btrfs lock_delalloc_pages()
> 
> --- linux.orig/include/linux/backing-dev.h	2012-02-14 20:11:21.000000000 +0800
> +++ linux/include/linux/backing-dev.h	2012-02-15 12:34:24.000000000 +0800
> @@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
>  void set_bdi_congested(struct backing_dev_info *bdi, int sync);
>  long congestion_wait(int sync, long timeout);
>  long wait_iff_congested(struct zone *zone, int sync, long timeout);
> +long reclaim_wait(long timeout);
> +void reclaim_rotated(void);
>  
>  static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
>  {
> --- linux.orig/mm/backing-dev.c	2012-02-14 20:11:21.000000000 +0800
> +++ linux/mm/backing-dev.c	2012-02-15 12:34:19.000000000 +0800
> @@ -873,3 +873,38 @@ out:
>  	return ret;
>  }
>  EXPORT_SYMBOL(wait_iff_congested);
> +
> +static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
> +

Should this be declared on a per-NUMA-node basis to avoid a task throttled on one
node being woken up by activity on an unrelated node?  reclaim_rotated()
is called from a context that has a page, so looking up the waitqueue would
be easy. Grep for the places that initialise kswapd_wait and the initialisation
will be easier, although watch that the queue is woken if a node is
hot-removed.
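
As a rough illustration of that idea, the wakeup side could look something
like this, assuming the waitqueue is moved into pg_data_t next to kswapd_wait
(the field name reclaim_wait below is an assumption, not code from the patch):

	void reclaim_rotated(struct page *page)
	{
		/* wake only the reclaimers throttled on this page's node */
		wait_queue_head_t *wqh = &NODE_DATA(page_to_nid(page))->reclaim_wait;

		if (waitqueue_active(wqh))
			wake_up(wqh);
	}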

> +/**
> + * reclaim_wait - wait for some pages being rotated to the LRU tail
> + * @timeout: timeout in jiffies
> + *
> + * Wait until @timeout, or when some (typically PG_reclaim under writeback)
> + * pages rotated to the LRU so that page reclaim can make progress.
> + */
> +long reclaim_wait(long timeout)
> +{
> +	long ret;
> +	unsigned long start = jiffies;
> +	DEFINE_WAIT(wait);
> +
> +	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
> +	ret = io_schedule_timeout(timeout);
> +	finish_wait(&reclaim_wqh, &wait);
> +
> +	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
> +				     jiffies_to_usecs(jiffies - start));
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(reclaim_wait);
> +

Why do we export this? Only vmscan.c is calling it and I'm scratching my
head trying to figure out why a kernel module would want to call it.

> +void reclaim_rotated()
> +{

style nit

void reclaim_rotated(void)

> +	wait_queue_head_t *wqh = &reclaim_wqh;
> +
> +	if (waitqueue_active(wqh))
> +		wake_up(wqh);
> +}
> +
> --- linux.orig/mm/swap.c	2012-02-14 20:11:21.000000000 +0800
> +++ linux/mm/swap.c	2012-02-15 12:27:35.000000000 +0800
> @@ -253,6 +253,7 @@ static void pagevec_move_tail(struct pag
>  
>  	pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, &pgmoved);
>  	__count_vm_events(PGROTATED, pgmoved);
> +	reclaim_rotated();
>  }
>  
>  /*
> --- linux.orig/mm/vmscan.c	2012-02-14 20:11:21.000000000 +0800
> +++ linux/mm/vmscan.c	2012-02-16 17:23:17.000000000 +0800
> @@ -767,7 +767,8 @@ static unsigned long shrink_page_list(st
>  				      struct scan_control *sc,
>  				      int priority,
>  				      unsigned long *ret_nr_dirty,
> -				      unsigned long *ret_nr_writeback)
> +				      unsigned long *ret_nr_writeback,
> +				      unsigned long *ret_nr_pgreclaim)
>  {
>  	LIST_HEAD(ret_pages);
>  	LIST_HEAD(free_pages);
> @@ -776,6 +777,7 @@ static unsigned long shrink_page_list(st
>  	unsigned long nr_congested = 0;
>  	unsigned long nr_reclaimed = 0;
>  	unsigned long nr_writeback = 0;
> +	unsigned long nr_pgreclaim = 0;
>  
>  	cond_resched();
>  
> @@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
>  
>  		if (PageWriteback(page)) {
>  			nr_writeback++;
> +			if (PageReclaim(page))
> +				nr_pgreclaim++;
> +			else
> +				SetPageReclaim(page);
>  			/*
>  			 * Synchronous reclaim cannot queue pages for
>  			 * writeback due to the possibility of stack overflow
> @@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
>  			nr_dirty++;
>  
>  			/*
> -			 * Only kswapd can writeback filesystem pages to
> -			 * avoid risk of stack overflow but do not writeback
> -			 * unless under significant pressure.
> +			 * run into the visited page again: we are scanning
> +			 * faster than the flusher can writeout dirty pages
>  			 */
> -			if (page_is_file_cache(page) &&
> -					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
> +			if (page_is_file_cache(page) && PageReclaim(page)) {
> +				nr_pgreclaim++;
> +				goto keep_locked;
> +			}

This change means that kswapd is no longer doing any writeback from page
reclaim. Was that intended? It's not discussed in the changelog. I
know writeback from kswapd is poor in terms of IO performance but it's a
last resort for freeing a page when reclaim is in trouble. If we are to
disable it and depend 100% on the flusher threads, it should be in its
own patch for bisection reasons if nothing else.

> +			if (page_is_file_cache(page) && mapping &&
> +			    flush_inode_page(mapping, page, false) >= 0) {
>  				/*
>  				 * Immediately reclaim when written back.
>  				 * Similar in principal to deactivate_page()
> @@ -1028,6 +1037,7 @@ keep_lumpy:
>  	count_vm_events(PGACTIVATE, pgactivate);
>  	*ret_nr_dirty += nr_dirty;
>  	*ret_nr_writeback += nr_writeback;
> +	*ret_nr_pgreclaim += nr_pgreclaim;
>  	return nr_reclaimed;
>  }
>  
> @@ -1509,6 +1519,7 @@ shrink_inactive_list(unsigned long nr_to
>  	unsigned long nr_file;
>  	unsigned long nr_dirty = 0;
>  	unsigned long nr_writeback = 0;
> +	unsigned long nr_pgreclaim = 0;
>  	isolate_mode_t reclaim_mode = ISOLATE_INACTIVE;
>  	struct zone *zone = mz->zone;
>  
> @@ -1559,13 +1570,13 @@ shrink_inactive_list(unsigned long nr_to
>  	spin_unlock_irq(&zone->lru_lock);
>  
>  	nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
> -						&nr_dirty, &nr_writeback);
> +				&nr_dirty, &nr_writeback, &nr_pgreclaim);
>  
>  	/* Check if we should syncronously wait for writeback */
>  	if (should_reclaim_stall(nr_taken, nr_reclaimed, priority, sc)) {
>  		set_reclaim_mode(priority, sc, true);
>  		nr_reclaimed += shrink_page_list(&page_list, mz, sc,
> -					priority, &nr_dirty, &nr_writeback);
> +			priority, &nr_dirty, &nr_writeback, &nr_pgreclaim);
>  	}
>  
>  	spin_lock_irq(&zone->lru_lock);
> @@ -1608,6 +1619,9 @@ shrink_inactive_list(unsigned long nr_to
>  	 */
>  	if (nr_writeback && nr_writeback >= (nr_taken >> (DEF_PRIORITY-priority)))
>  		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
> +	if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
> +	    (!global_reclaim(sc) || !zone_dirty_ok(zone)))
> +		reclaim_wait(HZ/10);
>  

I prefer this but it would be nice if there was a comment explaining it,
or at least an expanded comment explaining how nr_writeback can lead to
wait_iff_congested() being called.
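
Something along these lines, say (wording is only a suggestion):

	/*
	 * If most of the pages taken off the LRU are still under writeback,
	 * the device is the bottleneck: back off in wait_iff_congested().
	 * If instead we keep meeting PG_reclaim pages, the flusher has not
	 * yet cleaned the pages queued on a previous pass: nap in
	 * reclaim_wait() until some of them are rotated back to the LRU
	 * tail, unless this is global reclaim and the zone is within its
	 * dirty limit.
	 */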

>  	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
>  		zone_idx(zone),
> @@ -2382,8 +2396,6 @@ static unsigned long do_try_to_free_page
>  		 */
>  		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
>  		if (total_scanned > writeback_threshold) {
> -			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
> -						WB_REASON_TRY_TO_FREE_PAGES);
>  			sc->may_writepage = 1;
>  		}
>  
> --- linux.orig/fs/fs-writeback.c	2012-02-14 20:11:21.000000000 +0800
> +++ linux/fs/fs-writeback.c	2012-02-15 12:27:35.000000000 +0800
> @@ -41,6 +41,8 @@ struct wb_writeback_work {
>  	long nr_pages;
>  	struct super_block *sb;
>  	unsigned long *older_than_this;
> +	struct inode *inode;
> +	pgoff_t offset;
>  	enum writeback_sync_modes sync_mode;
>  	unsigned int tagged_writepages:1;
>  	unsigned int for_kupdate:1;
> @@ -65,6 +67,27 @@ struct wb_writeback_work {
>   */
>  int nr_pdflush_threads;
>  
> +static mempool_t *wb_work_mempool;
> +
> +static void *wb_work_alloc(gfp_t gfp_mask, void *pool_data)
> +{
> +	/*
> +	 * bdi_flush_inode_range() may be called on page reclaim
> +	 */
> +	if (current->flags & PF_MEMALLOC)
> +		return NULL;
> +

This check is why I worry about kswapd being unable to write pages at
all. If the mempool is depleted for whatever reason, reclaim has no way
of telling the flushers what work is needed or waking them. Potentially,
we could be waiting a long time for pending flusher work to complete to
free up a slot.  I recognise it may not be bad in practice because the
pool is large and other work will be completing but it's why kswapd not
writing back pages should be in its own patch.

> +	return kmalloc(sizeof(struct wb_writeback_work), gfp_mask);
> +}
> +
> +static __init int wb_work_init(void)
> +{
> +	wb_work_mempool = mempool_create(1024,
> +					 wb_work_alloc, mempool_kfree, NULL);
> +	return wb_work_mempool ? 0 : -ENOMEM;
> +}
> +fs_initcall(wb_work_init);
> +
>  /**
>   * writeback_in_progress - determine whether there is writeback in progress
>   * @bdi: the device's backing_dev_info structure.
> @@ -129,7 +152,7 @@ __bdi_start_writeback(struct backing_dev
>  	 * This is WB_SYNC_NONE writeback, so if allocation fails just
>  	 * wakeup the thread for old dirty data writeback
>  	 */
> -	work = kzalloc(sizeof(*work), GFP_ATOMIC);
> +	work = mempool_alloc(wb_work_mempool, GFP_NOWAIT);
>  	if (!work) {
>  		if (bdi->wb.task) {
>  			trace_writeback_nowork(bdi);
> @@ -138,6 +161,7 @@ __bdi_start_writeback(struct backing_dev
>  		return;
>  	}
>  
> +	memset(work, 0, sizeof(*work));
>  	work->sync_mode	= WB_SYNC_NONE;
>  	work->nr_pages	= nr_pages;
>  	work->range_cyclic = range_cyclic;
> @@ -186,6 +210,125 @@ void bdi_start_background_writeback(stru
>  	spin_unlock_bh(&bdi->wb_lock);
>  }
>  
> +static bool extend_writeback_range(struct wb_writeback_work *work,
> +				   pgoff_t offset,
> +				   unsigned long write_around_pages)
> +{

comment on what this function is for and what the return values mean.

"returns true if the wb_writeback_work now encompasses the request"

or something
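
Expanded a little, that could read (suggested wording only):

	/*
	 * extend_writeback_range - try to cover @offset with an existing work
	 * @work: a queued pageout work for the same inode
	 * @offset: page index that page reclaim wants written out
	 * @write_around_pages: size of one write-around chunk
	 *
	 * Returns true if @work now encompasses @offset, either because it
	 * already did or because it was grown by one chunk towards it.
	 * Returns false if the caller should queue a new work instead.
	 */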

> +	pgoff_t end = work->offset + work->nr_pages;
> +
> +	if (offset >= work->offset && offset < end)
> +		return true;
> +

This does not ensure that the full span of
offset -> offset+write_around_pages is encompassed by work. All it
checks is that the start of the requested range is going to be handled.

I guess it's ok because the page reclaim cares about is covered and
avoids a situation where too much IO is being queued. It's unclear if
this is what you intended though because you check for too much IO being
queued in the next block.
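
To make that concrete (numbers are made up): with write_around_pages = 32 and
a work already covering page indexes [1024, 1088), a request for offset 1085
returns true from the first check even though pages 1088..1116 of the notional
offset -> offset+write_around_pages window stay uncovered; only the page that
reclaim actually cares about is guaranteed to be in the work's range.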

> +	/*
> +	 * for sequential workloads with good locality, include up to 8 times
> +	 * more data in one chunk
> +	 */
> +	if (work->nr_pages >= 8 * write_around_pages)
> +		return false;
> +
> +	/* the unsigned comparison helps eliminate one compare */
> +	if (work->offset - offset < write_around_pages) {
> +		work->nr_pages += write_around_pages;
> +		work->offset -= write_around_pages;
> +		return true;
> +	}
> +
> +	if (offset - end < write_around_pages) {
> +		work->nr_pages += write_around_pages;
> +		return true;
> +	}
> +
> +	return false;
> +}
> +
> +/*
> + * schedule writeback on a range of inode pages.
> + */
> +static struct wb_writeback_work *
> +bdi_flush_inode_range(struct backing_dev_info *bdi,
> +		      struct inode *inode,
> +		      pgoff_t offset,
> +		      pgoff_t len,
> +		      bool wait)
> +{
> +	struct wb_writeback_work *work;
> +
> +	if (!igrab(inode))
> +		return ERR_PTR(-ENOENT);
> +

Explain why the igrab is necessary. I think it's because we are calling
this from page reclaim context and the only thing pinning the
address_space is the page lock. If I'm right, it should be made clear
in the comment for bdi_flush_inode_range that this should only be called
from page reclaim context. Maybe even VM_BUG_ON if
!(current->flags & PF_MEMALLOC)?
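
Something like the below, perhaps (only a sketch of the suggestion above):

	/*
	 * Must be called from page reclaim context: the caller holds the
	 * page lock, which is the only thing pinning the address_space,
	 * and igrab() pins the inode for the async flusher work.
	 */
	VM_BUG_ON(!(current->flags & PF_MEMALLOC));

	if (!igrab(inode))
		return ERR_PTR(-ENOENT);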

> +	work = mempool_alloc(wb_work_mempool, wait ? GFP_NOIO : GFP_NOWAIT);
> +	if (!work) {
> +		trace_printk("wb_work_mempool alloc fail\n");
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	memset(work, 0, sizeof(*work));
> +	work->sync_mode		= WB_SYNC_NONE;
> +	work->inode		= inode;
> +	work->offset		= offset;
> +	work->nr_pages		= len;
> +	work->reason		= WB_REASON_PAGEOUT;
> +
> +	bdi_queue_work(bdi, work);
> +
> +	return work;
> +}
> +
> +/*
> + * Called by page reclaim code to flush the dirty page ASAP. Do write-around to
> + * improve IO throughput. The nearby pages will have good chance to reside in
> + * the same LRU list that vmscan is working on, and even close to each other
> + * inside the LRU list in the common case of sequential read/write.
> + *
> + * ret > 0: success, found/reused a previous writeback work
> + * ret = 0: success, allocated/queued a new writeback work
> + * ret < 0: failed
> + */
> +long flush_inode_page(struct address_space *mapping,
> +		      struct page *page,
> +		      bool wait)
> +{
> +	struct backing_dev_info *bdi = mapping->backing_dev_info;
> +	struct inode *inode = mapping->host;
> +	struct wb_writeback_work *work;
> +	unsigned long write_around_pages;
> +	pgoff_t offset = page->index;
> +	int i;
> +	long ret = 0;
> +
> +	if (unlikely(!inode))
> +		return -ENOENT;
> +
> +	/*
> +	 * piggy back 8-15ms worth of data
> +	 */
> +	write_around_pages = bdi->avg_write_bandwidth + MIN_WRITEBACK_PAGES;
> +	write_around_pages = rounddown_pow_of_two(write_around_pages) >> 6;
> +
> +	i = 1;
> +	spin_lock_bh(&bdi->wb_lock);
> +	list_for_each_entry_reverse(work, &bdi->work_list, list) {
> +		if (work->inode != inode)
> +			continue;
> +		if (extend_writeback_range(work, offset, write_around_pages)) {
> +			ret = i;
> +			break;
> +		}
> +		if (i++ > 100)	/* limit search depth */
> +			break;

No harm in moving Jan's comment on the depth-limited search to here, adding why
100 is as good a number as any to use.
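
For example (wording only a suggestion, lifted from the changelog above):

		/*
		 * Jan Kara: limit the search depth.  ~1000 IOs in flight are
		 * typically more than enough to saturate the disk, so scanning
		 * deeper into the work list buys nothing; 100 is as good a
		 * cut-off as any.
		 */
		if (i++ > 100)
			break;
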

> +	}
> +	spin_unlock_bh(&bdi->wb_lock);
> +
> +	if (!ret) {
> +		offset = round_down(offset, write_around_pages);
> +		work = bdi_flush_inode_range(bdi, inode,
> +					     offset, write_around_pages, wait);
> +		if (IS_ERR(work))
> +			ret = PTR_ERR(work);
> +	}
> +	return ret;
> +}
> +
>  /*
>   * Remove the inode from the writeback list it is on.
>   */
> @@ -833,6 +976,23 @@ static unsigned long get_nr_dirty_pages(
>  		get_nr_dirty_inodes();
>  }
>  
> +static long wb_flush_inode(struct bdi_writeback *wb,
> +			   struct wb_writeback_work *work)
> +{
> +	struct writeback_control wbc = {
> +		.sync_mode = WB_SYNC_NONE,
> +		.nr_to_write = LONG_MAX,
> +		.range_start = work->offset << PAGE_CACHE_SHIFT,
> +		.range_end = (work->offset + work->nr_pages - 1)
> +						<< PAGE_CACHE_SHIFT,
> +	};
> +
> +	do_writepages(work->inode->i_mapping, &wbc);
> +	iput(work->inode);
> +
> +	return LONG_MAX - wbc.nr_to_write;
> +}
> +
>  static long wb_check_background_flush(struct bdi_writeback *wb)
>  {
>  	if (over_bground_thresh(wb->bdi)) {
> @@ -905,7 +1065,10 @@ long wb_do_writeback(struct bdi_writebac
>  
>  		trace_writeback_exec(bdi, work);
>  
> -		wrote += wb_writeback(wb, work);
> +		if (work->inode)
> +			wrote += wb_flush_inode(wb, work);
> +		else
> +			wrote += wb_writeback(wb, work);
>  
>  		/*
>  		 * Notify the caller of completion if this is a synchronous
> @@ -914,7 +1077,7 @@ long wb_do_writeback(struct bdi_writebac
>  		if (work->done)
>  			complete(work->done);
>  		else
> -			kfree(work);
> +			mempool_free(work, wb_work_mempool);
>  	}
>  
>  	/*
> --- linux.orig/include/trace/events/writeback.h	2012-02-14 20:11:22.000000000 +0800
> +++ linux/include/trace/events/writeback.h	2012-02-15 12:27:35.000000000 +0800
> @@ -23,7 +23,7 @@
>  
>  #define WB_WORK_REASON							\
>  		{WB_REASON_BACKGROUND,		"background"},		\
> -		{WB_REASON_TRY_TO_FREE_PAGES,	"try_to_free_pages"},	\
> +		{WB_REASON_PAGEOUT,		"pageout"},		\
>  		{WB_REASON_SYNC,		"sync"},		\
>  		{WB_REASON_PERIODIC,		"periodic"},		\
>  		{WB_REASON_LAPTOP_TIMER,	"laptop_timer"},	\
> @@ -45,6 +45,8 @@ DECLARE_EVENT_CLASS(writeback_work_class
>  		__field(int, range_cyclic)
>  		__field(int, for_background)
>  		__field(int, reason)
> +		__field(unsigned long, ino)
> +		__field(unsigned long, offset)
>  	),
>  	TP_fast_assign(
>  		strncpy(__entry->name, dev_name(bdi->dev), 32);
> @@ -55,9 +57,11 @@ DECLARE_EVENT_CLASS(writeback_work_class
>  		__entry->range_cyclic = work->range_cyclic;
>  		__entry->for_background	= work->for_background;
>  		__entry->reason = work->reason;
> +		__entry->ino = work->inode ? work->inode->i_ino : 0;
> +		__entry->offset = work->offset;
>  	),
>  	TP_printk("bdi %s: sb_dev %d:%d nr_pages=%ld sync_mode=%d "
> -		  "kupdate=%d range_cyclic=%d background=%d reason=%s",
> +		  "kupdate=%d range_cyclic=%d background=%d reason=%s ino=%lu offset=%lu",
>  		  __entry->name,
>  		  MAJOR(__entry->sb_dev), MINOR(__entry->sb_dev),
>  		  __entry->nr_pages,
> @@ -65,7 +69,9 @@ DECLARE_EVENT_CLASS(writeback_work_class
>  		  __entry->for_kupdate,
>  		  __entry->range_cyclic,
>  		  __entry->for_background,
> -		  __print_symbolic(__entry->reason, WB_WORK_REASON)
> +		  __print_symbolic(__entry->reason, WB_WORK_REASON),
> +		  __entry->ino,
> +		  __entry->offset
>  	)
>  );
>  #define DEFINE_WRITEBACK_WORK_EVENT(name) \
> @@ -437,6 +443,13 @@ DEFINE_EVENT(writeback_congest_waited_te
>  	TP_ARGS(usec_timeout, usec_delayed)
>  );
>  
> +DEFINE_EVENT(writeback_congest_waited_template, writeback_reclaim_wait,
> +
> +	TP_PROTO(unsigned int usec_timeout, unsigned int usec_delayed),
> +
> +	TP_ARGS(usec_timeout, usec_delayed)
> +);
> +
>  DECLARE_EVENT_CLASS(writeback_single_inode_template,
>  
>  	TP_PROTO(struct inode *inode,
> --- linux.orig/include/linux/writeback.h	2012-02-14 20:11:21.000000000 +0800
> +++ linux/include/linux/writeback.h	2012-02-15 12:27:35.000000000 +0800
> @@ -40,7 +40,7 @@ enum writeback_sync_modes {
>   */
>  enum wb_reason {
>  	WB_REASON_BACKGROUND,
> -	WB_REASON_TRY_TO_FREE_PAGES,
> +	WB_REASON_PAGEOUT,
>  	WB_REASON_SYNC,
>  	WB_REASON_PERIODIC,
>  	WB_REASON_LAPTOP_TIMER,
> @@ -94,6 +94,8 @@ long writeback_inodes_wb(struct bdi_writ
>  				enum wb_reason reason);
>  long wb_do_writeback(struct bdi_writeback *wb, int force_wait);
>  void wakeup_flusher_threads(long nr_pages, enum wb_reason reason);
> +long flush_inode_page(struct address_space *mapping, struct page *page,
> +		      bool wait);
>  
>  /* writeback.h requires fs.h; it, too, is not included from here. */
>  static inline void wait_on_inode(struct inode *inode)

-- 
Mel Gorman
SUSE Labs


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16  4:00                   ` Wu Fengguang
  2012-02-16 12:44                     ` Jan Kara
@ 2012-02-17 16:41                     ` Wu Fengguang
  2012-02-20 14:00                       ` Jan Kara
  1 sibling, 1 reply; 33+ messages in thread
From: Wu Fengguang @ 2012-02-17 16:41 UTC (permalink / raw)
  To: Jan Kara
  Cc: Rik van Riel, Greg Thelen, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

> > > And I find the pageout works seem to have some problems with ext4.
> > > For example, this can be easily triggered with 10 dd tasks running
> > > inside the 100MB limited memcg:
> >   So journal thread is getting stuck while committing transaction. Most
> > likely waiting for some dd thread to stop a transaction so that commit can
> > proceed. The processes waiting in start_this_handle() are just secondary
> > effect resulting from the first problem. It might be interesting to get
> > stack traces of all bloked processes when the journal thread is stuck.
> 
> For completeness of discussion, citing your conclusion on my private
> data feed:
> 
> : We enter memcg reclaim from grab_cache_page_write_begin() and are
> : waiting in congestion_wait(). Because grab_cache_page_write_begin() is
> : called with transaction started, this blocks transaction from
> : committing and subsequently blocks all other activity on the
> : filesystem. The fact is this isn't new with your patches, just your
> : changes or the fact that we are running in a memory constrained cgroup
> : make this more visible.

Maybe I'm missing some deep FS restrictions, but can this page
allocation (and the one in ext4_write_begin) be moved before
ext4_journal_start()? So that the page reclaim can throttle the
__GFP_WRITE allocations at will.

---
 fs/ext4/inode.c |   22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

--- linux.orig/fs/ext4/inode.c	2012-02-18 00:10:27.000000000 +0800
+++ linux/fs/ext4/inode.c	2012-02-18 00:31:19.000000000 +0800
@@ -2398,38 +2398,38 @@ static int ext4_da_write_begin(struct fi
 	if (ext4_nonda_switch(inode->i_sb)) {
 		*fsdata = (void *)FALL_BACK_TO_NONDELALLOC;
 		return ext4_write_begin(file, mapping, pos,
 					len, flags, pagep, fsdata);
 	}
 	*fsdata = (void *)0;
 	trace_ext4_da_write_begin(inode, pos, len, flags);
 retry:
+	page = grab_cache_page_write_begin(mapping, index, flags);
+	if (!page) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	*pagep = page;
+
 	/*
 	 * With delayed allocation, we don't log the i_disksize update
 	 * if there is delayed block allocation. But we still need
 	 * to journalling the i_disksize update if writes to the end
 	 * of file which has an already mapped buffer.
 	 */
 	handle = ext4_journal_start(inode, 1);
 	if (IS_ERR(handle)) {
 		ret = PTR_ERR(handle);
+		unlock_page(page);
+		page_cache_release(page);
+		if (pos + len > inode->i_size)
+			truncate_inode_pages(inode->i_mapping, inode->i_size);
 		goto out;
 	}
-	/* We cannot recurse into the filesystem as the transaction is already
-	 * started */
-	flags |= AOP_FLAG_NOFS;
-
-	page = grab_cache_page_write_begin(mapping, index, flags);
-	if (!page) {
-		ext4_journal_stop(handle);
-		ret = -ENOMEM;
-		goto out;
-	}
-	*pagep = page;
 
 	ret = __block_write_begin(page, pos, len, ext4_da_get_block_prep);
 	if (ret < 0) {
 		unlock_page(page);
 		ext4_journal_stop(handle);
 		page_cache_release(page);
 		/*
 		 * block_write_begin may have instantiated a few blocks


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-17 16:41                     ` Wu Fengguang
@ 2012-02-20 14:00                       ` Jan Kara
  0 siblings, 0 replies; 33+ messages in thread
From: Jan Kara @ 2012-02-20 14:00 UTC (permalink / raw)
  To: Wu Fengguang
  Cc: Jan Kara, Rik van Riel, Greg Thelen, bsingharora, Hugh Dickins,
	Michal Hocko, linux-mm, Mel Gorman, Ying Han, hannes,
	KAMEZAWA Hiroyuki, Minchan Kim

On Sat 18-02-12 00:41:33, Wu Fengguang wrote:
> > > > And I find the pageout works seem to have some problems with ext4.
> > > > For example, this can be easily triggered with 10 dd tasks running
> > > > inside the 100MB limited memcg:
> > >   So journal thread is getting stuck while committing transaction. Most
> > > likely waiting for some dd thread to stop a transaction so that commit can
> > > proceed. The processes waiting in start_this_handle() are just secondary
> > > effect resulting from the first problem. It might be interesting to get
> > > stack traces of all bloked processes when the journal thread is stuck.
> > 
> > For completeness of discussion, citing your conclusion on my private
> > data feed:
> > 
> > : We enter memcg reclaim from grab_cache_page_write_begin() and are
> > : waiting in congestion_wait(). Because grab_cache_page_write_begin() is
> > : called with transaction started, this blocks transaction from
> > : committing and subsequently blocks all other activity on the
> > : filesystem. The fact is this isn't new with your patches, just your
> > : changes or the fact that we are running in a memory constrained cgroup
> > : make this more visible.
> 
> Maybe I'm missing some deep FS restrictions, but can this page
> allocation (and the one in ext4_write_begin) be moved before
> ext4_journal_start()? So that the page reclaim can throttle the
> __GFP_WRITE allocations at will.
  You are missing the fact that in this way, things would deadlock quickly.
Lock ordering of ext4 is 'transaction start' -> 'page lock' (so that
writepages may be efficient) and so you cannot really have it differently
here - you can think of transaction start - transaction end pair as a
lock - unlock for lock ordering purposes. You could play some tricks like
allocating the page without locking, starting a transaction, locking a page
and checking that everything is OK but it isn't really nice and when memory
pressure is big, the gain is questionable.
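
Roughly, that trick would look like the below (error handling trimmed,
untested sketch rather than a proposal):

	retry:
		page = grab_cache_page_write_begin(mapping, index, flags);
		if (!page)
			return -ENOMEM;
		unlock_page(page);	/* don't hold the page lock across journal start */

		handle = ext4_journal_start(inode, 1);
		if (IS_ERR(handle)) {
			page_cache_release(page);
			return PTR_ERR(handle);
		}

		lock_page(page);
		if (page->mapping != mapping) {
			/* truncated or reclaimed while unlocked: retry */
			unlock_page(page);
			page_cache_release(page);
			ext4_journal_stop(handle);
			goto retry;
		}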

								Honza
> ---
>  fs/ext4/inode.c |   22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 
> --- linux.orig/fs/ext4/inode.c	2012-02-18 00:10:27.000000000 +0800
> +++ linux/fs/ext4/inode.c	2012-02-18 00:31:19.000000000 +0800
> @@ -2398,38 +2398,38 @@ static int ext4_da_write_begin(struct fi
>  	if (ext4_nonda_switch(inode->i_sb)) {
>  		*fsdata = (void *)FALL_BACK_TO_NONDELALLOC;
>  		return ext4_write_begin(file, mapping, pos,
>  					len, flags, pagep, fsdata);
>  	}
>  	*fsdata = (void *)0;
>  	trace_ext4_da_write_begin(inode, pos, len, flags);
>  retry:
> +	page = grab_cache_page_write_begin(mapping, index, flags);
> +	if (!page) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +	*pagep = page;
> +
>  	/*
>  	 * With delayed allocation, we don't log the i_disksize update
>  	 * if there is delayed block allocation. But we still need
>  	 * to journalling the i_disksize update if writes to the end
>  	 * of file which has an already mapped buffer.
>  	 */
>  	handle = ext4_journal_start(inode, 1);
>  	if (IS_ERR(handle)) {
>  		ret = PTR_ERR(handle);
> +		unlock_page(page);
> +		page_cache_release(page);
> +		if (pos + len > inode->i_size)
> +			truncate_inode_pages(inode->i_mapping, inode->i_size);
>  		goto out;
>  	}
> -	/* We cannot recurse into the filesystem as the transaction is already
> -	 * started */
> -	flags |= AOP_FLAG_NOFS;
> -
> -	page = grab_cache_page_write_begin(mapping, index, flags);
> -	if (!page) {
> -		ext4_journal_stop(handle);
> -		ret = -ENOMEM;
> -		goto out;
> -	}
> -	*pagep = page;
>  
>  	ret = __block_write_begin(page, pos, len, ext4_da_get_block_prep);
>  	if (ret < 0) {
>  		unlock_page(page);
>  		ext4_journal_stop(handle);
>  		page_cache_release(page);
>  		/*
>  		 * block_write_begin may have instantiated a few blocks
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: reclaim the LRU lists full of dirty/writeback pages
  2012-02-16 17:31                 ` Mel Gorman
@ 2012-02-27 14:24                   ` Fengguang Wu
  0 siblings, 0 replies; 33+ messages in thread
From: Fengguang Wu @ 2012-02-27 14:24 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Greg Thelen, Jan Kara, bsingharora, Hugh Dickins, Michal Hocko,
	linux-mm, Ying Han, hannes, KAMEZAWA Hiroyuki, Rik van Riel,
	Minchan Kim

Mel,

Thanks a lot for the in-depth review!

On Thu, Feb 16, 2012 at 05:31:11PM +0000, Mel Gorman wrote:
> On Thu, Feb 16, 2012 at 05:50:42PM +0800, Wu Fengguang wrote:
> > On Tue, Feb 14, 2012 at 03:51:24PM +0000, Mel Gorman wrote:
> > > On Tue, Feb 14, 2012 at 09:18:12PM +0800, Wu Fengguang wrote:
> > > > > For the OOM problem, a more reasonable stopgap might be to identify when
> > > > > a process is scanning a memcg at high priority and encountered all
> > > > > PageReclaim with no forward progress and to congestion_wait() if that
> > > > > situation occurs. A preferable way would be to wait until the flusher
> > > > > wakes up a waiter on PageReclaim pages to be written out because we want
> > > > > to keep moving way from congestion_wait() if at all possible.
> > > > 
> > > > Good points! Below is the more serious page reclaim changes.
> > > > 
> > > > The dirty/writeback pages may often come close to each other in the
> > > > LRU list, so the local test during a 32-page scan may still trigger
> > > > reclaim waits unnecessarily.
> > > 
> > > Yes, this is particularly the case when writing back to USB. It is not
> > > unusual that all dirty pages under writeback are backed by USB and at the
> > > end of the LRU. Right now what happens is that reclaimers see higher CPU
> > > usage as they scan over these pages uselessly. If the wrong choice is
> > > made on how to throttle, we'll see yet more variants of the "system
> > > responsiveness drops when writing to USB".
> > 
> > Yes, USB is an important case to support.  I'd imagine the heavy USB
> > writes typically happen in desktops and run *outside* of any memcg.
> 
> I would expect it's common that USB writes are outside a memcg.
> 
> > So they'll typically take <= 20% memory in the zone. As long as we
> > start the PG_reclaim throttling only when above the 20% dirty
> > threshold (ie. on zone_dirty_ok()), the USB case should be safe.
> > 
> 
> It's not just the USB writer, it's unrelated processes that are allocating
> memory at the same time the writing happens. What we want to avoid is a
> situation where something like firefox or evolution or even gnome-terminal
> is performing a small read and gets either
> 
> a) starved for IO bandwidth and stalls (not the focus here obviously)
> b) enter page reclaim, finds PG_reclaim pages from the USB write and stalls
> 
> It's (b) we need to watch out for. I accept that this patch is heading
> in the right direction and that the tracepoint can be used to identify
> processes get throttled unfairly. Before merging, it'd be nice to hear of
> such a test and include details in the changelog similar to the test case
> in https://bugzilla.kernel.org/show_bug.cgi?id=31142 (a bug that lasted a
> *long* time as it turned out, fixes merged for 3.3 with sync-light migration).

To avoid PG_reclaim waiting for unrelated page allocators, I improved
the patch to

a) distinguish dirtier tasks and unrelated clean tasks by testing
   1) whether __GFP_WRITE is set
   2) whether current->nr_dirtied changed recently

b) put dirtier tasks to wait at lower dirty fill levels (~50%) and
   clean tasks at a much higher threshold (80%).
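
As a rough sketch of what a) and b) could look like in code (the helper name
should_throttle_dirty() appears further down in this mail; its body here,
including the recently_dirtied() and zone_dirty_fill_percent() helpers, is
guesswork rather than the actual patch):

	static bool should_throttle_dirty(struct mem_cgroup_zone *mz,
					  struct scan_control *sc, int priority)
	{
		/*
		 * a) dirtier task: doing a __GFP_WRITE allocation, or its
		 *    current->nr_dirtied grew recently (hypothetical helper)
		 */
		bool dirtier = (sc->gfp_mask & __GFP_WRITE) ||
			       recently_dirtied(current);

		/*
		 * b) dirtier tasks start waiting at ~50% dirty fill,
		 *    clean tasks (and kswapd) only at ~80%
		 */
		unsigned int thresh = dirtier ? 50 : 80;

		return zone_dirty_fill_percent(mz->zone) > thresh;	/* hypothetical helper */
	}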

To test the "desktop responsiveness on dd to USB" case, I run one dd
to USB (6-7MB/s) and another dd from a huge sparse file (857 MB/s) on
kernel 3.3.0-rc4 plus the to-be-posted dirty reclaim patch. The result
is perfect: I see not a single wait of any kind:

$ grep -r . /debug/vm
/debug/vm/nr_reclaim_throttle_clean:0
/debug/vm/nr_reclaim_throttle_kswapd:0
/debug/vm/nr_reclaim_throttle_recent_write:0
/debug/vm/nr_reclaim_throttle_write:0
/debug/vm/nr_congestion_wait:0
/debug/vm/nr_reclaim_wait_congested:0
/debug/vm/nr_reclaim_wait_writeback:0
/debug/vm/nr_migrate_wait_writeback:0

The above numbers are collected by a debug patch that accounts the
wait_on_page_writeback(), congestion_wait(), wait_iff_congested(), and
reclaim_wait() sleeps.

I went further to currently do
- startx
- start a number of applications and switching through all windows in a loop
- run "usemem 6G" (it's a 8G memory box)
- swapon

All goes pretty smoothly until swap is turned on. Upon swapon, the
system immediately becomes unusably slow. The wait numbers go up to

/debug/vm/nr_reclaim_throttle_clean:0
/debug/vm/nr_reclaim_throttle_kswapd:1771
/debug/vm/nr_reclaim_throttle_recent_write:0
/debug/vm/nr_reclaim_throttle_write:0
/debug/vm/nr_congestion_wait:33
/debug/vm/nr_reclaim_wait_congested:8626
/debug/vm/nr_reclaim_wait_writeback:0
/debug/vm/nr_migrate_wait_writeback:0

There are over 8k wait_iff_congested() sleeps and 1.7k reclaim_wait() sleeps
for kswapd. I'm a bit surprised to see nr_reclaim_throttle_write still
remains 0 while nr_reclaim_throttle_kswapd goes up..

The congestion_wait traces indicate kswapd ran into trouble at some point
and triggered a series of congestion_wait()s. This should in turn have decreased its
scan priority and triggered the reclaim_wait()s for kswapd.

           <...>-903   [011] ....  1230.617682: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [010] ....  1230.905387: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [000] ....  1231.016542: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [010] ....  1231.042926: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [010] ....  1231.131773: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [000] ....  1231.143408: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [010] ....  1231.165119: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [010] ....  1231.170291: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [010] ....  1231.273022: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [004] ....  1231.359771: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [005] ....  1231.457118: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [011] ....  1231.514525: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [011] ....  1231.515736: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [002] ....  1231.556657: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [002] ....  1231.591155: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [002] ....  1231.592753: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [000] ....  1231.597581: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [002] ....  1231.614586: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [000] ....  1231.639371: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [003] ....  1231.653001: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [003] ....  1231.666135: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [003] ....  1231.676705: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [004] ....  1231.715875: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [004] ....  1231.717754: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [004] ....  1231.718269: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322
           <...>-903   [004] ....  1231.796658: congestion_wait: balance_pgdat+0x477/0x58c kswapd+0x309/0x322

The wait_iff_congested() sleeps w/ swap may be explained by clustered
PG_writeback pages and clustered PG_dirty pages. When 32 such pages
come back-to-back in the LRU, wait_iff_congested() can easily be
triggered into sleeps. IMHO the conditions wait_iff_congested() is based
on are still kind of "local information", which is not very reliable.
The sleeps are not triggered w/o swap because there are now almost no
pageout() calls for file pages, hence no chance to increase
nr_congested and set ZONE_CONGESTED.

> > > > Some global information on the percent
> > > > of dirty/writeback pages in the LRU list may help. Anyway the added
> > > > tests should still be much better than no protection.
> > > > 
> > > 
> > > You can tell how many dirty pages and writeback pages are in the zone
> > > already.
> > 
> > Right. I changed the test to
> > 
> > +       if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
> > +           (!global_reclaim(sc) || !zone_dirty_ok(zone)))
> > +               reclaim_wait(HZ/10);
> > 
> > And I'd prefer to use a higher threshold than the default 20% for the
> > above zone_dirty_ok() test, so that when Johannes' zone dirty
> > balancing does the job fine, PG_reclaim based page reclaim throttling
> > won't happen at all.
> > 
> 
> We'd also need to watch that we do not get throttled on small zones
> like ZONE_DMA (which shouldn't happen but still). To detect this if
> it happens, please consider including node and zone information in the
> writeback_reclaim_wait tracepoint. The memcg people might want to be
> able to see the memcg which I guess could be available
> 
> node=[NID|memcg]
> NID if zone >=0
> memcg if zone == -1
> 
> Which is hacky but avoids creating two tracepoints.

I changed it to mm_vmscan_reclaim_wait w/ output format

+       TP_printk("usec_timeout=%u usec_delayed=%u memcg=%u node=%u zone=%u",

Since it's a rate-limited code path, there is no need to compact
the output. Actually memcg reclaims do have the node/zone ids which
might be helpful information.
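
For reference, the full event could look roughly like this (only the
TP_printk format above is from the actual change; the proto and fields
here are assumptions):

	TRACE_EVENT(mm_vmscan_reclaim_wait,

		TP_PROTO(unsigned int usec_timeout, unsigned int usec_delayed,
			 unsigned int memcg, unsigned int node, unsigned int zone),

		TP_ARGS(usec_timeout, usec_delayed, memcg, node, zone),

		TP_STRUCT__entry(
			__field(unsigned int,	usec_timeout)
			__field(unsigned int,	usec_delayed)
			__field(unsigned int,	memcg)
			__field(unsigned int,	node)
			__field(unsigned int,	zone)
		),

		TP_fast_assign(
			__entry->usec_timeout	= usec_timeout;
			__entry->usec_delayed	= usec_delayed;
			__entry->memcg		= memcg;
			__entry->node		= node;
			__entry->zone		= zone;
		),

		TP_printk("usec_timeout=%u usec_delayed=%u memcg=%u node=%u zone=%u",
			  __entry->usec_timeout, __entry->usec_delayed,
			  __entry->memcg, __entry->node, __entry->zone)
	);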

> > > > A global wait queue and reclaim_wait() is introduced. The waiters will
> > > > be wakeup when pages are rotated by end_page_writeback() or lru drain.
> > > > 
> > > > I have to say its effectiveness depends on the filesystem... ext4
> > > > and btrfs do fluent IO completions, so reclaim_wait() works pretty
> > > > well:
> > > >               dd-14560 [017] ....  1360.894605: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
> > > >               dd-14560 [017] ....  1360.904456: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=8000
> > > >               dd-14560 [017] ....  1360.908293: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> > > >               dd-14560 [017] ....  1360.923960: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
> > > >               dd-14560 [017] ....  1360.927810: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> > > >               dd-14560 [017] ....  1360.931656: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> > > >               dd-14560 [017] ....  1360.943503: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=10000
> > > >               dd-14560 [017] ....  1360.953289: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=7000
> > > >               dd-14560 [017] ....  1360.957177: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> > > >               dd-14560 [017] ....  1360.972949: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
> > > > 
> > > > However XFS does IO completions in very large batches (there may be
> > > > only several big IO completions in one second). So reclaim_wait()
> > > > mostly end up waiting to the full HZ/10 timeout:
> > > > 
> > > >               dd-4177  [008] ....   866.367661: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [010] ....   866.567583: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [012] ....   866.767458: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [013] ....   866.867419: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [008] ....   867.167266: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [010] ....   867.367168: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [012] ....   867.818950: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [013] ....   867.918905: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [013] ....   867.971657: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
> > > >               dd-4177  [013] ....   867.971812: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=0
> > > >               dd-4177  [008] ....   868.355700: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > >               dd-4177  [010] ....   868.700515: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> > > > 
> > > 
> > > And where people will get hit by regressions in this area is writing to
> > > vfat and in more rare cases ntfs on USB stick.
> > 
> > vfat IO completions seem to lie somewhere between ext4 and xfs:
> > 
> >            <...>-46385 [010] .... 143570.714470: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >            <...>-46385 [008] .... 143570.752391: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=12000
> >            <...>-46385 [008] .... 143570.937327: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
> >            <...>-46385 [010] .... 143571.160252: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >            <...>-46385 [011] .... 143571.286197: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >            <...>-46385 [008] .... 143571.329644: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=15000
> >            <...>-46385 [008] .... 143571.475433: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=18000
> >            <...>-46385 [008] .... 143571.653461: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=52000
> >            <...>-46385 [008] .... 143571.839949: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=56000
> >            <...>-46385 [010] .... 143572.060816: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >            <...>-46385 [011] .... 143572.185754: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >            <...>-46385 [008] .... 143572.212522: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=1000
> >            <...>-46385 [008] .... 143572.217825: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=2000
> >            <...>-46385 [008] .... 143572.312395: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=91000
> >            <...>-46385 [008] .... 143572.315122: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=1000
> >            <...>-46385 [009] .... 143572.433630: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >            <...>-46385 [010] .... 143572.534569: writeback_reclaim_wait: usec_timeout=100000 usec_delayed=100000
> >  
> 
> Ok. It's interesting to note that we are stalling a lot there - roughly
> 30ms every second. As long as it's the writer, that's fine. If it's
> firefox, it will create bug reports :)

Good news is, we can reasonably tell firefox from heavy writers with
the help of __GFP_WRITE/->nr_dirtied indicators  :-)

> > > > <SNIP>
> > > > @@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
> > > >  
> > > >  		if (PageWriteback(page)) {
> > > >  			nr_writeback++;
> > > > +			if (PageReclaim(page))
> > > > +				nr_pgreclaim++;
> > > > +			else
> > > > +				SetPageReclaim(page);
> > > >  			/*
> > > 
> > > This check is unexpected. We already SetPageReclaim when queuing pages for
> > > IO from reclaim context and if dirty pages are encountered during the LRU
> > > scan that cannot be queued for IO. How often is it that nr_pgreclaim !=
> > > nr_writeback and by how much do they differ?
> > 
> > Quite often, I suspect. The pageout writeback works do 1-8MB write
> > around which may start I/O a bit earlier than the covered pages are
> > encountered by page reclaim. ext4 forces 128MB write chunk size, which
> > further increases the opportunities.
> > 
> 
> Ok, thanks for the clarification. Stick a wee comment on it please.

OK, added a simple comment:

                if (PageWriteback(page)) {
                        nr_writeback++;
                        /*
+                        * The pageout works do write around which may put
+                        * close-to-LRU-tail pages to writeback a bit earlier.
+                        */
+                       if (PageReclaim(page))
+                               nr_pgreclaim++;
+                       else
+                               SetPageReclaim(page);

> > > >  			 * Synchronous reclaim cannot queue pages for
> > > >  			 * writeback due to the possibility of stack overflow
> > > > @@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
> > > >  			nr_dirty++;
> > > >  
> > > >  			/*
> > > > -			 * Only kswapd can writeback filesystem pages to
> > > > -			 * avoid risk of stack overflow but do not writeback
> > > > -			 * unless under significant pressure.
> > > > +			 * run into the visited page again: we are scanning
> > > > +			 * faster than the flusher can writeout dirty pages
> > > >  			 */
> > > 
> > > which in itself is not an abnormal condition. We get into this situation
> > > when writing to USB. Dirty throttling stops too much memory getting dirtied
> > > but that does not mean we should throttle instead of reclaiming clean pages.
> > > 
> > > That's why I worry that if this is aimed at fixing a memcg problem, it
> > > will have the impact of making interactive performance on normal systems
> > > worse.
> > 
> > You are right. This patch only addresses the pageout I/O efficiency
> > and dirty throttling problems for a fully dirtied LRU. Next step, I'll
> > think about the interactive performance problem for a less dirtied LRU.
> > 
> 
> Ok, thanks.
> 
> > > <SNIP>
> > >
> > > If the intention is to avoid memcg going OOM prematurely, the
> > > nr_pgreclaim value needs to be treated at a higher level that records
> > > how many PageReclaim pages were encountered. If no progress was made
> > > because all the pages were PageReclaim, then throttle and return 1 to
> > > the page allocator where it will retry the allocation without going OOM
> > > after some pages have been cleaned and reclaimed.
> >  
> > Agreed in general, but changed to this test for now, which is made a
> > bit more globally aware with the use of zone_dirty_ok().
> > 
> 
> Ok, sure. We know what to look out for and where unrelated regressions
> might get introduced.
> 
> > memcg is ignored due to no dirty accounting (Greg has the patch though).
> > And even zone_dirty_ok() may be inaccurate for the global reclaim, if
> > some memcgs are skipped by the global reclaim by the memcg soft limit.
> > 
> > But anyway, it's a handy hack for now. I'm looking into some more
> > radical changes to put most dirty/writeback pages into a standalone
> > LRU list (in addition to your LRU_IMMEDIATE, which I think is a good
> > idea) for addressing the clustered way they tend to lie in the
> > inactive LRU list.
> > 
> > +       if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
> > +           (!global_reclaim(sc) || !zone_dirty_ok(zone)))
> > +               reclaim_wait(HZ/10);
> > 
> 
> This should make it harder to get stalled. Your tracepoint should help
> us catch if it happens unnecessarily.
 
Yeah ;) FYI the above zone_dirty_ok() etc. tests are further expanded to  

+       /*
+        * If reclaimed any pages, it's safe from busy scanning. Otherwise when
+        * encountered PG_reclaim pages or writeback work queue congested,
+        * consider I/O throttling. Try to throttle only the dirtier tasks by
+        * honouring higher throttle thresholds to kswapd and other clean tasks.
+        */
+       if (!nr_reclaimed && nr_pgreclaim &&
+           should_throttle_dirty(mz, sc, priority))
+               reclaim_wait(mz, HZ/10);

I choose to not trust the exact number of nr_pgreclaim because it's
kind of *unreliable* local information. Instead, the reclaim priority
looks like a better indicator of a long contiguous run of hard-to-reclaim
pages which will be taken into account in should_throttle_dirty().

> > ---
> > Subject: writeback: introduce the pageout work
> > Date: Thu Jul 29 14:41:19 CST 2010
> > 
> > This relays file pageout IOs to the flusher threads.
> > 
> > The ultimate target is to gracefully handle the LRU lists full of
> > dirty/writeback pages.
> > 
> 
> It would be worth mentioning in the changelog that this is much more
> important now that page reclaim generally does not writeout filesystem-backed
> pages.

OK. Good point.

> > 1) I/O efficiency
> > 
> > The flusher will piggy back the nearby ~10ms worth of dirty pages for I/O.
> > 
> > This takes advantage of the time/spacial locality in most workloads: the
> > nearby pages of one file are typically populated into the LRU at the same
> > time, hence will likely be close to each other in the LRU list. Writing
> > them in one shot helps clean more pages effectively for page reclaim.
> > 
> > 2) OOM avoidance and scan rate control
> > 
> > Typically we do LRU scan w/o rate control and quickly get enough clean
> > pages for the LRU lists not full of dirty pages.
> > 
> > Or we can still get a number of freshly cleaned pages (moved to the LRU
> > tail by end_page_writeback()) when the queued pageout I/O completes within
> > tens of milliseconds.
> > 
> > However if the LRU list is small and full of dirty pages, it can be
> > quickly fully scanned and go OOM before the flusher manages to clean
> > enough pages.
> > 
> 
> It's worth pointing out here that generally this does not happen for global
> reclaim which does dirty throttling but happens easily with memcg LRUs.

Done.

> > A simple yet reliable scheme is employed to avoid OOM and keep scan rate
> > in sync with the I/O rate:
> > 
> > 	if (PageReclaim(page))
> > 		congestion_wait(HZ/10);
> > 
> 
> This comment is stale now.

Yeah, will update to

        if (encountered PG_reclaim pages)
                do some throttle wait

followed by introduction of the basic idea as well as further
considerations for interactive performance.

> > PG_reclaim plays the key role. When dirty pages are encountered, we
> > queue I/O for it,
> 
> This is misleading. The process that encounters the dirty page does
> not queue the page for IO unless it is kswapd scanning at high priority
> (currently anyway, your patch changes it). The process that finds the page
> queues work for flusher threads that will queue the actual I/O for it at
> some unknown time in the future.

You are right. Will change to "queue pageout writeback work for it".

> > set PG_reclaim and put it back to the LRU head.
> > So if PG_reclaim pages are encountered again, it means the dirty page
> > has not yet been cleaned by the flusher after a full zone scan. It
> > indicates we are scanning faster than the I/O and shall take a nap.
> > 
> 
> This is also slightly misleading because the page can be encountered
> after rescanning the inactive list, not necessarily a full zone scan but
> it's a minor point.

Yeah, "inactive list" is more precise. Updated.

> > The runtime behavior on a fully dirtied small LRU list would be:
> > It will start with a quick scan of the list, queuing all pages for I/O.
> > Then the scan will be slowed down by the PG_reclaim pages *adaptively*
> > to match the I/O bandwidth.
> > 
> > 3) writeback work coordinations
> > 
> > To avoid memory allocations at page reclaim, a mempool for struct
> > wb_writeback_work is created.
> > 
> > wakeup_flusher_threads() is removed because it can easily delay the more
> > targeted pageout works and even exhaust the mempool reservations. It's
> > also found to be I/O inefficient, as it frequently submits writeback works
> > with small ->nr_pages.
> > 
> > Background/periodic works will quit automatically, so as to clean the
> > pages under reclaim ASAP. However for now the sync work can still block
> > us for a long time.
> > 
> > Jan Kara: limit the search scope. Note that the limited search and work
> > pool is not a big problem: 1000 IOs in flight are typically more than
> > enough to saturate the disk. And the overheads of searching in the work
> > list didn't even show up in the perf report.
> > 
> > 4) test case
> > 
> > Run 2 dd tasks in a 100MB memcg (a very handy test case from Greg Thelen):
> > 
> > 	mkdir /cgroup/x
> > 	echo 100M > /cgroup/x/memory.limit_in_bytes
> > 	echo $$ > /cgroup/x/tasks
> > 
> > 	for i in `seq 2`
> > 	do
> > 		dd if=/dev/zero of=/fs/f$i bs=1k count=1M &
> > 	done
> > 
> > Before patch, the dd tasks are quickly OOM killed.
> > After patch, they run well with reasonably good performance and overheads:
> > 
> > 1073741824 bytes (1.1 GB) copied, 22.2196 s, 48.3 MB/s
> > 1073741824 bytes (1.1 GB) copied, 22.4675 s, 47.8 MB/s
> > 
> > iostat -kx 1
> > 
> > Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> > sda               0.00     0.00    0.00  178.00     0.00 89568.00  1006.38    74.35  417.71   4.80  85.40
> > sda               0.00     2.00    0.00  191.00     0.00 94428.00   988.77    53.34  219.03   4.34  82.90
> > sda               0.00    20.00    0.00  196.00     0.00 97712.00   997.06    71.11  337.45   4.77  93.50
> > sda               0.00     5.00    0.00  175.00     0.00 84648.00   967.41    54.03  316.44   5.06  88.60
> > sda               0.00     0.00    0.00  186.00     0.00 92432.00   993.89    56.22  267.54   5.38 100.00
> > sda               0.00     1.00    0.00  183.00     0.00 90156.00   985.31    37.99  325.55   4.33  79.20
> > sda               0.00     0.00    0.00  175.00     0.00 88692.00  1013.62    48.70  218.43   4.69  82.10
> > sda               0.00     0.00    0.00  196.00     0.00 97528.00   995.18    43.38  236.87   5.10 100.00
> > sda               0.00     0.00    0.00  179.00     0.00 88648.00   990.48    45.83  285.43   5.59 100.00
> > sda               0.00     0.00    0.00  178.00     0.00 88500.00   994.38    28.28  158.89   4.99  88.80
> > sda               0.00     0.00    0.00  194.00     0.00 95852.00   988.16    32.58  167.39   5.15 100.00
> > sda               0.00     2.00    0.00  215.00     0.00 105996.00   986.01    41.72  201.43   4.65 100.00
> > sda               0.00     4.00    0.00  173.00     0.00 84332.00   974.94    50.48  260.23   5.76  99.60
> > sda               0.00     0.00    0.00  182.00     0.00 90312.00   992.44    36.83  212.07   5.49 100.00
> > sda               0.00     8.00    0.00  195.00     0.00 95940.50   984.01    50.18  221.06   5.13 100.00
> > sda               0.00     1.00    0.00  220.00     0.00 108852.00   989.56    40.99  202.68   4.55 100.00
> > sda               0.00     2.00    0.00  161.00     0.00 80384.00   998.56    37.19  268.49   6.21 100.00
> > sda               0.00     4.00    0.00  182.00     0.00 90830.00   998.13    50.58  239.77   5.49 100.00
> > sda               0.00     0.00    0.00  197.00     0.00 94877.00   963.22    36.68  196.79   5.08 100.00
> > 
> > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >            0.25    0.00   15.08   33.92    0.00   50.75
> >            0.25    0.00   14.54   35.09    0.00   50.13
> >            0.50    0.00   13.57   32.41    0.00   53.52
> >            0.50    0.00   11.28   36.84    0.00   51.38
> >            0.50    0.00   15.75   32.00    0.00   51.75
> >            0.50    0.00   10.50   34.00    0.00   55.00
> >            0.50    0.00   17.63   27.46    0.00   54.41
> >            0.50    0.00   15.08   30.90    0.00   53.52
> >            0.50    0.00   11.28   32.83    0.00   55.39
> >            0.75    0.00   16.79   26.82    0.00   55.64
> >            0.50    0.00   16.08   29.15    0.00   54.27
> >            0.50    0.00   13.50   30.50    0.00   55.50
> >            0.50    0.00   14.32   35.18    0.00   50.00
> >            0.50    0.00   12.06   33.92    0.00   53.52
> >            0.50    0.00   17.29   30.58    0.00   51.63
> >            0.50    0.00   15.08   29.65    0.00   54.77
> >            0.50    0.00   12.53   29.32    0.00   57.64
> >            0.50    0.00   15.29   31.83    0.00   52.38
> > 
> > The global dd numbers for comparison:
> > 
> > Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> > sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   143.09  684.48   5.29 100.00
> > sda               0.00     0.00    0.00  208.00     0.00 105480.00  1014.23   143.06  733.29   4.81 100.00
> > sda               0.00     0.00    0.00  161.00     0.00 81924.00  1017.69   141.71  757.79   6.21 100.00
> > sda               0.00     0.00    0.00  217.00     0.00 109580.00  1009.95   143.09  749.55   4.61 100.10
> > sda               0.00     0.00    0.00  187.00     0.00 94728.00  1013.13   144.31  773.67   5.35 100.00
> > sda               0.00     0.00    0.00  189.00     0.00 95752.00  1013.25   144.14  742.00   5.29 100.00
> > sda               0.00     0.00    0.00  177.00     0.00 90032.00  1017.31   143.32  656.59   5.65 100.00
> > sda               0.00     0.00    0.00  215.00     0.00 108640.00  1010.60   142.90  817.54   4.65 100.00
> > sda               0.00     2.00    0.00  166.00     0.00 83858.00  1010.34   143.64  808.61   6.02 100.00
> > sda               0.00     0.00    0.00  186.00     0.00 92813.00   997.99   141.18  736.95   5.38 100.00
> > sda               0.00     0.00    0.00  206.00     0.00 104456.00  1014.14   146.27  729.33   4.85 100.00
> > sda               0.00     0.00    0.00  213.00     0.00 107024.00  1004.92   143.25  705.70   4.69 100.00
> > sda               0.00     0.00    0.00  188.00     0.00 95748.00  1018.60   141.82  764.78   5.32 100.00
> > 
> > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >            0.51    0.00   11.22   52.30    0.00   35.97
> >            0.25    0.00   10.15   52.54    0.00   37.06
> >            0.25    0.00    5.01   56.64    0.00   38.10
> >            0.51    0.00   15.15   43.94    0.00   40.40
> >            0.25    0.00   12.12   48.23    0.00   39.39
> >            0.51    0.00   11.20   53.94    0.00   34.35
> >            0.26    0.00    9.72   51.41    0.00   38.62
> >            0.76    0.00    9.62   50.63    0.00   38.99
> >            0.51    0.00   10.46   53.32    0.00   35.71
> >            0.51    0.00    9.41   51.91    0.00   38.17
> >            0.25    0.00   10.69   49.62    0.00   39.44
> >            0.51    0.00   12.21   52.67    0.00   34.61
> >            0.51    0.00   11.45   53.18    0.00   34.86
> > 
> > XXX: commit NFS unstable pages via write_inode()
> > XXX: the added congestion_wait() may be undesirable in some situations
> > 
> 
> This second XXX may also now be redundant.

Sure. Removed.

> > CC: Jan Kara <jack@suse.cz>
> > CC: Mel Gorman <mgorman@suse.de>
> > Acked-by: Rik van Riel <riel@redhat.com>
> > CC: Greg Thelen <gthelen@google.com>
> > CC: Minchan Kim <minchan.kim@gmail.com>
> > Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> > ---
> >  fs/fs-writeback.c                |  169 ++++++++++++++++++++++++++++-
> >  include/linux/backing-dev.h      |    2 
> >  include/linux/writeback.h        |    4 
> >  include/trace/events/writeback.h |   19 ++-
> >  mm/backing-dev.c                 |   35 ++++++
> >  mm/swap.c                        |    1 
> >  mm/vmscan.c                      |   32 +++--
> >  7 files changed, 245 insertions(+), 17 deletions(-)
> > 
> > - move congestion_wait() out of the page lock: it's blocking btrfs lock_delalloc_pages()
> > 
> > --- linux.orig/include/linux/backing-dev.h	2012-02-14 20:11:21.000000000 +0800
> > +++ linux/include/linux/backing-dev.h	2012-02-15 12:34:24.000000000 +0800
> > @@ -304,6 +304,8 @@ void clear_bdi_congested(struct backing_
> >  void set_bdi_congested(struct backing_dev_info *bdi, int sync);
> >  long congestion_wait(int sync, long timeout);
> >  long wait_iff_congested(struct zone *zone, int sync, long timeout);
> > +long reclaim_wait(long timeout);
> > +void reclaim_rotated(void);
> >  
> >  static inline bool bdi_cap_writeback_dirty(struct backing_dev_info *bdi)
> >  {
> > --- linux.orig/mm/backing-dev.c	2012-02-14 20:11:21.000000000 +0800
> > +++ linux/mm/backing-dev.c	2012-02-15 12:34:19.000000000 +0800
> > @@ -873,3 +873,38 @@ out:
> >  	return ret;
> >  }
> >  EXPORT_SYMBOL(wait_iff_congested);
> > +
> > +static DECLARE_WAIT_QUEUE_HEAD(reclaim_wqh);
> > +
> 
> Should this be declared on a per-NUMA node basis to avoid throttling on one
> node being woken up by activity on an unrelated node?  reclaim_rotated()
> is called from a context that has a page so looking up the waitqueue would
> be easy. Grep for the places that initialise kswapd_wait and the
> initialisation code will be easier, although watch that if a node is
> hot-removed the queue is woken.

OK I've changed reclaim_wqh to per-NUMA node wqh. It may be further
expanded to per-memcg wqh in a similar way.

But note that it will now only wake the node corresponding to one
random page in the pagevec. It should work well enough in practice,
though.

> > +/**
> > + * reclaim_wait - wait for some pages being rotated to the LRU tail
> > + * @timeout: timeout in jiffies
> > + *
> > + * Wait until @timeout, or when some (typically PG_reclaim under writeback)
> > + * pages rotated to the LRU so that page reclaim can make progress.
> > + */
> > +long reclaim_wait(long timeout)
> > +{
> > +	long ret;
> > +	unsigned long start = jiffies;
> > +	DEFINE_WAIT(wait);
> > +
> > +	prepare_to_wait(&reclaim_wqh, &wait, TASK_KILLABLE);
> > +	ret = io_schedule_timeout(timeout);
> > +	finish_wait(&reclaim_wqh, &wait);
> > +
> > +	trace_writeback_reclaim_wait(jiffies_to_usecs(timeout),
> > +				     jiffies_to_usecs(jiffies - start));
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL(reclaim_wait);
> > +
> 
> Why do we export this? Only vmscan.c is calling it and I'm scratching my
> head trying to figure out why a kernel module would want to call it.

Err a typical copy&paste error... Removed this and further moved
reclaim_wait()/reclaim_rotated() to vmscan.c.

> > +void reclaim_rotated()
> > +{
> 
> style nit
> 
> void reclaim_rotated(void)

Updated, to...

        void reclaim_rotated(struct page *page)

which will wakeup tasks from the wqh

        &NODE_DATA(page_to_nid(page))->reclaim_wait

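In code it would be something like the sketch below (assuming pg_data_t
grows a reclaim_wait wait queue head, initialised next to kswapd_wait):

	void reclaim_rotated(struct page *page)
	{
		/* wake only the reclaimers waiting on this page's node */
		wait_queue_head_t *wqh = &NODE_DATA(page_to_nid(page))->reclaim_wait;

		if (waitqueue_active(wqh))
			wake_up(wqh);
	}
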
> > +	wait_queue_head_t *wqh = &reclaim_wqh;
> > +
> > +	if (waitqueue_active(wqh))
> > +		wake_up(wqh);
> > +}
> > +
> > --- linux.orig/mm/swap.c	2012-02-14 20:11:21.000000000 +0800
> > +++ linux/mm/swap.c	2012-02-15 12:27:35.000000000 +0800
> > @@ -253,6 +253,7 @@ static void pagevec_move_tail(struct pag
> >  
> >  	pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, &pgmoved);
> >  	__count_vm_events(PGROTATED, pgmoved);
> > +	reclaim_rotated();
> >  }
> >  
> >  /*
> > --- linux.orig/mm/vmscan.c	2012-02-14 20:11:21.000000000 +0800
> > +++ linux/mm/vmscan.c	2012-02-16 17:23:17.000000000 +0800
> > @@ -767,7 +767,8 @@ static unsigned long shrink_page_list(st
> >  				      struct scan_control *sc,
> >  				      int priority,
> >  				      unsigned long *ret_nr_dirty,
> > -				      unsigned long *ret_nr_writeback)
> > +				      unsigned long *ret_nr_writeback,
> > +				      unsigned long *ret_nr_pgreclaim)
> >  {
> >  	LIST_HEAD(ret_pages);
> >  	LIST_HEAD(free_pages);
> > @@ -776,6 +777,7 @@ static unsigned long shrink_page_list(st
> >  	unsigned long nr_congested = 0;
> >  	unsigned long nr_reclaimed = 0;
> >  	unsigned long nr_writeback = 0;
> > +	unsigned long nr_pgreclaim = 0;
> >  
> >  	cond_resched();
> >  
> > @@ -813,6 +815,10 @@ static unsigned long shrink_page_list(st
> >  
> >  		if (PageWriteback(page)) {
> >  			nr_writeback++;
> > +			if (PageReclaim(page))
> > +				nr_pgreclaim++;
> > +			else
> > +				SetPageReclaim(page);
> >  			/*
> >  			 * Synchronous reclaim cannot queue pages for
> >  			 * writeback due to the possibility of stack overflow
> > @@ -874,12 +880,15 @@ static unsigned long shrink_page_list(st
> >  			nr_dirty++;
> >  
> >  			/*
> > -			 * Only kswapd can writeback filesystem pages to
> > -			 * avoid risk of stack overflow but do not writeback
> > -			 * unless under significant pressure.
> > +			 * run into the visited page again: we are scanning
> > +			 * faster than the flusher can writeout dirty pages
> >  			 */
> > -			if (page_is_file_cache(page) &&
> > -					(!current_is_kswapd() || priority >= DEF_PRIORITY - 2)) {
> > +			if (page_is_file_cache(page) && PageReclaim(page)) {
> > +				nr_pgreclaim++;
> > +				goto keep_locked;
> > +			}
> 
> This change means that kswapd is no longer doing any writeback from page
> reclaim.

Not really skipping all pageout()s. Here we skip PG_reclaim dirty
pages because there are already pageout works queued for them.

> Was that intended because it's not discussed in the changelog. I
> know writeback from kswapd is poor in terms of IO performance but it's a
> last resort for freeing a page when reclaim is in trouble. If we are to
> disable it and depend 100% on the flusher threads, it should be in its
> own patch for bisection reasons if nothing else.

I'm actually totally with your points.  The patch will still default
to pageout() if it fails to queue a pageout work for the page.

But yes, there is the possibility of pageout works being delayed by some
sync works for a dozen seconds. In this case the sync work may also
hit some PG_reclaim pages and do the clean-and-rotate job for us. But
it would certainly be much better to have some kind of guarantee...
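
To make the intended flow explicit (a simplified sketch, not the literal
diff), the dirty file page handling in shrink_page_list() amounts to:

	if (page_is_file_cache(page) && mapping &&
	    flush_inode_page(mapping, page, false) >= 0) {
		/* pageout work queued; reclaim the page as soon as it's cleaned */
		SetPageReclaim(page);
		goto keep_locked;
	}
	/*
	 * Failed to queue the work (e.g. mempool depleted under PF_MEMALLOC):
	 * fall through to the existing direct pageout() path.
	 */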

> > +			if (page_is_file_cache(page) && mapping &&
> > +			    flush_inode_page(mapping, page, false) >= 0) {
> >  				/*
> >  				 * Immediately reclaim when written back.
> >  				 * Similar in principal to deactivate_page()
> > @@ -1028,6 +1037,7 @@ keep_lumpy:
> >  	count_vm_events(PGACTIVATE, pgactivate);
> >  	*ret_nr_dirty += nr_dirty;
> >  	*ret_nr_writeback += nr_writeback;
> > +	*ret_nr_pgreclaim += nr_pgreclaim;
> >  	return nr_reclaimed;
> >  }
> >  
> > @@ -1509,6 +1519,7 @@ shrink_inactive_list(unsigned long nr_to
> >  	unsigned long nr_file;
> >  	unsigned long nr_dirty = 0;
> >  	unsigned long nr_writeback = 0;
> > +	unsigned long nr_pgreclaim = 0;
> >  	isolate_mode_t reclaim_mode = ISOLATE_INACTIVE;
> >  	struct zone *zone = mz->zone;
> >  
> > @@ -1559,13 +1570,13 @@ shrink_inactive_list(unsigned long nr_to
> >  	spin_unlock_irq(&zone->lru_lock);
> >  
> >  	nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
> > -						&nr_dirty, &nr_writeback);
> > +				&nr_dirty, &nr_writeback, &nr_pgreclaim);
> >  
> >  	/* Check if we should syncronously wait for writeback */
> >  	if (should_reclaim_stall(nr_taken, nr_reclaimed, priority, sc)) {
> >  		set_reclaim_mode(priority, sc, true);
> >  		nr_reclaimed += shrink_page_list(&page_list, mz, sc,
> > -					priority, &nr_dirty, &nr_writeback);
> > +			priority, &nr_dirty, &nr_writeback, &nr_pgreclaim);
> >  	}
> >  
> >  	spin_lock_irq(&zone->lru_lock);
> > @@ -1608,6 +1619,9 @@ shrink_inactive_list(unsigned long nr_to
> >  	 */
> >  	if (nr_writeback && nr_writeback >= (nr_taken >> (DEF_PRIORITY-priority)))
> >  		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
> > +	if (nr_pgreclaim && nr_pgreclaim >= (nr_taken >> (DEF_PRIORITY-priority)) &&
> > +	    (!global_reclaim(sc) || !zone_dirty_ok(zone)))
> > +		reclaim_wait(HZ/10);
> >  
> 
> I prefer this but it would be nice if there was a comment explaining it,
> or at least an expanded comment explaining how nr_writeback can lead to
> wait_iff_congested() being called.

Sure, done.

> >  	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
> >  		zone_idx(zone),
> > @@ -2382,8 +2396,6 @@ static unsigned long do_try_to_free_page
> >  		 */
> >  		writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
> >  		if (total_scanned > writeback_threshold) {
> > -			wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
> > -						WB_REASON_TRY_TO_FREE_PAGES);
> >  			sc->may_writepage = 1;
> >  		}
> >  
> > --- linux.orig/fs/fs-writeback.c	2012-02-14 20:11:21.000000000 +0800
> > +++ linux/fs/fs-writeback.c	2012-02-15 12:27:35.000000000 +0800
> > @@ -41,6 +41,8 @@ struct wb_writeback_work {
> >  	long nr_pages;
> >  	struct super_block *sb;
> >  	unsigned long *older_than_this;
> > +	struct inode *inode;
> > +	pgoff_t offset;
> >  	enum writeback_sync_modes sync_mode;
> >  	unsigned int tagged_writepages:1;
> >  	unsigned int for_kupdate:1;
> > @@ -65,6 +67,27 @@ struct wb_writeback_work {
> >   */
> >  int nr_pdflush_threads;
> >  
> > +static mempool_t *wb_work_mempool;
> > +
> > +static void *wb_work_alloc(gfp_t gfp_mask, void *pool_data)
> > +{
> > +	/*
> > +	 * bdi_flush_inode_range() may be called on page reclaim
> > +	 */
> > +	if (current->flags & PF_MEMALLOC)
> > +		return NULL;
> > +
> 
> This check is why I worry about kswapd being unable to write pages at
> all. If the mempool is depleted for whatever reason, reclaim has no way
> of telling the flushers what work is needed or waking them. Potentially,
> we could be waiting a long time for pending flusher work to complete to
> free up a slot.  I recognise it may not be bad in practice because the
> pool is large and other work will be completing but it's why kswapd not
> writing back pages should be in its own patch.

See above, the patch still falls back to pageout() if we fail to queue
the work. The main problem is, it's possible for the *queued* work to
be delayed for a long time by some other writeback works. It should be
trivial to make pageout works the No.1 priority for the flushers, but
then that may risk delaying sync() works for a long time...
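
For example (untested sketch), bdi_queue_work() could special case
WB_REASON_PAGEOUT so the flusher picks those works up first -- at the risk
of starving sync as noted above:

	/* pageout works jump the queue; everything else stays FIFO */
	if (work->reason == WB_REASON_PAGEOUT)
		list_add(&work->list, &bdi->work_list);
	else
		list_add_tail(&work->list, &bdi->work_list);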

> > +	return kmalloc(sizeof(struct wb_writeback_work), gfp_mask);
> > +}
> > +
> > +static __init int wb_work_init(void)
> > +{
> > +	wb_work_mempool = mempool_create(1024,
> > +					 wb_work_alloc, mempool_kfree, NULL);
> > +	return wb_work_mempool ? 0 : -ENOMEM;
> > +}
> > +fs_initcall(wb_work_init);
> > +
> >  /**
> >   * writeback_in_progress - determine whether there is writeback in progress
> >   * @bdi: the device's backing_dev_info structure.
> > @@ -129,7 +152,7 @@ __bdi_start_writeback(struct backing_dev
> >  	 * This is WB_SYNC_NONE writeback, so if allocation fails just
> >  	 * wakeup the thread for old dirty data writeback
> >  	 */
> > -	work = kzalloc(sizeof(*work), GFP_ATOMIC);
> > +	work = mempool_alloc(wb_work_mempool, GFP_NOWAIT);
> >  	if (!work) {
> >  		if (bdi->wb.task) {
> >  			trace_writeback_nowork(bdi);
> > @@ -138,6 +161,7 @@ __bdi_start_writeback(struct backing_dev
> >  		return;
> >  	}
> >  
> > +	memset(work, 0, sizeof(*work));
> >  	work->sync_mode	= WB_SYNC_NONE;
> >  	work->nr_pages	= nr_pages;
> >  	work->range_cyclic = range_cyclic;
> > @@ -186,6 +210,125 @@ void bdi_start_background_writeback(stru
> >  	spin_unlock_bh(&bdi->wb_lock);
> >  }
> >  
> > +static bool extend_writeback_range(struct wb_writeback_work *work,
> > +				   pgoff_t offset,
> > +				   unsigned long write_around_pages)
> > +{
> 
> comment on what this function is for and what the return values mean.
> 
> "returns true if the wb_writeback_work now encompasses the request"
> 
> or something

Here it goes:

/*
 * Check if @work already covers @offset, or try to extend it to cover @offset.
 * Returns true if the wb_writeback_work now encompasses the requested offset.
 */
 
> > +	pgoff_t end = work->offset + work->nr_pages;
> > +
> > +	if (offset >= work->offset && offset < end)
> > +		return true;
> > +
> 
> This does not ensure that the full span of
> offset -> offset+write_around_pages is encompassed by work. All it
> checks is that the start of the requested range is going to be handled.

Oh this function does not work as you expected.

I'd better change the "write_around_pages" parameter name to "unit".

Every pageout work starts with [begin, begin+unit] and each time it's
extended, the added length will be N*unit.

extend_writeback_range() only aims to extend the work to encompass the
single page at @offset. "unit" merely serves as the extending unit.
 
> I guess it's ok because the page that reclaim cares about is covered and
> it avoids a situation where too much IO is being queued. It's unclear if
> this is what you intended though because you check for too much IO being
> queued in the next block.

Yeah both the number of pending pageout works and the chunk size of
each work will be limited.

> > +	/*
> > +	 * for sequential workloads with good locality, include up to 8 times
> > +	 * more data in one chunk
> > +	 */
> > +	if (work->nr_pages >= 8 * write_around_pages)
> > +		return false;
> > +
> > +	/* the unsigned comparison helps eliminate one compare */
> > +	if (work->offset - offset < write_around_pages) {
> > +		work->nr_pages += write_around_pages;
> > +		work->offset -= write_around_pages;
> > +		return true;
> > +	}
> > +
> > +	if (offset - end < write_around_pages) {
> > +		work->nr_pages += write_around_pages;
> > +		return true;
> > +	}
> > +
> > +	return false;
> > +}
> > +
> > +/*
> > + * schedule writeback on a range of inode pages.
> > + */
> > +static struct wb_writeback_work *
> > +bdi_flush_inode_range(struct backing_dev_info *bdi,
> > +		      struct inode *inode,
> > +		      pgoff_t offset,
> > +		      pgoff_t len,
> > +		      bool wait)
> > +{
> > +	struct wb_writeback_work *work;
> > +
> > +	if (!igrab(inode))
> > +		return ERR_PTR(-ENOENT);
> > +
> 
> Explain why the igrab is necessary. I think it's because we are calling
> this from page reclaim context and the only thing pinning the
> address_space is the page lock . If I'm right, it should be made clear
> in the comment for bdi_flush_inode_range that this should only be called
> from page reclaim context. Maybe even VM_BUG_ON if
> !(current->flags & PF_MEMALLOC)?

OK, I added this comment.

        /*
         * Grab the inode until work execution time. We are calling this from
         * page reclaim context and the only thing pinning the address_space
         * for the moment is the page lock.
         */

Maybe not VM_BUG_ON, since it won't hurt that much if called by
someone else ;)

> > +	work = mempool_alloc(wb_work_mempool, wait ? GFP_NOIO : GFP_NOWAIT);
> > +	if (!work) {
> > +		trace_printk("wb_work_mempool alloc fail\n");
> > +		return ERR_PTR(-ENOMEM);
> > +	}
> > +
> > +	memset(work, 0, sizeof(*work));
> > +	work->sync_mode		= WB_SYNC_NONE;
> > +	work->inode		= inode;
> > +	work->offset		= offset;
> > +	work->nr_pages		= len;
> > +	work->reason		= WB_REASON_PAGEOUT;
> > +
> > +	bdi_queue_work(bdi, work);
> > +
> > +	return work;
> > +}
> > +
> > +/*
> > + * Called by page reclaim code to flush the dirty page ASAP. Do write-around to
> > + * improve IO throughput. The nearby pages will have good chance to reside in
> > + * the same LRU list that vmscan is working on, and even close to each other
> > + * inside the LRU list in the common case of sequential read/write.
> > + *
> > + * ret > 0: success, found/reused a previous writeback work
> > + * ret = 0: success, allocated/queued a new writeback work
> > + * ret < 0: failed
> > + */
> > +long flush_inode_page(struct address_space *mapping,
> > +		      struct page *page,
> > +		      bool wait)
> > +{
> > +	struct backing_dev_info *bdi = mapping->backing_dev_info;
> > +	struct inode *inode = mapping->host;
> > +	struct wb_writeback_work *work;
> > +	unsigned long write_around_pages;
> > +	pgoff_t offset = page->index;
> > +	int i;
> > +	long ret = 0;
> > +
> > +	if (unlikely(!inode))
> > +		return -ENOENT;
> > +
> > +	/*
> > +	 * piggy back 8-15ms worth of data
> > +	 */
> > +	write_around_pages = bdi->avg_write_bandwidth + MIN_WRITEBACK_PAGES;
> > +	write_around_pages = rounddown_pow_of_two(write_around_pages) >> 6;
> > +
> > +	i = 1;
> > +	spin_lock_bh(&bdi->wb_lock);
> > +	list_for_each_entry_reverse(work, &bdi->work_list, list) {
> > +		if (work->inode != inode)
> > +			continue;
> > +		if (extend_writeback_range(work, offset, write_around_pages)) {
> > +			ret = i;
> > +			break;
> > +		}
> > +		if (i++ > 100)	/* limit search depth */
> > +			break;
> 
> No harm in moving Jan's comment on the depth-limited search to here, adding
> why 100 is as good a number as any to use.

Good idea. Here is the refined work pool throttling scheme:

/*
 * Tailored for vmscan which may submit lots of pageout works. The page reclaim
 * will try to slow down the pageout work submission rate when the queue size
 * grows to LOTS_OF_WRITEBACK_WORKS. flush_inode_page() will accordingly limit
 * its search depth to (2 * LOTS_OF_WRITEBACK_WORKS).
 *
 * Note that the limited search and work pool is not a big problem: 1024 IOs
 * in flight are typically more than enough to saturate the disk. And the
 * overheads of searching in the work list didn't even show up in the perf report.
 */
#define WB_WORK_MEMPOOL_SIZE            1024
#define LOTS_OF_WRITEBACK_WORKS         (WB_WORK_MEMPOOL_SIZE / 8)
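
With that, the search loop in flush_inode_page() becomes roughly (sketch):

	i = 1;
	spin_lock_bh(&bdi->wb_lock);
	list_for_each_entry_reverse(work, &bdi->work_list, list) {
		if (work->inode != inode)
			continue;
		if (extend_writeback_range(work, offset, write_around_pages)) {
			ret = i;
			break;
		}
		/* limit the search depth as described above */
		if (i++ > 2 * LOTS_OF_WRITEBACK_WORKS)
			break;
	}
	spin_unlock_bh(&bdi->wb_lock);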

Thanks,
Fengguang


^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2012-02-27 14:29 UTC | newest]

Thread overview: 33+ messages
2012-02-08  7:55 memcg writeback (was Re: [Lsf-pc] [LSF/MM TOPIC] memcg topics.) Greg Thelen
2012-02-08  9:31 ` Wu Fengguang
2012-02-08 20:54   ` Ying Han
2012-02-09 13:50     ` Wu Fengguang
2012-02-13 18:40       ` Ying Han
2012-02-10  5:51   ` Greg Thelen
2012-02-10  5:52     ` Greg Thelen
2012-02-10  9:20       ` Wu Fengguang
2012-02-10 11:47     ` Wu Fengguang
2012-02-11 12:44       ` reclaim the LRU lists full of dirty/writeback pages Wu Fengguang
2012-02-11 14:55         ` Rik van Riel
2012-02-12  3:10           ` Wu Fengguang
2012-02-12  6:45             ` Wu Fengguang
2012-02-13 15:43             ` Jan Kara
2012-02-14 10:03               ` Wu Fengguang
2012-02-14 13:29                 ` Jan Kara
2012-02-16  4:00                   ` Wu Fengguang
2012-02-16 12:44                     ` Jan Kara
2012-02-16 13:32                       ` Wu Fengguang
2012-02-16 14:06                         ` Wu Fengguang
2012-02-17 16:41                     ` Wu Fengguang
2012-02-20 14:00                       ` Jan Kara
2012-02-14 10:19         ` Mel Gorman
2012-02-14 13:18           ` Wu Fengguang
2012-02-14 13:35             ` Wu Fengguang
2012-02-14 15:51             ` Mel Gorman
2012-02-16  9:50               ` Wu Fengguang
2012-02-16 17:31                 ` Mel Gorman
2012-02-27 14:24                   ` Fengguang Wu
2012-02-16  0:00             ` KAMEZAWA Hiroyuki
2012-02-16  3:04               ` Wu Fengguang
2012-02-16  3:52                 ` KAMEZAWA Hiroyuki
2012-02-16  4:05                   ` Wu Fengguang
