From: Mel Gorman <mel@csn.ul.ie>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Chinner <david@fromorbit.com>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Chris Mason <chris.mason@oracle.com>,
	Nick Piggin <npiggin@suse.de>, Rik van Riel <riel@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Christoph Hellwig <hch@infradead.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Subject: Re: [PATCH 11/12] vmscan: Write out dirty pages in batch
Date: Tue, 15 Jun 2010 12:43:42 +0100
Message-ID: <20100615114342.GD26788@csn.ul.ie>
In-Reply-To: <20100614211515.dd9880dc.akpm@linux-foundation.org>

On Mon, Jun 14, 2010 at 09:15:15PM -0700, Andrew Morton wrote:
> On Tue, 15 Jun 2010 13:20:34 +1000 Dave Chinner <david@fromorbit.com> wrote:
> 
> > On Mon, Jun 14, 2010 at 06:39:57PM -0700, Andrew Morton wrote:
> > > On Tue, 15 Jun 2010 10:39:43 +1000 Dave Chinner <david@fromorbit.com> wrote:
> > > 
> > > > 
> > > > IOWs, IMO anywhere there is a context with a significant queue of IO,
> > > > that's where we should be doing a better job of sorting before that
> > > > IO is dispatched to the lower layers. This is still no guarantee of
> > > > better IO (e.g. if the filesystem fragments the file) but it does
> > > > give the lower layers a far better chance at optimal allocation and
> > > > scheduling of IO...
> > > 
> > > None of what you said had much to do with what I said.
> > > 
> > > What you've described are implementation problems in the current block
> > > layer because it conflates "sorting" with "queueing".  I'm saying "fix
> > > that".
> > 
> > You can't sort until you've queued.
> 
> Yes you can.  That's exactly what you're recommending!  Only you're
> recommending doing it at the wrong level.  The fs-writeback radix-tree
> walks do it at the wrong level too.  Sorting should be done within, or
> in a layer above the block queues, not within the large number of
> individual callers.
> 
> > > And...  sorting at the block layer will always be superior to sorting
> > > at the pagecache layer because the block layer sorts at the physical
> > > block level and can handle not-well-laid-out files and can sort and merge
> > > pages from different address_spaces.
> > 
> > Yes, it can do that. And it still does that even if the higher
> > layers sort their I/O dispatch better.
> > 
> > Filesystems try very hard to allocate adjacent logical offsets in a
> > file in adjacent physical blocks on disk - that's the whole point of
> > extent-indexed filesystems. Hence with modern filesystems there is
> > generally a direct correlation between the page {mapping,index}
> > tuple and the physical location of the mapped block.
> > 
> > i.e. there is generally zero physical correlation between pages in
> > different mappings, but there is a high physical correlation
> > between the index of pages on the same mapping.
> 
> Nope.  Large-number-of-small-files is a pretty common case.  If the fs
> doesn't handle that well (ie: by placing them nearby on disk), it's
> borked.
> 
> > Hence by sorting
> > where we have a {mapping,index} context, we push out IO that is
> > much more likely to be in contiguous physical chunks than the
> > current random page shootdown produces.
> > 
> > We optimise applications to use these sorts of correlations all the
> > time to improve IO patterns. Why can't we make the same sort of
> > optimisations to the IO that the VM issues?
> 
> We can, but it shouldn't be specific to page reclaim.  Other places
> submit IO too, and want the same treatment.
> 
> > > Still, I suspect none of it will improve anything anyway.  Those pages
> > > are still dirty, possibly-locked and need to go to disk.  It doesn't
> > > matter from the MM POV whether they sit in some VM list or in the
> > > request queue.
> > 
> > Oh, but it does.....
> 
> The only difference is that pages which are in the queue (current
> implementation thereof) can't be shot down by truncate.
> 
> > > Possibly there may be some benefit to not putting so many of these
> > > unreclaimable pages into the queue all at the same time.  But
> > > that's a shortcoming in the block code: we should be able to shove
> > > arbitrary numbers of dirty page (segments) into the queue and not gum
> > > the system up.  Don't try to work around that in the VM.
> > 
> > I think you know perfectly well why the system gums up when we
> > increase block layer queue depth: it's the fact that the _VM_ relies
> > on block layer queue congestion to limit the amount of dirty memory
> > in the system.
> 
> mm, a little bit still, I guess.  Mainly because dirty memory
> management isn't zone aware, so even though we limit dirty memory
> globally, a particular zone(set) can get excessively dirtied.
> 
> Most of this problem happens on the balance_dirty_pages() path, where we
> already sort the pages in ascending logical order.
> 
> > We've got a feedback loop between the block layer and the VM that
> > only works if block device queues are kept shallow. Keeping the
> > number of dirty pages under control is a VM responsibility, but it
> > is putting limitations on the block layer to ensure that the VM
> > works correctly.  If you want the block layer to have deep queues,
> > then someone needs to fix the VM not to require knowledge of the
> > internal operation of the block layer for correct operation.
> > 
> > Adding a few lines of code to sort a list in the VM is far, far
> > easier than redesigning the write throttling code....
> 
> It's a hack and a workaround.  And I suspect it won't make any
> difference, especially given Mel's measurements of the number of dirty
> pages he's seeing coming off the LRU.  Although those numbers may well
> be due to the new quite-low dirty memory thresholds.  
> 

I tested with a dirty ratio of 40 but didn't see a major problem. In the
tests I ran, direct reclaim of dirty pages was still a relatively rare
event except when lumpy reclaim was involved. What did change was the
amount of scanning both direct reclaim and kswapd had to do (both
increased quite a bit), but the percentage of dirty pages encountered was
roughly the same (1-2% of scanned pages were dirty in the case of
sysbench).

This is sysbench only rather than flooding with more data.

DIRTY RATIO == 20
FTrace Reclaim Statistics
                traceonly-v2r5  stackreduce-v2r5     nodirect-v2r5
Direct reclaims                               9843      13398      51651 
Direct reclaim pages scanned                871367    1008709    3080593 
Direct reclaim write async I/O               24883      30699          0 
Direct reclaim write sync I/O                    0          0          0 
Wake kswapd requests                       7070819    6961672   11268341 
Kswapd wakeups                                1578       1500        943 
Kswapd pages scanned                      22016558   21779455   17393431 
Kswapd reclaim write async I/O             1161346    1101641    1717759 
Kswapd reclaim write sync I/O                    0          0          0 
Time stalled direct reclaim (ms)             26.11      45.04       2.97 
Time kswapd awake (ms)                     5105.06    5135.93    6086.32 

User/Sys Time Running Test (seconds)        734.52    712.39     703.9
Percentage Time Spent Direct Reclaim         0.00%     0.00%     0.00%
Total Elapsed Time (seconds)               9710.02   9589.20   9334.45
Percentage Time kswapd Awake                 0.06%     0.00%     0.00%

DIRTY RATIO == 40
FTrace Reclaim Statistics
                traceonly-v2r5  stackreduce-v2r5     nodirect-v2r5
Direct reclaims                              29945      41887     163006 
Direct reclaim pages scanned               2853804    3075288   13142072 
Direct reclaim write async I/O               51498      63662          0 
Direct reclaim write sync I/O                    0          0          0 
Wake kswapd requests                      11899105   12466894   15645364 
Kswapd wakeups                                 945        891        522 
Kswapd pages scanned                      20401921   20674788   11319791 
Kswapd reclaim write async I/O             1381897    1332436    1711266 
Kswapd reclaim write sync I/O                    0          0          0 
Time stalled direct reclaim (ms)            131.78     165.08       5.47 
Time kswapd awake (ms)                     6321.11    6413.79    6687.67 

User/Sys Time Running Test (seconds)        709.91    718.39    664.28
Percentage Time Spent Direct Reclaim         0.00%     0.00%     0.00%
Total Elapsed Time (seconds)               9579.90   9700.42   9101.05
Percentage Time kswapd Awake                 0.07%     0.00%     0.00%

What was really interesting is that even though raising the dirty ratio
allowed the test to complete faster, the time stalled in direct reclaim
increased quite a lot. Again, just stopping writeback in direct
reclaim seemed to help.
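(For scale: 131.78ms stalled out of a 9579.90 second run is roughly
0.0014%, which is why the percentage rows above still round to 0.00% even
though the absolute stall time grew about five-fold.)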

> It would be interesting to code up a little test patch though, see if
> there's benefit to be had going down this path.
> 

I'll do this just to see what it looks like. To be frank, I lack taste
when it comes to how the block layer and filesystems should behave, so I'm
having trouble deciding whether sorting the pages prior to submission is a
good thing or whether it would just encourage bad or lax behaviour in the
IO submission queueing.
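
For illustration, the sort I have in mind is roughly the sketch below:
gather the dirty pages that shrink_page_list() decides to write onto a
private list and run the kernel's list_sort() over it with a
(mapping, index) comparator before the pages are handed to pageout().
The helper names are made up and none of this is tested or tuned; it is
only meant to show the shape of the "few lines of code" being discussed,
assuming the in-tree list_sort() signature (priv, head, cmp):

#include <linux/mm.h>
#include <linux/list_sort.h>

/*
 * Illustrative only: order a private list of dirty pages (linked via
 * page->lru, as in shrink_page_list()) by (mapping, index) so that pages
 * belonging to the same file are submitted in ascending offset order.
 */
static int dirty_page_cmp(void *priv, struct list_head *a,
			  struct list_head *b)
{
	struct page *pa = list_entry(a, struct page, lru);
	struct page *pb = list_entry(b, struct page, lru);

	/* Group pages by address_space first... */
	if (pa->mapping != pb->mapping)
		return pa->mapping < pb->mapping ? -1 : 1;

	/* ...then order by file offset within the mapping. */
	if (pa->index < pb->index)
		return -1;
	return pa->index > pb->index ? 1 : 0;
}

/* Hypothetical helper, called before the dirty pages go to pageout(). */
static void sort_dirty_pages(struct list_head *dirty_pages)
{
	list_sort(NULL, dirty_pages, dirty_page_cmp);
}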

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
