linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting
@ 2011-06-07 15:07 Mel Gorman
  2011-06-07 15:07 ` [PATCH 1/4] mm: compaction: Ensure that the compaction free scanner does not move to the next zone Mel Gorman
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Mel Gorman @ 2011-06-07 15:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrea Arcangeli, Minchan Kim, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

There were some reports about processes getting stalled for very long
periods of time in compaction. The bulk of this problem turned out
to be due to an accounting error whereby the isolated count could go
negative, but this was only noticed on UP builds.

This series contains the useful patches (not all mine) that came out
of the related discussions and that have not already been merged to -mm.
All these patches should be considered for -stable 2.6.38 and
2.6.39. Hence, Andrea's introduction of __page_count() is missing from
this series because, while it is worth merging, it is not for -stable.

Patch 1 is the primary fix for a problem where the isolated count
	could go negative on one zone and remain elevated on another.

Patch 2 notes that the linear scanner in vmscan.c cannot safely
	use page_count because it could be scanning a tail page.

Patch 3 fixes the accounting of pages isolated during memory failure.

Patch 4 fixes a problem whereby asynchronous callers of compaction
	can still stall in too_many_isolated when they should just fail
	the allocation.

Re-verification from testers that these patches really do fix their
problems would be appreciated. Even if hangs disappear, please confirm
that the values for nr_isolated_anon and nr_isolated_file in *both*
/proc/zoneinfo and /proc/vmstat are sensible (i.e. usually zero).
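
For reference, this is roughly why a broken counter is invisible on SMP
but devastating on UP. The sketch below is paraphrased from memory of the
vmstat reader of that era rather than copied from a particular tree:

/*
 * On SMP, a small negative value is treated as expected per-cpu drift
 * and clamped to 0. On UP there is no clamp, so a counter that has gone
 * to -1 is reported as 4294967295 on 32-bit and too_many_isolated()
 * then believes a huge number of pages are isolated.
 */
static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	if (x < 0)
		x = 0;
#endif
	return x;
}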

 mm/compaction.c     |   41 +++++++++++++++++++++++++++++++++++------
 mm/memory-failure.c |    4 +++-
 mm/vmscan.c         |   16 ++++++++++++++--
 3 files changed, 52 insertions(+), 9 deletions(-)

-- 
1.7.3.4


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 1/4] mm: compaction: Ensure that the compaction free scanner does not move to the next zone
  2011-06-07 15:07 [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting Mel Gorman
@ 2011-06-07 15:07 ` Mel Gorman
  2011-06-08  9:33   ` Michal Hocko
  2011-06-07 15:07 ` [PATCH 2/4] mm: vmscan: Do not use page_count without a page pin Mel Gorman
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Mel Gorman @ 2011-06-07 15:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrea Arcangeli, Minchan Kim, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

Compaction works with two scanners, a migration scanner and a free
scanner. When the scanners cross over, migration within the zone is
complete. The location of each scanner is recorded on each cycle to
avoid excessive scanning.

When a zone is small and mostly reserved, it's very easy for the
migration scanner to be close to the end of the zone. Then the following
situation can occur:

  o migration scanner isolates some pages near the end of the zone
  o free scanner starts at the end of the zone but finds that the
    migration scanner is already there
  o free scanner gets reinitialised for the next cycle as
    cc->migrate_pfn + pageblock_nr_pages
    moving the free scanner into the next zone
  o migration scanner moves into the next zone

When this happens, NR_ISOLATED accounting goes haywire because some
of the accounting happens against the wrong zone. One zone's counter
remains positive while the other goes negative, even though the overall
global count is accurate. This was reported on X86-32 with !SMP because
!SMP allows the negative counters to be visible. The fact that it is
difficult to reproduce on X86-64 is probably just a coincidence as
the bug should theoretically be possible there.
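
The invariant the patch restores is sketched below. This helper does not
exist in the kernel; it is only an illustration of the rule that both
scanners must stay inside the zone being compacted:

/*
 * Illustrative only: the free scanner walks down from the end of the
 * zone and the migration scanner walks up from the start. Neither pfn
 * may leave the zone, otherwise NR_ISOLATED updates get charged to a
 * neighbouring zone.
 */
static bool scanners_have_met(struct zone *zone, struct compact_control *cc)
{
	unsigned long zone_end = zone->zone_start_pfn + zone->spanned_pages;

	/* Neither scanner may leave the zone being compacted */
	VM_BUG_ON(cc->migrate_pfn < zone->zone_start_pfn);
	VM_BUG_ON(cc->free_pfn > zone_end);

	/* Migration within this zone is complete once they cross */
	return cc->free_pfn <= cc->migrate_pfn;
}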

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
---
 mm/compaction.c |   13 ++++++++++++-
 1 files changed, 12 insertions(+), 1 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 021a296..5c744ab 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -144,9 +144,20 @@ static void isolate_freepages(struct zone *zone,
 	int nr_freepages = cc->nr_freepages;
 	struct list_head *freelist = &cc->freepages;
 
+	/*
+	 * Initialise the free scanner. The starting point is where we last
+	 * scanned from (or the end of the zone if starting). The low point
+	 * is the end of the pageblock the migration scanner is using.
+	 */
 	pfn = cc->free_pfn;
 	low_pfn = cc->migrate_pfn + pageblock_nr_pages;
-	high_pfn = low_pfn;
+
+	/*
+	 * Take care that if the migration scanner is at the end of the zone
+	 * that the free scanner does not accidentally move to the next zone
+	 * in the next isolation cycle.
+	 */
+	high_pfn = min(low_pfn, pfn);
 
 	/*
 	 * Isolate free pages until enough are available to migrate the
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 2/4] mm: vmscan: Do not use page_count without a page pin
  2011-06-07 15:07 [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting Mel Gorman
  2011-06-07 15:07 ` [PATCH 1/4] mm: compaction: Ensure that the compaction free scanner does not move to the next zone Mel Gorman
@ 2011-06-07 15:07 ` Mel Gorman
  2011-06-07 15:12   ` Minchan Kim
  2011-06-08  9:38   ` Michal Hocko
  2011-06-07 15:07 ` [PATCH 3/4] mm: memory-failure: Fix isolated page count during memory failure Mel Gorman
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 19+ messages in thread
From: Mel Gorman @ 2011-06-07 15:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrea Arcangeli, Minchan Kim, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

From: Andrea Arcangeli <aarcange@redhat.com>

It is unsafe to run page_count during the physical pfn scan because
compound_head could trip on a dangling pointer when reading
page->first_page if the compound page is being freed by another CPU.
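
For reference, page_count() at the time looked roughly like the sketch
below (paraphrased, not copied from a particular tree), which is why it
cannot be used on an unpinned page found by a pfn walk:

/*
 * page_count() goes through compound_head(). If the page is observed as
 * a tail page while the compound page is being freed or split by another
 * CPU, page->first_page can already be a dangling pointer, so
 * dereferencing it without holding a reference on the page is unsafe.
 */
static inline struct page *compound_head(struct page *page)
{
	if (unlikely(PageTail(page)))
		return page->first_page;
	return page;
}

static inline int page_count(struct page *page)
{
	return atomic_read(&compound_head(page)->_count);
}

The fix below therefore checks PageTail() and reads ->_count directly so
that first_page is never dereferenced for an unpinned tail page.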

[mgorman@suse.de: Split out patch]
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/vmscan.c |   16 ++++++++++++++--
 1 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index faa0a08..001a504 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1124,8 +1124,20 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 					nr_lumpy_dirty++;
 				scan++;
 			} else {
-				/* the page is freed already. */
-				if (!page_count(cursor_page))
+				/*
+				 * Check if the page is freed already.
+				 *
+				 * We can't use page_count() as that
+				 * requires compound_head and we don't
+				 * have a pin on the page here. If a
+				 * page is tail, we may or may not
+				 * have isolated the head, so assume
+				 * it's not free, it'd be tricky to
+				 * track the head status without a
+				 * page pin.
+				 */
+				if (!PageTail(cursor_page) &&
+				    !atomic_read(&cursor_page->_count))
 					continue;
 				break;
 			}
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 3/4] mm: memory-failure: Fix isolated page count during memory failure
  2011-06-07 15:07 [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting Mel Gorman
  2011-06-07 15:07 ` [PATCH 1/4] mm: compaction: Ensure that the compaction free scanner does not move to the next zone Mel Gorman
  2011-06-07 15:07 ` [PATCH 2/4] mm: vmscan: Do not use page_count without a page pin Mel Gorman
@ 2011-06-07 15:07 ` Mel Gorman
  2011-06-07 15:14   ` Minchan Kim
  2011-06-08 10:07   ` Michal Hocko
  2011-06-07 15:07 ` [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous Mel Gorman
  2011-07-17  8:52 ` [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting Thomas Sattler
  4 siblings, 2 replies; 19+ messages in thread
From: Mel Gorman @ 2011-06-07 15:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrea Arcangeli, Minchan Kim, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

From: Minchan Kim <minchan.kim@gmail.com>

Pages isolated for migration are accounted with the vmstat counters
NR_ISOLATE_[ANON|FILE]. Callers of migrate_pages() are expected to
increment these counters when pages are isolated from the LRU. Once
the pages have been migrated, they are put back on the LRU or freed
and the isolated count is decremented.

Memory failure is not properly accounting for the pages it isolates,
causing the NR_ISOLATED counters to go negative. On SMP builds,
this goes unnoticed as negative counters are treated as 0 due to
expected per-cpu drift. On UP builds, the counter is treated by
too_many_isolated() as a large value causing processes to enter D
state during page reclaim or compaction. This patch accounts for
pages isolated by memory failure correctly.
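
A minimal sketch of the accounting contract being described, using a
hypothetical caller rather than code from any tree (the helper name and
error handling here are made up for illustration):

/*
 * Hypothetical migrate_pages() caller showing the expected accounting:
 * whoever isolates a page from the LRU bumps NR_ISOLATED_*, and the
 * migration code decrements it again when the page is put back on the
 * LRU or freed.
 */
static int migrate_one_page(struct page *page, new_page_t get_new_page)
{
	LIST_HEAD(pagelist);
	int ret;

	if (isolate_lru_page(page))	/* takes the page off the LRU */
		return -EBUSY;

	/* The caller now owns an isolated page: account for it */
	inc_zone_page_state(page, NR_ISOLATED_ANON +
				    page_is_file_cache(page));

	list_add(&page->lru, &pagelist);
	ret = migrate_pages(&pagelist, get_new_page, 0, false, true);
	if (ret) {
		/* Failed pages are put back, dropping NR_ISOLATED_* again */
		putback_lru_pages(&pagelist);
	}
	return ret;
}

soft_offline_page() was skipping the increment, which is what this patch
corrects.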

[mgorman@suse.de: Updated changelog]
Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
---
 mm/memory-failure.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 5c8f7e0..eac0ba5 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -52,6 +52,7 @@
 #include <linux/swapops.h>
 #include <linux/hugetlb.h>
 #include <linux/memory_hotplug.h>
+#include <linux/mm_inline.h>
 #include "internal.h"
 
 int sysctl_memory_failure_early_kill __read_mostly = 0;
@@ -1468,7 +1469,8 @@ int soft_offline_page(struct page *page, int flags)
 	put_page(page);
 	if (!ret) {
 		LIST_HEAD(pagelist);
-
+		inc_zone_page_state(page, NR_ISOLATED_ANON +
+					    page_is_file_cache(page));
 		list_add(&page->lru, &pagelist);
 		ret = migrate_pages(&pagelist, new_page, MPOL_MF_MOVE_ALL,
 								0, true);
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous
  2011-06-07 15:07 [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting Mel Gorman
                   ` (2 preceding siblings ...)
  2011-06-07 15:07 ` [PATCH 3/4] mm: memory-failure: Fix isolated page count during memory failure Mel Gorman
@ 2011-06-07 15:07 ` Mel Gorman
  2011-06-07 15:50   ` Minchan Kim
  2011-07-17  8:52 ` [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting Thomas Sattler
  4 siblings, 1 reply; 19+ messages in thread
From: Mel Gorman @ 2011-06-07 15:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andrea Arcangeli, Minchan Kim, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

Asynchronous compaction is used when promoting to huge pages. This is
all very nice but if a number of processes are compacting memory, a
large number of pages can be isolated. An "asynchronous" process can
stall for long periods of time as a result, with one user reporting
that firefox can stall for tens of seconds. This patch aborts
asynchronous compaction if too many pages are isolated as it's better
to fail a hugepage promotion than stall a process.
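
For context, "too many pages are isolated" is decided by
too_many_isolated(), which at the time looked roughly like this
(paraphrased from memory, not copied from a particular tree):

/*
 * Compaction refuses to isolate more pages once more than half of the
 * zone's LRU pages are already isolated. With a corrupted NR_ISOLATED
 * counter this condition can appear permanently true, which is how the
 * old code ended up sleeping in congestion_wait() indefinitely.
 */
static bool too_many_isolated(struct zone *zone)
{
	unsigned long active, inactive, isolated;

	inactive = zone_page_state(zone, NR_INACTIVE_FILE) +
					zone_page_state(zone, NR_INACTIVE_ANON);
	active = zone_page_state(zone, NR_ACTIVE_FILE) +
					zone_page_state(zone, NR_ACTIVE_ANON);
	isolated = zone_page_state(zone, NR_ISOLATED_FILE) +
					zone_page_state(zone, NR_ISOLATED_ANON);

	return isolated > (inactive + active) / 2;
}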

Reported-and-tested-by: Ury Stankevich <urykhy@gmail.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/compaction.c |   28 +++++++++++++++++++++++-----
 1 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 5c744ab..cb28580 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -251,11 +251,18 @@ static bool too_many_isolated(struct zone *zone)
 	return isolated > (inactive + active) / 2;
 }
 
+/* possible outcome of isolate_migratepages */
+typedef enum {
+	ISOLATE_ABORT,		/* Abort compaction now */
+	ISOLATE_NONE,		/* No pages isolated, continue scanning */
+	ISOLATE_SUCCESS,	/* Pages isolated, migrate */
+} isolate_migrate_t;
+
 /*
  * Isolate all pages that can be migrated from the block pointed to by
  * the migrate scanner within compact_control.
  */
-static unsigned long isolate_migratepages(struct zone *zone,
+static isolate_migrate_t isolate_migratepages(struct zone *zone,
 					struct compact_control *cc)
 {
 	unsigned long low_pfn, end_pfn;
@@ -272,7 +279,7 @@ static unsigned long isolate_migratepages(struct zone *zone,
 	/* Do not cross the free scanner or scan within a memory hole */
 	if (end_pfn > cc->free_pfn || !pfn_valid(low_pfn)) {
 		cc->migrate_pfn = end_pfn;
-		return 0;
+		return ISOLATE_NONE;
 	}
 
 	/*
@@ -281,10 +288,14 @@ static unsigned long isolate_migratepages(struct zone *zone,
 	 * delay for some time until fewer pages are isolated
 	 */
 	while (unlikely(too_many_isolated(zone))) {
+		/* async migration should just abort */
+		if (!cc->sync)
+			return ISOLATE_ABORT;
+
 		congestion_wait(BLK_RW_ASYNC, HZ/10);
 
 		if (fatal_signal_pending(current))
-			return 0;
+			return ISOLATE_ABORT;
 	}
 
 	/* Time to isolate some pages for migration */
@@ -369,7 +380,7 @@ static unsigned long isolate_migratepages(struct zone *zone,
 
 	trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
 
-	return cc->nr_migratepages;
+	return ISOLATE_SUCCESS;
 }
 
 /*
@@ -533,8 +544,14 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 		unsigned long nr_migrate, nr_remaining;
 		int err;
 
-		if (!isolate_migratepages(zone, cc))
+		switch (isolate_migratepages(zone, cc)) {
+		case ISOLATE_ABORT:
+			goto out;
+		case ISOLATE_NONE:
 			continue;
+		case ISOLATE_SUCCESS:
+			;
+		}
 
 		nr_migrate = cc->nr_migratepages;
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
@@ -558,6 +575,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 
 	}
 
+out:
 	/* Release free pages and check accounting */
 	cc->nr_freepages -= release_freepages(&cc->freepages);
 	VM_BUG_ON(cc->nr_freepages != 0);
-- 
1.7.3.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/4] mm: vmscan: Do not use page_count without a page pin
  2011-06-07 15:07 ` [PATCH 2/4] mm: vmscan: Do not use page_count without a page pin Mel Gorman
@ 2011-06-07 15:12   ` Minchan Kim
  2011-06-08  9:38   ` Michal Hocko
  1 sibling, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2011-06-07 15:12 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

On Tue, Jun 07, 2011 at 04:07:03PM +0100, Mel Gorman wrote:
> From: Andrea Arcangeli <aarcange@redhat.com>
> 
> From: Andrea Arcangeli <aarcange@redhat.com>
> 
> It is unsafe to run page_count during the physical pfn scan because
> compound_head could trip on a dangling pointer when reading
> page->first_page if the compound page is being freed by another CPU.
> 
> [mgorman@suse.de: Split out patch]
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>

-- 
Kind regards
Minchan Kim

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 3/4] mm: memory-failure: Fix isolated page count during memory failure
  2011-06-07 15:07 ` [PATCH 3/4] mm: memory-failure: Fix isolated page count during memory failure Mel Gorman
@ 2011-06-07 15:14   ` Minchan Kim
  2011-06-08 10:07   ` Michal Hocko
  1 sibling, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2011-06-07 15:14 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

On Tue, Jun 07, 2011 at 04:07:04PM +0100, Mel Gorman wrote:
> From: Minchan Kim <minchan.kim@gmail.com>
> 
> From: Minchan Kim <minchan.kim@gmail.com>
> 
> Pages isolated for migration are accounted with the vmstat counters
> NR_ISOLATE_[ANON|FILE]. Callers of migrate_pages() are expected to
> increment these counters when pages are isolated from the LRU. Once
> the pages have been migrated, they are put back on the LRU or freed
> and the isolated count is decremented.
> 
> Memory failure is not properly accounting for pages it isolates
> causing the NR_ISOLATED counters to be negative. On SMP builds,
> this goes unnoticed as negative counters are treated as 0 due to
> expected per-cpu drift. On UP builds, the counter is treated by
> too_many_isolated() as a large value causing processes to enter D
> state during page reclaim or compaction. This patch accounts for
> pages isolated by memory failure correctly.
> 
> [mgorman@suse.de: Updated changelog]
> Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>

I was about to resend this patch with your updated description.
Thanks! Mel.

-- 
Kind regards
Minchan Kim

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous
  2011-06-07 15:07 ` [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous Mel Gorman
@ 2011-06-07 15:50   ` Minchan Kim
  2011-06-07 16:26     ` Mel Gorman
  2011-06-07 16:27     ` [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous V2 Mel Gorman
  0 siblings, 2 replies; 19+ messages in thread
From: Minchan Kim @ 2011-06-07 15:50 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

On Tue, Jun 07, 2011 at 04:07:05PM +0100, Mel Gorman wrote:
> Asynchronous compaction is used when promoting to huge pages. This is
> all very nice but if there are a number of processes in compacting
> memory, a large number of pages can be isolated. An "asynchronous"
> process can stall for long periods of time as a result with a user
> reporting that firefox can stall for 10s of seconds. This patch aborts
> asynchronous compaction if too many pages are isolated as it's better to
> fail a hugepage promotion than stall a process.
> 
> Reported-and-tested-by: Ury Stankevich <urykhy@gmail.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> ---
>  mm/compaction.c |   28 +++++++++++++++++++++++-----
>  1 files changed, 23 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 5c744ab..cb28580 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -251,11 +251,18 @@ static bool too_many_isolated(struct zone *zone)
>  	return isolated > (inactive + active) / 2;
>  }
>  
> +/* possible outcome of isolate_migratepages */
> +typedef enum {
> +	ISOLATE_ABORT,		/* Abort compaction now */
> +	ISOLATE_NONE,		/* No pages isolated, continue scanning */
> +	ISOLATE_SUCCESS,	/* Pages isolated, migrate */
> +} isolate_migrate_t;
> +
>  /*
>   * Isolate all pages that can be migrated from the block pointed to by
>   * the migrate scanner within compact_control.
>   */
> -static unsigned long isolate_migratepages(struct zone *zone,
> +static isolate_migrate_t isolate_migratepages(struct zone *zone,
>  					struct compact_control *cc)
>  {
>  	unsigned long low_pfn, end_pfn;
> @@ -272,7 +279,7 @@ static unsigned long isolate_migratepages(struct zone *zone,
>  	/* Do not cross the free scanner or scan within a memory hole */
>  	if (end_pfn > cc->free_pfn || !pfn_valid(low_pfn)) {
>  		cc->migrate_pfn = end_pfn;
> -		return 0;
> +		return ISOLATE_NONE;
>  	}
>  
>  	/*
> @@ -281,10 +288,14 @@ static unsigned long isolate_migratepages(struct zone *zone,
>  	 * delay for some time until fewer pages are isolated
>  	 */
>  	while (unlikely(too_many_isolated(zone))) {
> +		/* async migration should just abort */
> +		if (!cc->sync)
> +			return ISOLATE_ABORT;
> +
>  		congestion_wait(BLK_RW_ASYNC, HZ/10);
>  
>  		if (fatal_signal_pending(current))
> -			return 0;
> +			return ISOLATE_ABORT;
>  	}
>  
>  	/* Time to isolate some pages for migration */
> @@ -369,7 +380,7 @@ static unsigned long isolate_migratepages(struct zone *zone,
>  
>  	trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
>  
> -	return cc->nr_migratepages;
> +	return ISOLATE_SUCCESS;
>  }
>  
>  /*
> @@ -533,8 +544,14 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
>  		unsigned long nr_migrate, nr_remaining;
>  		int err;
>  
> -		if (!isolate_migratepages(zone, cc))
> +		switch (isolate_migratepages(zone, cc)) {
> +		case ISOLATE_ABORT:

In this case, you change the old behaviour slightly.
In the old case, we returned COMPACT_PARTIAL to cancel migration,
but this patch makes it return COMPACT_SUCCESS.
At present, the return value of compact_zone is only used by
__alloc_pages_direct_compact and it only considers COMPACT_SKIPPED,
so it would not be a problem. But I think it would be better to return
COMPACT_PARTIAL instead of COMPACT_CONTINUE for consistency with
compact_finished and the right semantics for future users of compact_zone.
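
(For reference, the possible return values of compact_zone() at the time
were roughly the following, paraphrased from memory of
include/linux/compaction.h:)

/* compaction did not start: not possible or direct reclaim preferable */
#define COMPACT_SKIPPED		0
/* compaction should continue to another pageblock */
#define COMPACT_CONTINUE	1
/* direct compaction partially compacted a zone, suitable pages exist */
#define COMPACT_PARTIAL		2
/* the full zone was compacted */
#define COMPACT_COMPLETE	3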

-- 
Kind regards
Minchan Kim

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous
  2011-06-07 15:50   ` Minchan Kim
@ 2011-06-07 16:26     ` Mel Gorman
  2011-06-07 16:27     ` [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous V2 Mel Gorman
  1 sibling, 0 replies; 19+ messages in thread
From: Mel Gorman @ 2011-06-07 16:26 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Andrea Arcangeli, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

On Wed, Jun 08, 2011 at 12:50:29AM +0900, Minchan Kim wrote:
> > <SNIP>
> > @@ -533,8 +544,14 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
> >  		unsigned long nr_migrate, nr_remaining;
> >  		int err;
> >  
> > -		if (!isolate_migratepages(zone, cc))
> > +		switch (isolate_migratepages(zone, cc)) {
> > +		case ISOLATE_ABORT:
> 
> In this case, you change old behavior slightly.
> In old case, we return COMPACT_PARTIAL to cancel migration.
> But this patch makes to return COMPACT_SUCCESS.
> At present, return value of compact_zone is only used by __alloc_pages_direct_compact
> and it only consider COMPACT_SKIPPED so it would be not a problem.
> But I think it would be better to return COMPACT_PARTIAL instead of COMPACT_CONTINUE
> for consistency with compact_finished and right semantic for the future user of compact_zone.
> 

Agreed. Thanks.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous V2
  2011-06-07 15:50   ` Minchan Kim
  2011-06-07 16:26     ` Mel Gorman
@ 2011-06-07 16:27     ` Mel Gorman
  2011-06-07 16:36       ` Minchan Kim
  2011-06-08  9:55       ` Michal Hocko
  1 sibling, 2 replies; 19+ messages in thread
From: Mel Gorman @ 2011-06-07 16:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Minchan Kim, Andrea Arcangeli, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

Changelog since V1
  o Return COMPACT_PARTIAL when aborting due to too many isolated
    pages. As pointed out by Minchan, this is better for consistency

Asynchronous compaction is used when promoting to huge pages. This is
all very nice but if a number of processes are compacting memory, a
large number of pages can be isolated. An "asynchronous" process can
stall for long periods of time as a result, with one user reporting
that firefox can stall for tens of seconds. This patch aborts
asynchronous compaction if too many pages are isolated as it's better
to fail a hugepage promotion than stall a process.

[minchan.kim@gmail.com: Return COMPACT_PARTIAL for abort]
Reported-and-tested-by: Ury Stankevich <urykhy@gmail.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/compaction.c |   29 ++++++++++++++++++++++++-----
 1 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 5c744ab..e4e0166 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -251,11 +251,18 @@ static bool too_many_isolated(struct zone *zone)
 	return isolated > (inactive + active) / 2;
 }
 
+/* possible outcome of isolate_migratepages */
+typedef enum {
+	ISOLATE_ABORT,		/* Abort compaction now */
+	ISOLATE_NONE,		/* No pages isolated, continue scanning */
+	ISOLATE_SUCCESS,	/* Pages isolated, migrate */
+} isolate_migrate_t;
+
 /*
  * Isolate all pages that can be migrated from the block pointed to by
  * the migrate scanner within compact_control.
  */
-static unsigned long isolate_migratepages(struct zone *zone,
+static isolate_migrate_t isolate_migratepages(struct zone *zone,
 					struct compact_control *cc)
 {
 	unsigned long low_pfn, end_pfn;
@@ -272,7 +279,7 @@ static unsigned long isolate_migratepages(struct zone *zone,
 	/* Do not cross the free scanner or scan within a memory hole */
 	if (end_pfn > cc->free_pfn || !pfn_valid(low_pfn)) {
 		cc->migrate_pfn = end_pfn;
-		return 0;
+		return ISOLATE_NONE;
 	}
 
 	/*
@@ -281,10 +288,14 @@ static unsigned long isolate_migratepages(struct zone *zone,
 	 * delay for some time until fewer pages are isolated
 	 */
 	while (unlikely(too_many_isolated(zone))) {
+		/* async migration should just abort */
+		if (!cc->sync)
+			return ISOLATE_ABORT;
+
 		congestion_wait(BLK_RW_ASYNC, HZ/10);
 
 		if (fatal_signal_pending(current))
-			return 0;
+			return ISOLATE_ABORT;
 	}
 
 	/* Time to isolate some pages for migration */
@@ -369,7 +380,7 @@ static unsigned long isolate_migratepages(struct zone *zone,
 
 	trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
 
-	return cc->nr_migratepages;
+	return ISOLATE_SUCCESS;
 }
 
 /*
@@ -533,8 +544,15 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 		unsigned long nr_migrate, nr_remaining;
 		int err;
 
-		if (!isolate_migratepages(zone, cc))
+		switch (isolate_migratepages(zone, cc)) {
+		case ISOLATE_ABORT:
+			ret = COMPACT_PARTIAL;
+			goto out;
+		case ISOLATE_NONE:
 			continue;
+		case ISOLATE_SUCCESS:
+			;
+		}
 
 		nr_migrate = cc->nr_migratepages;
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
@@ -558,6 +576,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 
 	}
 
+out:
 	/* Release free pages and check accounting */
 	cc->nr_freepages -= release_freepages(&cc->freepages);
 	VM_BUG_ON(cc->nr_freepages != 0);

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous V2
  2011-06-07 16:27     ` [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous V2 Mel Gorman
@ 2011-06-07 16:36       ` Minchan Kim
  2011-06-08  9:55       ` Michal Hocko
  1 sibling, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2011-06-07 16:36 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Thomas Sattler, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

On Tue, Jun 07, 2011 at 05:27:11PM +0100, Mel Gorman wrote:
> Changelog since V1
>   o Return COMPACT_PARTIAL when aborting due to too many isolated
>     pages. As pointed out by Minchan, this is better for consistency
> 
> Asynchronous compaction is used when promoting to huge pages. This is
> all very nice but if there are a number of processes in compacting
> memory, a large number of pages can be isolated. An "asynchronous"
> process can stall for long periods of time as a result with a user
> reporting that firefox can stall for 10s of seconds. This patch aborts
> asynchronous compaction if too many pages are isolated as it's better to
> fail a hugepage promotion than stall a process.
> 
> [minchan.kim@gmail.com: Return COMPACT_PARTIAL for abort]
> Reported-and-tested-by: Ury Stankevich <urykhy@gmail.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>

-- 
Kind regards
Minchan Kim

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/4] mm: compaction: Ensure that the compaction free scanner does not move to the next zone
  2011-06-07 15:07 ` [PATCH 1/4] mm: compaction: Ensure that the compaction free scanner does not move to the next zone Mel Gorman
@ 2011-06-08  9:33   ` Michal Hocko
  0 siblings, 0 replies; 19+ messages in thread
From: Michal Hocko @ 2011-06-08  9:33 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Minchan Kim, Thomas Sattler,
	Ury Stankevich, Andi Kleen, linux-mm, linux-kernel

On Tue 07-06-11 16:07:02, Mel Gorman wrote:
> Compaction works with two scanners, a migration and a free
> scanner. When the scanners crossover, migration within the zone is
> complete. The location of the scanner is recorded on each cycle to
> avoid excesive scanning.
> 
> When a zone is small and mostly reserved, it's very easy for the
> migration scanner to be close to the end of the zone. Then the following
> situation can occurs
> 
>   o migration scanner isolates some pages near the end of the zone
>   o free scanner starts at the end of the zone but finds that the
>     migration scanner is already there
>   o free scanner gets reinitialised for the next cycle as
>     cc->migrate_pfn + pageblock_nr_pages
>     moving the free scanner into the next zone
>   o migration scanner moves into the next zone
> 
> When this happens, NR_ISOLATED accounting goes haywire because some
> of the accounting happens against the wrong zone. One zones counter
> remains positive while the other goes negative even though the overall
> global count is accurate. This was reported on X86-32 with !SMP because
> !SMP allows the negative counters to be visible. The fact that it is
> difficult to reproduce on X86-64 is probably just a co-incidence as
> the bug should theoritically be possible there.
> 
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> Reviewed-by: Minchan Kim <minchan.kim@gmail.com>

Reviewed-by: Michal Hocko <mhocko@suse.cz>

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 2/4] mm: vmscan: Do not use page_count without a page pin
  2011-06-07 15:07 ` [PATCH 2/4] mm: vmscan: Do not use page_count without a page pin Mel Gorman
  2011-06-07 15:12   ` Minchan Kim
@ 2011-06-08  9:38   ` Michal Hocko
  1 sibling, 0 replies; 19+ messages in thread
From: Michal Hocko @ 2011-06-08  9:38 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Minchan Kim, Thomas Sattler,
	Ury Stankevich, Andi Kleen, linux-mm, linux-kernel

On Tue 07-06-11 16:07:03, Mel Gorman wrote:
> From: Andrea Arcangeli <aarcange@redhat.com>
> 
> From: Andrea Arcangeli <aarcange@redhat.com>
> 
> It is unsafe to run page_count during the physical pfn scan because
> compound_head could trip on a dangling pointer when reading
> page->first_page if the compound page is being freed by another CPU.
> 
> [mgorman@suse.de: Split out patch]
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>

Reviewed-by: Michal Hocko <mhocko@suse.cz>
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous V2
  2011-06-07 16:27     ` [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous V2 Mel Gorman
  2011-06-07 16:36       ` Minchan Kim
@ 2011-06-08  9:55       ` Michal Hocko
  1 sibling, 0 replies; 19+ messages in thread
From: Michal Hocko @ 2011-06-08  9:55 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Minchan Kim, Andrea Arcangeli, Thomas Sattler,
	Ury Stankevich, Andi Kleen, linux-mm, linux-kernel

On Tue 07-06-11 17:27:11, Mel Gorman wrote:
> Changelog since V1
>   o Return COMPACT_PARTIAL when aborting due to too many isolated
>     pages. As pointed out by Minchan, this is better for consistency
> 
> Asynchronous compaction is used when promoting to huge pages. This is
> all very nice but if there are a number of processes in compacting
> memory, a large number of pages can be isolated. An "asynchronous"
> process can stall for long periods of time as a result with a user
> reporting that firefox can stall for 10s of seconds. This patch aborts
> asynchronous compaction if too many pages are isolated as it's better to
> fail a hugepage promotion than stall a process.
> 
> [minchan.kim@gmail.com: Return COMPACT_PARTIAL for abort]
> Reported-and-tested-by: Ury Stankevich <urykhy@gmail.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>

Reviewed-by: Michal Hocko <mhocko@suse.cz>
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 3/4] mm: memory-failure: Fix isolated page count during memory failure
  2011-06-07 15:07 ` [PATCH 3/4] mm: memory-failure: Fix isolated page count during memory failure Mel Gorman
  2011-06-07 15:14   ` Minchan Kim
@ 2011-06-08 10:07   ` Michal Hocko
  2011-06-08 10:11     ` Michal Hocko
  1 sibling, 1 reply; 19+ messages in thread
From: Michal Hocko @ 2011-06-08 10:07 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Minchan Kim, Thomas Sattler,
	Ury Stankevich, Andi Kleen, linux-mm, linux-kernel

On Tue 07-06-11 16:07:04, Mel Gorman wrote:
> From: Minchan Kim <minchan.kim@gmail.com>
> 
> From: Minchan Kim <minchan.kim@gmail.com>
> 
> Pages isolated for migration are accounted with the vmstat counters
> NR_ISOLATE_[ANON|FILE]. Callers of migrate_pages() are expected to
> increment these counters when pages are isolated from the LRU. Once
> the pages have been migrated, they are put back on the LRU or freed
> and the isolated count is decremented.

Aren't we missing this in compact_zone as well? AFAICS there is no
accounting done after we isolate pages from LRU? Or am I missing
something?

> 
> Memory failure is not properly accounting for pages it isolates
> causing the NR_ISOLATED counters to be negative. On SMP builds,
> this goes unnoticed as negative counters are treated as 0 due to
> expected per-cpu drift. On UP builds, the counter is treated by
> too_many_isolated() as a large value causing processes to enter D
> state during page reclaim or compaction. This patch accounts for
> pages isolated by memory failure correctly.
> 
> [mgorman@suse.de: Updated changelog]
> Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
> Signed-off-by: Mel Gorman <mgorman@suse.de>
> Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>

Reviewed-by: Michal Hocko <mhocko@suse.cz>

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 3/4] mm: memory-failure: Fix isolated page count during memory failure
  2011-06-08 10:07   ` Michal Hocko
@ 2011-06-08 10:11     ` Michal Hocko
  0 siblings, 0 replies; 19+ messages in thread
From: Michal Hocko @ 2011-06-08 10:11 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Minchan Kim, Thomas Sattler,
	Ury Stankevich, Andi Kleen, linux-mm, linux-kernel

On Wed 08-06-11 12:07:20, Michal Hocko wrote:
> On Tue 07-06-11 16:07:04, Mel Gorman wrote:
> > From: Minchan Kim <minchan.kim@gmail.com>
> > 
> > From: Minchan Kim <minchan.kim@gmail.com>
> > 
> > Pages isolated for migration are accounted with the vmstat counters
> > NR_ISOLATE_[ANON|FILE]. Callers of migrate_pages() are expected to
> > increment these counters when pages are isolated from the LRU. Once
> > the pages have been migrated, they are put back on the LRU or freed
> > and the isolated count is decremented.
> 
> Aren't we missing this in compact_zone as well? AFAICS there is no
> accounting done after we isolate pages from LRU? Or am I missing
> something?

Scratch that. It was hidden in acct_isolated, which is called from
isolate_migratepages.
It would be really strange if this was broken ;)
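
For anyone else chasing the same thing, acct_isolated() does roughly the
following (paraphrased rather than copied from a particular tree):

/*
 * Walk the list of pages the migration scanner just isolated and charge
 * them to the zone's NR_ISOLATED_ANON/NR_ISOLATED_FILE counters.
 */
static void acct_isolated(struct zone *zone, struct compact_control *cc)
{
	struct page *page;
	unsigned int count[2] = { 0, };

	list_for_each_entry(page, &cc->migratepages, lru)
		count[!!page_is_file_cache(page)]++;

	__mod_zone_page_state(zone, NR_ISOLATED_ANON, count[0]);
	__mod_zone_page_state(zone, NR_ISOLATED_FILE, count[1]);
}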

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting
  2011-06-07 15:07 [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting Mel Gorman
                   ` (3 preceding siblings ...)
  2011-06-07 15:07 ` [PATCH 4/4] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous Mel Gorman
@ 2011-07-17  8:52 ` Thomas Sattler
  2011-07-19  9:16   ` Mel Gorman
  4 siblings, 1 reply; 19+ messages in thread
From: Thomas Sattler @ 2011-07-17  8:52 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Minchan Kim, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

Hi there ...

> Re-verification from testers that these patches really do fix their
> problems would be appreciated. Even if hangs disappear, please confirm
> that the values for nr_isolated_anon and nr_isolated_file in *both*
> /proc/zoneinfo and /proc/vmstat are sensible (i.e. usually zero).

I applied these patches to 2.6.38.8 and it ran for nearly a month
without any problems, even though I did not check nr_isolated_*.
As (at least) patch 3 made it into 2.6.39.3, I did not apply the
others any more. And it occurred again this morning:


  sysload at 2, iowait at 80% on an idle (single core) system


$ grep nr_isolated /proc/zoneinfo /proc/vmstat
/proc/zoneinfo:    nr_isolated_anon 0
/proc/zoneinfo:    nr_isolated_file 1
/proc/zoneinfo:    nr_isolated_anon 0
/proc/zoneinfo:    nr_isolated_file 4294967295
/proc/zoneinfo:    nr_isolated_anon 0
/proc/zoneinfo:    nr_isolated_file 0
/proc/vmstat:nr_isolated_anon 0
/proc/vmstat:nr_isolated_file 0

I captured several SysRq-* logs, in case they're of interest.

Thomas

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting
  2011-07-17  8:52 ` [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting Thomas Sattler
@ 2011-07-19  9:16   ` Mel Gorman
  2011-07-19 11:24     ` Thomas Sattler
  0 siblings, 1 reply; 19+ messages in thread
From: Mel Gorman @ 2011-07-19  9:16 UTC (permalink / raw)
  To: Thomas Sattler
  Cc: Andrew Morton, Andrea Arcangeli, Minchan Kim, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

On Sun, Jul 17, 2011 at 10:52:12AM +0200, Thomas Sattler wrote:
> Hi there ...
> 
> > Re-verification from testers that these patches really do fix their
> > problems would be appreciated. Even if hangs disappear, please confirm
> > that the values for nr_isolated_anon and nr_isolated_file in *both*
> > /proc/zoneinfo and /proc/vmstat are sensible (i.e. usually zero).
> 
> I applied these patches to 2.6.38.8 and it run for nearly a month
> without any problems. Even Though I did not check nr_isolated_*.
> As (at least) patch3 made it into 2.6.39.3 I did not apply the
> others any more. And it occurred again this morning:
> 

I assume you mean it occurred again on 2.6.39.3, so I have submitted
them to -stable for 2.6.39.x. You're cc'd, so you should hear when or if
they get picked up.

Thanks for testing.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 0/4] Fix compaction stalls due to accounting errors in isolated page accounting
  2011-07-19  9:16   ` Mel Gorman
@ 2011-07-19 11:24     ` Thomas Sattler
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Sattler @ 2011-07-19 11:24 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andrea Arcangeli, Minchan Kim, Ury Stankevich,
	Andi Kleen, linux-mm, linux-kernel

> I assume you mean it occured again on 2.6.39.3 and so have submitted
> them to -stable for 2.6.39.x. You're cc'd so you should hear when or if
> they get picked up.

Yes, that's what I meant:

$ uname -a
Linux pearl 2.6.39.3 #14 PREEMPT Thu Jul 14 17:41:20 CEST 2011 i686
Intel(R) Pentium(R) M processor 1700MHz GenuineIntel GNU/Linux

$ grep nr_isolated /proc/zoneinfo /proc/vmstat
/proc/zoneinfo:    nr_isolated_anon 25
/proc/zoneinfo:    nr_isolated_file 7
/proc/zoneinfo:    nr_isolated_anon 4294967271
/proc/zoneinfo:    nr_isolated_file 4294967289
/proc/zoneinfo:    nr_isolated_anon 0
/proc/zoneinfo:    nr_isolated_file 0
/proc/vmstat:nr_isolated_anon 0
/proc/vmstat:nr_isolated_file 0

$ top  # only headlines
top - 13:23:27 up 2 days,  2:52,  6 users,  load average: 2.30, 2.09, 1.69
Tasks: 132 total,   3 running, 129 sleeping,   0 stopped,   0 zombie
Cpu(s): 16.9%us,  5.6%sy,  0.0%ni,  0.0%id, 77.4%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1551416k total,  1449088k used,   102328k free,    80736k buffers
Swap:  4120660k total,    48776k used,  4071884k free,   721312k cached

Thomas

^ permalink raw reply	[flat|nested] 19+ messages in thread
