From: Mel Gorman <mgorman@suse.de>
To: Stable <stable@vger.kernel.org>
Cc: Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 18/34] mm: page allocator: Do not call direct reclaim for THP allocations while compaction is deferred
Date: Mon, 23 Jul 2012 14:38:31 +0100	[thread overview]
Message-ID: <1343050727-3045-19-git-send-email-mgorman@suse.de> (raw)
In-Reply-To: <1343050727-3045-1-git-send-email-mgorman@suse.de>

commit 66199712e9eef5aede09dbcd9dfff87798a66917 upstream.

Stable note: Not tracked in Bugzilla. This was part of a series that
	reduced interactivity stalls experienced when THP was enabled.

If compaction is deferred, direct reclaim is used to try to free enough
pages for the allocation to succeed. For small high-order allocations,
this has a reasonable chance of success. However, if the caller has
specified __GFP_NO_KSWAPD to limit disruption to the system, it makes
more sense to fail the allocation rather than stall the caller in
direct reclaim. This patch skips direct reclaim if compaction is
deferred and the caller specifies __GFP_NO_KSWAPD.
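
In the allocator slow path this amounts to a single early exit placed
between the compaction and direct reclaim attempts, roughly (excerpted
and simplified from the hunk below):

    /* Sketch: fail fast rather than stall a __GFP_NO_KSWAPD caller */
    if (deferred_compaction && (gfp_mask & __GFP_NO_KSWAPD))
        goto nopage;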

Async compaction only considers a subset of pageblocks, so it is
possible for compaction to be deferred prematurely, causing allocations
to fail without entering direct reclaim even in cases where reclaim
should be attempted. To compensate for this, this patch also defers
compaction only if sync compaction failed.
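
The compensation is visible in the __alloc_pages_direct_compact() hunks
below; in sketch form, the deferral is reported back to the caller, and
a compaction failure only defers future attempts if it was sync:

    /* Sketch: tell the slow path that compaction was deferred */
    if (compaction_deferred(preferred_zone)) {
        *deferred_compaction = true;
        return NULL;
    }

    /* ... and on COMPACTFAIL, defer only after a sync failure */
    if (sync_migration)
        defer_compaction(preferred_zone);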

Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Andy Isaacson <adi@hexapodia.org>
Cc: Nai Xia <nai.xia@gmail.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
 mm/page_alloc.c |   45 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 35 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e568b80..257acae 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1897,14 +1897,20 @@ static struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	struct zonelist *zonelist, enum zone_type high_zoneidx,
 	nodemask_t *nodemask, int alloc_flags, struct zone *preferred_zone,
-	int migratetype, unsigned long *did_some_progress,
-	bool sync_migration)
+	int migratetype, bool sync_migration,
+	bool *deferred_compaction,
+	unsigned long *did_some_progress)
 {
 	struct page *page;
 
-	if (!order || compaction_deferred(preferred_zone))
+	if (!order)
 		return NULL;
 
+	if (compaction_deferred(preferred_zone)) {
+		*deferred_compaction = true;
+		return NULL;
+	}
+
 	current->flags |= PF_MEMALLOC;
 	*did_some_progress = try_to_compact_pages(zonelist, order, gfp_mask,
 						nodemask, sync_migration);
@@ -1932,7 +1938,13 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 		 * but not enough to satisfy watermarks.
 		 */
 		count_vm_event(COMPACTFAIL);
-		defer_compaction(preferred_zone);
+
+		/*
+		 * As async compaction considers a subset of pageblocks, only
+		 * defer if the failure was a sync compaction failure.
+		 */
+		if (sync_migration)
+			defer_compaction(preferred_zone);
 
 		cond_resched();
 	}
@@ -1944,8 +1956,9 @@ static inline struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 	struct zonelist *zonelist, enum zone_type high_zoneidx,
 	nodemask_t *nodemask, int alloc_flags, struct zone *preferred_zone,
-	int migratetype, unsigned long *did_some_progress,
-	bool sync_migration)
+	int migratetype, bool sync_migration,
+	bool *deferred_compaction,
+	unsigned long *did_some_progress)
 {
 	return NULL;
 }
@@ -2095,6 +2108,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	bool sync_migration = false;
+	bool deferred_compaction = false;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -2175,12 +2189,22 @@ rebalance:
 					zonelist, high_zoneidx,
 					nodemask,
 					alloc_flags, preferred_zone,
-					migratetype, &did_some_progress,
-					sync_migration);
+					migratetype, sync_migration,
+					&deferred_compaction,
+					&did_some_progress);
 	if (page)
 		goto got_pg;
 	sync_migration = true;
 
+	/*
+	 * If compaction is deferred for high-order allocations, it is because
+	 * sync compaction recently failed. If this is the case and the caller
+	 * has requested the system not be heavily disrupted, fail the
+	 * allocation now instead of entering direct reclaim
+	 */
+	if (deferred_compaction && (gfp_mask & __GFP_NO_KSWAPD))
+		goto nopage;
+
 	/* Try direct reclaim and then allocating */
 	page = __alloc_pages_direct_reclaim(gfp_mask, order,
 					zonelist, high_zoneidx,
@@ -2243,8 +2267,9 @@ rebalance:
 					zonelist, high_zoneidx,
 					nodemask,
 					alloc_flags, preferred_zone,
-					migratetype, &did_some_progress,
-					sync_migration);
+					migratetype, sync_migration,
+					&deferred_compaction,
+					&did_some_progress);
 		if (page)
 			goto got_pg;
 	}
-- 
1.7.9.2

Thread overview: 119+ messages
2012-07-23 13:38 [PATCH 00/34] Memory management performance backports for -stable V2 Mel Gorman
2012-07-23 13:38 ` [PATCH 01/34] mm: vmstat: cache align vm_stat Mel Gorman
2012-07-23 13:38 ` [PATCH 02/34] mm: memory hotplug: Check if pages are correctly reserved on a per-section basis Mel Gorman
2012-07-23 13:38 ` [PATCH 03/34] mm: Reduce the amount of work done when updating min_free_kbytes Mel Gorman
2012-07-24 22:47   ` Greg KH
2012-07-25  7:57     ` Mel Gorman
2012-07-23 13:38 ` [PATCH 04/34] mm: vmscan: fix force-scanning small targets without swap Mel Gorman
2012-07-23 13:38 ` [PATCH 05/34] vmscan: clear ZONE_CONGESTED for zone with good watermark Mel Gorman
2012-07-23 13:38 ` [PATCH 06/34] vmscan: add shrink_slab tracepoints Mel Gorman
2012-07-23 13:38 ` [PATCH 07/34] vmscan: shrinker->nr updates race and go wrong Mel Gorman
2012-07-23 13:38 ` [PATCH 08/34] vmscan: reduce wind up shrinker->nr when shrinker can't do work Mel Gorman
2012-07-23 13:38 ` [PATCH 09/34] mm: limit direct reclaim for higher order allocations Mel Gorman
2012-07-23 13:38 ` [PATCH 10/34] mm: Abort reclaim/compaction if compaction can proceed Mel Gorman
2012-07-23 13:38 ` [PATCH 11/34] mm: compaction: trivial clean up in acct_isolated() Mel Gorman
2012-07-23 13:38 ` [PATCH 12/34] mm: change isolate mode from #define to bitwise type Mel Gorman
2012-07-23 13:38 ` [PATCH 13/34] mm: compaction: make isolate_lru_page() filter-aware Mel Gorman
2012-07-23 13:38 ` [PATCH 14/34] mm: zone_reclaim: " Mel Gorman
2012-07-23 13:38 ` [PATCH 15/34] mm: migration: clean up unmap_and_move() Mel Gorman
2012-07-25 15:45   ` Greg KH
2012-07-25 16:04     ` Mel Gorman
2012-07-25 18:03       ` Greg KH
2012-07-23 13:38 ` [PATCH 16/34] mm: compaction: Allow compaction to isolate dirty pages Mel Gorman
2012-07-25 15:47   ` Greg KH
2012-07-25 16:07     ` Mel Gorman
2012-07-23 13:38 ` [PATCH 17/34] mm: compaction: Determine if dirty pages can be migrated without blocking within ->migratepage Mel Gorman
2012-07-23 13:38 ` [PATCH 18/34] mm: page allocator: Do not call direct reclaim for THP allocations while compaction is deferred Mel Gorman [this message]
2012-07-23 13:38 ` [PATCH 19/34] mm: compaction: make isolate_lru_page() filter-aware again Mel Gorman
2012-07-23 13:38 ` [PATCH 20/34] kswapd: avoid unnecessary rebalance after an unsuccessful balancing Mel Gorman
2012-07-23 13:38 ` [PATCH 21/34] kswapd: assign new_order and new_classzone_idx after wakeup in sleeping Mel Gorman
2012-07-23 13:38 ` [PATCH 22/34] mm: compaction: Introduce sync-light migration for use by compaction Mel Gorman
2012-07-23 13:38 ` [PATCH 23/34] mm: vmscan: When reclaiming for compaction, ensure there are sufficient free pages available Mel Gorman
2012-07-23 13:38 ` [PATCH 24/34] mm: vmscan: Do not OOM if aborting reclaim to start compaction Mel Gorman
2012-07-23 13:38 ` [PATCH 25/34] mm: vmscan: Check if reclaim should really abort even if compaction_ready() is true for one zone Mel Gorman
2012-07-25 19:51   ` Greg KH
2012-07-23 13:38 ` [PATCH 26/34] vmscan: promote shared file mapped pages Mel Gorman
2012-07-23 13:38 ` [PATCH 27/34] vmscan: activate executable pages after first usage Mel Gorman
2012-07-23 13:38 ` [PATCH 28/34] mm/vmscan.c: consider swap space when deciding whether to continue reclaim Mel Gorman
2012-07-23 13:38 ` [PATCH 29/34] mm: test PageSwapBacked in lumpy reclaim Mel Gorman
2012-07-23 13:38 ` [PATCH 30/34] mm: vmscan: Do not force kswapd to scan small targets Mel Gorman
2012-07-25 19:59   ` Greg KH
2012-07-25 21:35     ` Mel Gorman
2012-07-25 21:44       ` Greg KH
2012-07-23 13:38 ` [PATCH 31/34] cpusets: avoid looping when storing to mems_allowed if one node remains set Mel Gorman
2012-07-23 13:38 ` [PATCH 32/34] cpusets: stall when updating mems_allowed for mempolicy or disjoint nodemask Mel Gorman
2012-07-23 13:38 ` [PATCH 33/34] cpuset: mm: Reduce large amounts of memory barrier related damage v3 Mel Gorman
2012-07-23 13:38 ` [PATCH 34/34] mm/hugetlb: fix warning in alloc_huge_page/dequeue_huge_page_vma Mel Gorman
2012-07-24  5:58 ` [PATCH 00/34] Memory management performance backports for -stable V2 Mike Galbraith
2012-07-24  8:10   ` Mel Gorman
2012-07-24 13:18   ` Hillf Danton
2012-07-24 13:27     ` Mel Gorman
2012-07-24 13:34       ` Hillf Danton
2012-07-24 13:53         ` Mel Gorman
2012-07-24 14:11           ` Hillf Danton
2012-07-24 13:52     ` Mike Galbraith
2012-07-24 14:18       ` Hillf Danton
2012-07-24 14:41         ` Mike Galbraith
2012-07-25 22:30 ` Greg KH
2012-07-25 22:48   ` Mel Gorman
2012-07-30  1:13 ` Ben Hutchings
  -- strict thread matches above, loose matches on Subject: below --
2012-07-19 14:36 [PATCH 00/34] Memory management performance backports for -stable Mel Gorman
2012-07-19 14:36 ` [PATCH 18/34] mm: page allocator: Do not call direct reclaim for THP allocations while compaction is deferred Mel Gorman
