From: Mel Gorman <mgorman@suse.de>
To: Stable <stable@vger.kernel.org>
Cc: Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 23/34] mm: vmscan: When reclaiming for compaction, ensure there are sufficient free pages available
Date: Mon, 23 Jul 2012 14:38:36 +0100	[thread overview]
Message-ID: <1343050727-3045-24-git-send-email-mgorman@suse.de> (raw)
In-Reply-To: <1343050727-3045-1-git-send-email-mgorman@suse.de>

commit fe4b1b244bdb96136855f2c694071cb09d140766 upstream.

Stable note: Not tracked on Bugzilla. THP and compaction were found to
	aggressively reclaim pages and stall systems under different
	situations; these issues were addressed piecemeal over time. This
	patch addresses a problem where one of those fixes regressed THP
	allocation success rates.

In commit [e0887c19: vmscan: limit direct reclaim for higher order
allocations], Rik noted that reclaim was too aggressive when THP was
enabled. In his initial patch he used the number of free pages to
decide if reclaim should abort for compaction. My feedback was that
reclaim and compaction should be using the same logic when deciding if
reclaim should be aborted.

Unfortunately, this had the effect of reducing THP success rates when
the workload included something like streaming reads that continually
allocated pages. The window during which compaction could run and return
a THP was too small.

This patch combines Rik's two patches. compaction_suitable() is still
used to decide if reclaim should be aborted to allow compaction to
proceed. However, reclaim will also ensure that there is a reasonable
buffer of free pages available. This improves THP allocation success
rates while bounding the number of pages that are freed for compaction.
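
To give a sense of the size of that buffer, below is a rough worked
example of the threshold arithmetic used by compaction_ready() for an
order-9 request (a THP with 4K pages), written as a small userspace
sanity check. The zone size and watermark values are hypothetical and
chosen purely for illustration; KSWAPD_ZONE_BALANCE_GAP_RATIO is 100 in
kernels of this era.

  /* Illustration only, not part of the patch: mirrors the calculation
   * in compaction_ready() with made-up zone values. */
  #include <stdio.h>

  int main(void)
  {
  	unsigned long present_pages = 1000000UL; /* ~4GB zone with 4K pages */
  	unsigned long low_wmark = 3000UL;        /* hypothetical            */
  	unsigned long high_wmark = 4000UL;       /* hypothetical            */
  	unsigned long gap_ratio = 100UL;         /* KSWAPD_ZONE_BALANCE_GAP_RATIO */
  	int order = 9;                           /* THP on x86-64, 4K pages */

  	/* balance_gap = min(low_wmark, present_pages / ratio, rounded up) */
  	unsigned long per_ratio = (present_pages + gap_ratio - 1) / gap_ratio;
  	unsigned long balance_gap = low_wmark < per_ratio ? low_wmark : per_ratio;

  	/* high watermark + balance gap + twice the requested allocation */
  	unsigned long watermark = high_wmark + balance_gap + (2UL << order);

  	/* The zone_watermark_ok_safe() call in the patch is an order-0
  	 * check, so reclaim continues until roughly this many pages are
  	 * free before it aborts in favour of compaction. */
  	printf("reclaim target: %lu free pages (~%lu MB)\n",
  	       watermark, watermark * 4 / 1024);
  	return 0;
  }

With these hypothetical numbers the target works out to 8,024 free pages
(about 31MB): 4,000 (high watermark) + 3,000 (balance gap) + 1,024 (2UL << 9).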

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Andy Isaacson <adi@hexapodia.org>
Cc: Nai Xia <nai.xia@gmail.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/vmscan.c |   44 +++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 39 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b8c1fc0..e85abfd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2075,6 +2075,42 @@ restart:
 	throttle_vm_writeout(sc->gfp_mask);
 }
 
+/* Returns true if compaction should go ahead for a high-order request */
+static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
+{
+	unsigned long balance_gap, watermark;
+	bool watermark_ok;
+
+	/* Do not consider compaction for orders reclaim is meant to satisfy */
+	if (sc->order <= PAGE_ALLOC_COSTLY_ORDER)
+		return false;
+
+	/*
+	 * Compaction takes time to run and there are potentially other
+	 * callers using the pages just freed. Continue reclaiming until
+	 * there is a buffer of free pages available to give compaction
+	 * a reasonable chance of completing and allocating the page
+	 */
+	balance_gap = min(low_wmark_pages(zone),
+		(zone->present_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
+			KSWAPD_ZONE_BALANCE_GAP_RATIO);
+	watermark = high_wmark_pages(zone) + balance_gap + (2UL << sc->order);
+	watermark_ok = zone_watermark_ok_safe(zone, 0, watermark, 0, 0);
+
+	/*
+	 * If compaction is deferred, reclaim up to a point where
+	 * compaction will have a chance of success when re-enabled
+	 */
+	if (compaction_deferred(zone))
+		return watermark_ok;
+
+	/* If compaction is not ready to start, keep reclaiming */
+	if (!compaction_suitable(zone, sc->order))
+		return false;
+
+	return watermark_ok;
+}
+
 /*
  * This is the direct reclaim path, for page-allocating processes.  We only
  * try to reclaim pages from zones which will satisfy the caller's allocation
@@ -2092,8 +2128,8 @@ restart:
  * scan then give up on it.
  *
  * This function returns true if a zone is being reclaimed for a costly
- * high-order allocation and compaction is either ready to begin or deferred.
- * This indicates to the caller that it should retry the allocation or fail.
+ * allocation and compaction is ready to begin. This indicates to the caller
+ * that it should retry the allocation or fail.
  */
 static bool shrink_zones(int priority, struct zonelist *zonelist,
 					struct scan_control *sc)
@@ -2127,9 +2163,7 @@ static bool shrink_zones(int priority, struct zonelist *zonelist,
 				 * noticable problem, like transparent huge page
 				 * allocations.
 				 */
-				if (sc->order > PAGE_ALLOC_COSTLY_ORDER &&
-					(compaction_suitable(zone, sc->order) ||
-					 compaction_deferred(zone))) {
+				if (compaction_ready(zone, sc)) {
 					should_abort_reclaim = true;
 					continue;
 				}
-- 
1.7.9.2


Thread overview: 119+ messages

2012-07-23 13:38 [PATCH 00/34] Memory management performance backports for -stable V2 Mel Gorman
2012-07-23 13:38 ` [PATCH 01/34] mm: vmstat: cache align vm_stat Mel Gorman
2012-07-23 13:38 ` [PATCH 02/34] mm: memory hotplug: Check if pages are correctly reserved on a per-section basis Mel Gorman
2012-07-23 13:38 ` [PATCH 03/34] mm: Reduce the amount of work done when updating min_free_kbytes Mel Gorman
2012-07-24 22:47   ` Greg KH
2012-07-25  7:57     ` Mel Gorman
2012-07-23 13:38 ` [PATCH 04/34] mm: vmscan: fix force-scanning small targets without swap Mel Gorman
2012-07-23 13:38 ` [PATCH 05/34] vmscan: clear ZONE_CONGESTED for zone with good watermark Mel Gorman
2012-07-23 13:38 ` [PATCH 06/34] vmscan: add shrink_slab tracepoints Mel Gorman
2012-07-23 13:38 ` [PATCH 07/34] vmscan: shrinker->nr updates race and go wrong Mel Gorman
2012-07-23 13:38 ` [PATCH 08/34] vmscan: reduce wind up shrinker->nr when shrinker can't do work Mel Gorman
2012-07-23 13:38 ` [PATCH 09/34] mm: limit direct reclaim for higher order allocations Mel Gorman
2012-07-23 13:38 ` [PATCH 10/34] mm: Abort reclaim/compaction if compaction can proceed Mel Gorman
2012-07-23 13:38 ` [PATCH 11/34] mm: compaction: trivial clean up in acct_isolated() Mel Gorman
2012-07-23 13:38 ` [PATCH 12/34] mm: change isolate mode from #define to bitwise type Mel Gorman
2012-07-23 13:38 ` [PATCH 13/34] mm: compaction: make isolate_lru_page() filter-aware Mel Gorman
2012-07-23 13:38 ` [PATCH 14/34] mm: zone_reclaim: " Mel Gorman
2012-07-23 13:38 ` [PATCH 15/34] mm: migration: clean up unmap_and_move() Mel Gorman
2012-07-25 15:45   ` Greg KH
2012-07-25 16:04     ` Mel Gorman
2012-07-25 18:03       ` Greg KH
2012-07-23 13:38 ` [PATCH 16/34] mm: compaction: Allow compaction to isolate dirty pages Mel Gorman
2012-07-25 15:47   ` Greg KH
2012-07-25 16:07     ` Mel Gorman
2012-07-23 13:38 ` [PATCH 17/34] mm: compaction: Determine if dirty pages can be migrated without blocking within ->migratepage Mel Gorman
2012-07-23 13:38 ` [PATCH 18/34] mm: page allocator: Do not call direct reclaim for THP allocations while compaction is deferred Mel Gorman
2012-07-23 13:38 ` [PATCH 19/34] mm: compaction: make isolate_lru_page() filter-aware again Mel Gorman
2012-07-23 13:38 ` [PATCH 20/34] kswapd: avoid unnecessary rebalance after an unsuccessful balancing Mel Gorman
2012-07-23 13:38 ` [PATCH 21/34] kswapd: assign new_order and new_classzone_idx after wakeup in sleeping Mel Gorman
2012-07-23 13:38 ` [PATCH 22/34] mm: compaction: Introduce sync-light migration for use by compaction Mel Gorman
2012-07-23 13:38 ` [PATCH 23/34] mm: vmscan: When reclaiming for compaction, ensure there are sufficient free pages available Mel Gorman [this message]
2012-07-23 13:38 ` [PATCH 24/34] mm: vmscan: Do not OOM if aborting reclaim to start compaction Mel Gorman
2012-07-23 13:38 ` [PATCH 25/34] mm: vmscan: Check if reclaim should really abort even if compaction_ready() is true for one zone Mel Gorman
2012-07-25 19:51   ` Greg KH
2012-07-23 13:38 ` [PATCH 26/34] vmscan: promote shared file mapped pages Mel Gorman
2012-07-23 13:38 ` [PATCH 27/34] vmscan: activate executable pages after first usage Mel Gorman
2012-07-23 13:38 ` [PATCH 28/34] mm/vmscan.c: consider swap space when deciding whether to continue reclaim Mel Gorman
2012-07-23 13:38 ` [PATCH 29/34] mm: test PageSwapBacked in lumpy reclaim Mel Gorman
2012-07-23 13:38 ` [PATCH 30/34] mm: vmscan: Do not force kswapd to scan small targets Mel Gorman
2012-07-25 19:59   ` Greg KH
2012-07-25 21:35     ` Mel Gorman
2012-07-25 21:44       ` Greg KH
2012-07-23 13:38 ` [PATCH 31/34] cpusets: avoid looping when storing to mems_allowed if one node remains set Mel Gorman
2012-07-23 13:38 ` [PATCH 32/34] cpusets: stall when updating mems_allowed for mempolicy or disjoint nodemask Mel Gorman
2012-07-23 13:38 ` [PATCH 33/34] cpuset: mm: Reduce large amounts of memory barrier related damage v3 Mel Gorman
2012-07-23 13:38 ` [PATCH 34/34] mm/hugetlb: fix warning in alloc_huge_page/dequeue_huge_page_vma Mel Gorman
2012-07-24  5:58 ` [PATCH 00/34] Memory management performance backports for -stable V2 Mike Galbraith
2012-07-24  8:10   ` Mel Gorman
2012-07-24 13:18   ` Hillf Danton
2012-07-24 13:27     ` Mel Gorman
2012-07-24 13:34       ` Hillf Danton
2012-07-24 13:53         ` Mel Gorman
2012-07-24 14:11           ` Hillf Danton
2012-07-24 13:52     ` Mike Galbraith
2012-07-24 14:18       ` Hillf Danton
2012-07-24 14:41         ` Mike Galbraith
2012-07-25 22:30 ` Greg KH
2012-07-25 22:48   ` Mel Gorman
2012-07-30  1:13 ` Ben Hutchings
  -- strict thread matches above, loose matches on Subject: below --
2012-07-19 14:36 [PATCH 00/34] Memory management performance backports for -stable Mel Gorman
2012-07-19 14:36 ` [PATCH 23/34] mm: vmscan: When reclaiming for compaction, ensure there are sufficient free pages available Mel Gorman
