From: Mel Gorman <mgorman@suse.de>
To: Stable <stable@vger.kernel.org>
Cc: Linux-MM <linux-mm@kvack.org>, LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 20/34] kswapd: avoid unnecessary rebalance after an unsuccessful balancing
Date: Mon, 23 Jul 2012 14:38:33 +0100
Message-ID: <1343050727-3045-21-git-send-email-mgorman@suse.de>
In-Reply-To: <1343050727-3045-1-git-send-email-mgorman@suse.de>

From: Alex Shi <alex.shi@intel.com>

commit d2ebd0f6b89567eb93ead4e2ca0cbe03021f344b upstream.

Stable note: Fixes https://bugzilla.redhat.com/show_bug.cgi?id=712019.
This patch reduces kswapd CPU usage.

In commit 215ddd66 ("mm: vmscan: only read new_classzone_idx from pgdat
when reclaiming successfully"), Mel Gorman noted that kswapd should go
to sleep after an unsuccessful balancing if a tighter reclaim request
was placed while the balancing was in progress. In the following
scenario, however, kswapd does not behave as expected; this patch fixes
that.

1. A pgdat request A (classzone_idx, order = 3) is read.
2. balance_pgdat() runs for request A.
3. While it runs, a new pgdat request B (classzone_idx, order = 5) is
   placed.
4. balance_pgdat() returns, but the balancing was unsuccessful, so the
   returned order is 0.
5. Request A is handed back to balance_pgdat() and balancing runs
   again, whereas the expected behaviour is for kswapd to try to sleep.
Signed-off-by: Alex Shi <alex.shi@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Tested-by: Pádraig Brady <P@draigBrady.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/vmscan.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index aa75861..bf85e4d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2841,7 +2841,9 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int order, int classzone_idx)
 static int kswapd(void *p)
 {
 	unsigned long order, new_order;
+	unsigned balanced_order;
 	int classzone_idx, new_classzone_idx;
+	int balanced_classzone_idx;
 	pg_data_t *pgdat = (pg_data_t*)p;
 	struct task_struct *tsk = current;
@@ -2872,7 +2874,9 @@ static int kswapd(void *p)
 	set_freezable();

 	order = new_order = 0;
+	balanced_order = 0;
 	classzone_idx = new_classzone_idx = pgdat->nr_zones - 1;
+	balanced_classzone_idx = classzone_idx;
 	for ( ; ; ) {
 		int ret;
@@ -2881,7 +2885,8 @@ static int kswapd(void *p)
 		 * new request of a similar or harder type will succeed soon
 		 * so consider going to sleep on the basis we reclaimed at
 		 */
-		if (classzone_idx >= new_classzone_idx && order == new_order) {
+		if (balanced_classzone_idx >= new_classzone_idx &&
+					balanced_order == new_order) {
 			new_order = pgdat->kswapd_max_order;
 			new_classzone_idx = pgdat->classzone_idx;
 			pgdat->kswapd_max_order = 0;
@@ -2896,7 +2901,8 @@ static int kswapd(void *p)
 			order = new_order;
 			classzone_idx = new_classzone_idx;
 		} else {
-			kswapd_try_to_sleep(pgdat, order, classzone_idx);
+			kswapd_try_to_sleep(pgdat, balanced_order,
+						balanced_classzone_idx);
 			order = pgdat->kswapd_max_order;
 			classzone_idx = pgdat->classzone_idx;
 			pgdat->kswapd_max_order = 0;
@@ -2913,7 +2919,9 @@ static int kswapd(void *p)
 		 */
 		if (!ret) {
 			trace_mm_vmscan_kswapd_wake(pgdat->node_id, order);
-			order = balance_pgdat(pgdat, order, &classzone_idx);
+			balanced_classzone_idx = classzone_idx;
+			balanced_order = balance_pgdat(pgdat, order,
+						&balanced_classzone_idx);
 		}
 	}
 	return 0;
--
1.7.9.2