From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933168Ab2LGK0E (ORCPT );
	Fri, 7 Dec 2012 05:26:04 -0500
Received: from cantor2.suse.de ([195.135.220.15]:42471 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S933120Ab2LGKZR (ORCPT );
	Fri, 7 Dec 2012 05:25:17 -0500
From: Mel Gorman
To: Peter Zijlstra, Andrea Arcangeli, Ingo Molnar
Cc: Rik van Riel, Johannes Weiner, Hugh Dickins, Thomas Gleixner,
	Paul Turner, Hillf Danton, David Rientjes, Lee Schermerhorn,
	Alex Shi, Srikar Dronamraju, Aneesh Kumar, Linus Torvalds,
	Andrew Morton, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 47/49] mm: migrate: Account a transhuge page properly when rate limiting
Date: Fri, 7 Dec 2012 10:23:50 +0000
Message-Id: <1354875832-9700-48-git-send-email-mgorman@suse.de>
X-Mailer: git-send-email 1.7.9.2
In-Reply-To: <1354875832-9700-1-git-send-email-mgorman@suse.de>
References: <1354875832-9700-1-git-send-email-mgorman@suse.de>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

If there is excessive migration due to NUMA balancing, it gets rate
limited. Rate limiting works by counting the number of pages migrated
recently, but a transhuge page is currently counted as a single page.
Account for it properly.

Signed-off-by: Mel Gorman
---
 mm/migrate.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index eb155c9..6b6567f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1492,7 +1492,7 @@ bool migrate_ratelimited(int node)
 }
 
 /* Returns true if the node is migrate rate-limited after the update */
-bool numamigrate_update_ratelimit(pg_data_t *pgdat)
+bool numamigrate_update_ratelimit(pg_data_t *pgdat, unsigned long nr_pages)
 {
 	bool rate_limited = false;
 
@@ -1510,7 +1510,7 @@ bool numamigrate_update_ratelimit(pg_data_t *pgdat)
 	if (pgdat->balancenuma_migrate_nr_pages > ratelimit_pages)
 		rate_limited = true;
 	else
-		pgdat->balancenuma_migrate_nr_pages++;
+		pgdat->balancenuma_migrate_nr_pages += nr_pages;
 	spin_unlock(&pgdat->balancenuma_migrate_lock);
 
 	return rate_limited;
@@ -1579,7 +1579,7 @@ int migrate_misplaced_page(struct page *page, int node)
 	 * Optimal placement is no good if the memory bus is saturated and
 	 * all the time is being spent migrating!
 	 */
-	if (numamigrate_update_ratelimit(pgdat)) {
+	if (numamigrate_update_ratelimit(pgdat, 1)) {
 		put_page(page);
 		goto out;
 	}
@@ -1630,7 +1630,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 	 * Optimal placement is no good if the memory bus is saturated and
 	 * all the time is being spent migrating!
 	 */
-	if (numamigrate_update_ratelimit(pgdat))
+	if (numamigrate_update_ratelimit(pgdat, HPAGE_PMD_NR))
		goto out_dropref;
 
 	new_page = alloc_pages_node(node,
-- 
1.7.9.2
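
[Editorial note: the following is a minimal userspace sketch of the
accounting change above, kept separate from the patch itself. It models
the same window-based counter to show why a transhuge page must be
charged as HPAGE_PMD_NR base pages rather than 1. The MODEL_* names and
the limit value are illustrative assumptions, not the kernel's actual
identifiers or defaults.]

/*
 * Standalone model of the NUMA-balancing migration rate limiter.
 * Illustrative only; not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define MODEL_HPAGE_PMD_NR    512   /* 2MB THP / 4KB base pages (x86-64) */
#define MODEL_RATELIMIT_PAGES 1024  /* pages allowed per window; made-up value */

static unsigned long migrate_nr_pages;  /* pages charged in the current window */

/* Returns true if migration is rate-limited after charging nr_pages */
static bool model_update_ratelimit(unsigned long nr_pages)
{
	if (migrate_nr_pages > MODEL_RATELIMIT_PAGES)
		return true;

	migrate_nr_pages += nr_pages;
	return false;
}

int main(void)
{
	int migrated = 0;

	/*
	 * Charging each THP as 1 page would let ~1024 THPs (2GB) through
	 * per window; charging HPAGE_PMD_NR pages throttles after a few.
	 */
	for (int i = 0; i < 16; i++) {
		if (model_update_ratelimit(MODEL_HPAGE_PMD_NR))
			break;
		migrated++;
	}

	printf("THPs migrated before hitting the rate limit: %d\n", migrated);
	return 0;
}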