From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S935010AbaFTPx5 (ORCPT );
	Fri, 20 Jun 2014 11:53:57 -0400
Received: from cantor2.suse.de ([195.135.220.15]:58862 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S934517AbaFTPuM (ORCPT );
	Fri, 20 Jun 2014 11:50:12 -0400
From: Vlastimil Babka
To: linux-mm@kvack.org, Andrew Morton, David Rientjes
Cc: Minchan Kim, Mel Gorman, Joonsoo Kim, Michal Nazarewicz,
	Naoya Horiguchi, Christoph Lameter, Rik van Riel, Zhang Yanfei,
	linux-kernel@vger.kernel.org, Vlastimil Babka
Subject: [PATCH v3 07/13] mm, compaction: skip rechecks when lock was already held
Date: Fri, 20 Jun 2014 17:49:37 +0200
Message-Id: <1403279383-5862-8-git-send-email-vbabka@suse.cz>
X-Mailer: git-send-email 1.8.4.5
In-Reply-To: <1403279383-5862-1-git-send-email-vbabka@suse.cz>
References: <1403279383-5862-1-git-send-email-vbabka@suse.cz>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Compaction scanners try to lock zone locks as late as possible by checking
many page or pageblock properties opportunistically without lock and
skipping them if not suitable. For pages that pass the initial checks, some
properties have to be checked again safely under lock. However, if the lock
was already held from a previous iteration in the initial checks, the
rechecks are unnecessary.

This patch therefore skips the rechecks when the lock was already held. This
is now possible, since we no longer (potentially) drop and reacquire the
lock between the initial checks and the safe rechecks.

Signed-off-by: Vlastimil Babka
Acked-by: Minchan Kim
Cc: Mel Gorman
Cc: Michal Nazarewicz
Cc: Naoya Horiguchi
Cc: Christoph Lameter
Cc: Rik van Riel
Acked-by: David Rientjes
---
 mm/compaction.c | 53 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 31 insertions(+), 22 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 40da812..9f6e857 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -324,22 +324,30 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 			goto isolate_fail;
 
 		/*
-		 * The zone lock must be held to isolate freepages.
-		 * Unfortunately this is a very coarse lock and can be
-		 * heavily contended if there are parallel allocations
-		 * or parallel compactions. For async compaction do not
-		 * spin on the lock and we acquire the lock as late as
-		 * possible.
+		 * If we already hold the lock, we can skip some rechecking.
+		 * Note that if we hold the lock now, checked_pageblock was
+		 * already set in some previous iteration (or strict is true),
+		 * so it is correct to skip the suitable migration target
+		 * recheck as well.
 		 */
-		if (!locked)
+		if (!locked) {
+			/*
+			 * The zone lock must be held to isolate freepages.
+			 * Unfortunately this is a very coarse lock and can be
+			 * heavily contended if there are parallel allocations
+			 * or parallel compactions. For async compaction do not
+			 * spin on the lock and we acquire the lock as late as
+			 * possible.
+			 */
 			locked = compact_trylock_irqsave(&cc->zone->lock,
								&flags, cc);
-		if (!locked)
-			break;
+			if (!locked)
+				break;
 
-		/* Recheck this is a buddy page under lock */
-		if (!PageBuddy(page))
-			goto isolate_fail;
+			/* Recheck this is a buddy page under lock */
+			if (!PageBuddy(page))
+				goto isolate_fail;
+		}
 
 		/* Found a free page, break it into order-0 pages */
 		isolated = split_free_page(page);
@@ -623,19 +631,20 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 		    page_count(page) > page_mapcount(page))
 			continue;
 
-		/* If the lock is not held, try to take it */
-		if (!locked)
+		/* If we already hold the lock, we can skip some rechecking */
+		if (!locked) {
 			locked = compact_trylock_irqsave(&zone->lru_lock,
								&flags, cc);
-		if (!locked)
-			break;
+			if (!locked)
+				break;
 
-		/* Recheck PageLRU and PageTransHuge under lock */
-		if (!PageLRU(page))
-			continue;
-		if (PageTransHuge(page)) {
-			low_pfn += (1 << compound_order(page)) - 1;
-			continue;
+			/* Recheck PageLRU and PageTransHuge under lock */
+			if (!PageLRU(page))
+				continue;
+			if (PageTransHuge(page)) {
+				low_pfn += (1 << compound_order(page)) - 1;
+				continue;
+			}
 		}
 
 		lruvec = mem_cgroup_page_lruvec(page, zone);
-- 
1.8.4.5
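
The locking pattern the patch relies on can be sketched outside the kernel.
Below is a minimal, self-contained userspace illustration in C that uses a
pthread mutex in place of the kernel's zone/LRU locks; all names in it
(struct item, scan_lock, scan_items) are hypothetical and only mirror the
control flow of the patched scanners: check each item opportunistically
without the lock, take the lock as late as possible with a trylock, and
recheck under the lock only when the lock was newly acquired in the current
iteration.

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct item {
	bool usable;	/* may change under us until we hold the lock */
};

static pthread_mutex_t scan_lock = PTHREAD_MUTEX_INITIALIZER;

/* Scan nr items, returning how many were isolated. */
static size_t scan_items(struct item *items, size_t nr)
{
	bool locked = false;
	size_t isolated = 0;

	for (size_t i = 0; i < nr; i++) {
		/* Opportunistic check without the lock; may be stale. */
		if (!items[i].usable)
			continue;

		if (!locked) {
			/* Lock as late as possible; give up on contention. */
			if (pthread_mutex_trylock(&scan_lock) != 0)
				break;
			locked = true;

			/*
			 * Recheck under the lock: the unlocked check above may
			 * have raced. If the lock was already held from a
			 * previous iteration, the check above was itself made
			 * under the lock, so no recheck is needed.
			 */
			if (!items[i].usable)
				continue;
		}

		isolated++;	/* safe to claim the item while locked */
	}

	if (locked)
		pthread_mutex_unlock(&scan_lock);

	return isolated;
}

int main(void)
{
	struct item items[] = { { true }, { false }, { true } };

	return scan_items(items, 3) == 2 ? 0 : 1;
}

Bailing out on trylock failure instead of spinning mirrors what the moved
comment describes for compact_trylock_irqsave() under async compaction:
spinning on a coarse, heavily contended lock would defeat the point of
deferring its acquisition in the first place.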