From mboxrd@z Thu Jan 1 00:00:00 1970
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S935240AbeAOOUk (ORCPT + 1 other);
	Mon, 15 Jan 2018 09:20:40 -0500
Received: from mail.linuxfoundation.org ([140.211.169.12]:46040 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S965464AbeAOMjJ (ORCPT);
	Mon, 15 Jan 2018 07:39:09 -0500
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Joonsoo Kim,
	David Rientjes, Vlastimil Babka, Aaron Lu, Mel Gorman,
	Rik van Riel, Andrew Morton, Linus Torvalds
Subject: [PATCH 4.4 25/87] mm/compaction: fix invalid free_pfn and compact_cached_free_pfn
Date: Mon, 15 Jan 2018 13:34:24 +0100
Message-Id: <20180115123351.847478312@linuxfoundation.org>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180115123349.252309699@linuxfoundation.org>
References: <20180115123349.252309699@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Return-Path:

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Joonsoo Kim

commit 623446e4dc45b37740268165107cc63abb3022f0 upstream.

free_pfn and compact_cached_free_pfn are the pointers that remember the
restart position of the freepage scanner.  When they are reset or
invalid, we set them to zone_end_pfn because the freepage scanner works
in the reverse direction.  But, because the zone range is defined as
[zone_start_pfn, zone_end_pfn), zone_end_pfn is invalid to access.
Therefore, we should not store it in free_pfn and
compact_cached_free_pfn; instead, we need to store zone_end_pfn - 1 in
them.

There is one more thing to consider.  The freepage scanner scans in
reverse, one pageblock at a time.  If free_pfn and
compact_cached_free_pfn are set to the middle of a pageblock, the
scanner treats that situation as if it had already scanned the front
part of the pageblock, so we lose the opportunity to scan there.  To
fix this up, this patch uses round_down() to guarantee that the reset
position is pageblock aligned.

Note that, thanks to the current pageblock_pfn_to_page()
implementation, an actual access to zone_end_pfn has not happened until
now.  But a following patch will change pageblock_pfn_to_page(), so
this patch is needed from now on.
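For the record, here is a minimal userspace sketch of the arithmetic
(illustration only, not part of the patch; the zone bounds and the
pageblock_nr_pages value of 512 below are made-up example figures):

#include <stdio.h>

/* simplified stand-ins for the kernel definitions */
#define pageblock_nr_pages	512UL			/* example value only */
#define round_down(x, y)	((x) & ~((y) - 1))	/* y: power of two */

int main(void)
{
	unsigned long zone_start_pfn = 0x1000;
	unsigned long zone_end_pfn   = 0x10000;	/* exclusive bound */

	/* old reset: when zone_end_pfn is pageblock aligned, this is
	 * zone_end_pfn itself, one past the last valid pfn */
	unsigned long old_reset = zone_end_pfn & ~(pageblock_nr_pages - 1);

	/* new reset: last valid pfn, rounded down to a pageblock start */
	unsigned long new_reset = round_down(zone_end_pfn - 1,
					     pageblock_nr_pages);

	printf("old reset: %#lx (outside [%#lx, %#lx))\n",
	       old_reset, zone_start_pfn, zone_end_pfn);
	printf("new reset: %#lx (valid, pageblock aligned)\n", new_reset);
	return 0;
}

With these numbers the old expression yields 0x10000, i.e. zone_end_pfn
itself, while the new one yields 0xfe00, a pageblock-aligned pfn inside
the zone.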
Signed-off-by: Joonsoo Kim
Acked-by: David Rientjes
Acked-by: Vlastimil Babka
Cc: Aaron Lu
Cc: Mel Gorman
Cc: Rik van Riel
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Mel Gorman
Signed-off-by: Greg Kroah-Hartman

---
 mm/compaction.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -200,7 +200,8 @@ static void reset_cached_positions(struc
 {
 	zone->compact_cached_migrate_pfn[0] = zone->zone_start_pfn;
 	zone->compact_cached_migrate_pfn[1] = zone->zone_start_pfn;
-	zone->compact_cached_free_pfn = zone_end_pfn(zone);
+	zone->compact_cached_free_pfn =
+			round_down(zone_end_pfn(zone) - 1, pageblock_nr_pages);
 }
 
 /*
@@ -1358,11 +1359,11 @@ static int compact_zone(struct zone *zon
 	 */
 	cc->migrate_pfn = zone->compact_cached_migrate_pfn[sync];
 	cc->free_pfn = zone->compact_cached_free_pfn;
-	if (cc->free_pfn < start_pfn || cc->free_pfn > end_pfn) {
-		cc->free_pfn = end_pfn & ~(pageblock_nr_pages-1);
+	if (cc->free_pfn < start_pfn || cc->free_pfn >= end_pfn) {
+		cc->free_pfn = round_down(end_pfn - 1, pageblock_nr_pages);
 		zone->compact_cached_free_pfn = cc->free_pfn;
 	}
-	if (cc->migrate_pfn < start_pfn || cc->migrate_pfn > end_pfn) {
+	if (cc->migrate_pfn < start_pfn || cc->migrate_pfn >= end_pfn) {
 		cc->migrate_pfn = start_pfn;
 		zone->compact_cached_migrate_pfn[0] = cc->migrate_pfn;
 		zone->compact_cached_migrate_pfn[1] = cc->migrate_pfn;