From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1031077AbbD1Ohe (ORCPT ); Tue, 28 Apr 2015 10:37:34 -0400
Received: from cantor2.suse.de ([195.135.220.15]:50991 "EHLO mx2.suse.de"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S1030959AbbD1OhY (ORCPT ); Tue, 28 Apr 2015 10:37:24 -0400
From: Mel Gorman 
To: Andrew Morton 
Cc: Nathan Zimmer , Dave Hansen , Waiman Long , Scott Norton ,
        Daniel J Blueman , Linux-MM , LKML , Mel Gorman 
Subject: [PATCH 12/13] mm: meminit: Reduce number of times pageblocks are set during struct page init
Date: Tue, 28 Apr 2015 15:37:09 +0100
Message-Id: <1430231830-7702-13-git-send-email-mgorman@suse.de>
X-Mailer: git-send-email 2.3.5
In-Reply-To: <1430231830-7702-1-git-send-email-mgorman@suse.de>
References: <1430231830-7702-1-git-send-email-mgorman@suse.de>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

During parallel struct page initialisation, ranges are checked for every
PFN unnecessarily, which increases boot times. This patch alters when the
ranges are checked: the pageblock migratetype is now set once per pageblock
from memmap_init_zone() instead of re-checking the zone range for every PFN
in __init_single_page().

Signed-off-by: Mel Gorman 
---
 mm/page_alloc.c | 46 ++++++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 22 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2200b7473b5a..313f4a5a3907 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -852,33 +852,12 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
                                 unsigned long zone, int nid)
 {
-        struct zone *z = &NODE_DATA(nid)->node_zones[zone];
-
         set_page_links(page, zone, nid, pfn);
         mminit_verify_page_links(page, zone, nid, pfn);
         init_page_count(page);
         page_mapcount_reset(page);
         page_cpupid_reset_last(page);
 
-        /*
-         * Mark the block movable so that blocks are reserved for
-         * movable at startup. This will force kernel allocations
-         * to reserve their blocks rather than leaking throughout
-         * the address space during boot when many long-lived
-         * kernel allocations are made. Later some blocks near
-         * the start are marked MIGRATE_RESERVE by
-         * setup_zone_migrate_reserve()
-         *
-         * bitmap is created for zone's valid pfn range. but memmap
-         * can be created for invalid pages (for alignment)
-         * check here not to call set_pageblock_migratetype() against
-         * pfn out of zone.
-         */
-        if ((z->zone_start_pfn <= pfn)
-            && (pfn < zone_end_pfn(z))
-            && !(pfn & (pageblock_nr_pages - 1)))
-                set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-
         INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL
         /* The shift won't overflow because ZONE_NORMAL is below 4G. */
@@ -1074,6 +1053,7 @@ static void __defermem_init deferred_free_range(struct page *page,
         /* Free a large naturally-aligned chunk if possible */
         if (nr_pages == MAX_ORDER_NR_PAGES &&
             (pfn & (MAX_ORDER_NR_PAGES-1)) == 0) {
+                set_pageblock_migratetype(page, MIGRATE_MOVABLE);
                 __free_pages_boot_core(page, pfn, MAX_ORDER-1);
                 return;
         }
@@ -4492,7 +4472,29 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
                                         &nr_initialised))
                                 break;
                 }
-                __init_single_pfn(pfn, zone, nid);
+
+                /*
+                 * Mark the block movable so that blocks are reserved for
+                 * movable at startup. This will force kernel allocations
+                 * to reserve their blocks rather than leaking throughout
+                 * the address space during boot when many long-lived
+                 * kernel allocations are made. Later some blocks near
+                 * the start are marked MIGRATE_RESERVE by
+                 * setup_zone_migrate_reserve()
+                 *
+                 * bitmap is created for zone's valid pfn range. but memmap
+                 * can be created for invalid pages (for alignment)
+                 * check here not to call set_pageblock_migratetype() against
+                 * pfn out of zone.
+                 */
+                if (!(pfn & (pageblock_nr_pages - 1))) {
+                        struct page *page = pfn_to_page(pfn);
+
+                        set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+                        __init_single_page(page, pfn, zone, nid);
+                } else {
+                        __init_single_pfn(pfn, zone, nid);
+                }
         }
 }
-- 
2.3.5
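
As a rough illustration of the effect, the following standalone C sketch
(not kernel code; PAGEBLOCK_NR_PAGES and the helper names are made-up
stand-ins for pageblock_nr_pages, __init_single_page() and the new
memmap_init_zone() loop body) contrasts the old scheme, where every PFN
paid for a zone-range comparison inside the per-page init helper, with the
new one, where only pageblock-aligned PFNs do any extra work because the
caller already walks a valid PFN range:

#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL        /* stand-in for pageblock_nr_pages */

static unsigned long old_range_checks;  /* per-PFN work before the patch */
static unsigned long old_block_updates; /* migratetype updates before     */
static unsigned long new_block_updates; /* migratetype updates after      */

/* Before: mimics the check removed from __init_single_page() */
static void init_page_old(unsigned long pfn,
                          unsigned long zone_start, unsigned long zone_end)
{
        old_range_checks++;     /* zone-range comparison paid by every PFN */
        if (zone_start <= pfn && pfn < zone_end &&
            !(pfn & (PAGEBLOCK_NR_PAGES - 1)))
                old_block_updates++;    /* set_pageblock_migratetype() went here */
}

/* After: mimics memmap_init_zone() marking the block once per pageblock */
static void init_page_new(unsigned long pfn)
{
        if (!(pfn & (PAGEBLOCK_NR_PAGES - 1)))
                new_block_updates++;    /* set_pageblock_migratetype() here */
}

int main(void)
{
        unsigned long start = 0, end = 1UL << 20;       /* ~4GB of 4K pages */
        unsigned long pfn;

        for (pfn = start; pfn < end; pfn++)
                init_page_old(pfn, start, end);
        for (pfn = start; pfn < end; pfn++)
                init_page_new(pfn);

        printf("before: %lu range checks, %lu pageblock updates\n",
               old_range_checks, old_block_updates);
        printf("after:  %lu range checks, %lu pageblock updates\n",
               0UL, new_block_updates);
        return 0;
}

Compiled with a plain cc, the sketch reports roughly a million range checks
avoided for the ~4GB span while the number of pageblock updates stays the
same. The added set_pageblock_migratetype() call in deferred_free_range()
appears to cover the deferred-init path, which no longer gets the
migratetype set from __init_single_page().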