From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S966802AbbDVRIf (ORCPT );
	Wed, 22 Apr 2015 13:08:35 -0400
Received: from cantor2.suse.de ([195.135.220.15]:56522 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1030204AbbDVRIK (ORCPT );
	Wed, 22 Apr 2015 13:08:10 -0400
From: Mel Gorman 
To: Linux-MM 
Cc: Nathan Zimmer, Dave Hansen, Waiman Long, Scott Norton,
	Daniel J Blueman, LKML, Mel Gorman
Subject: [PATCH 12/13] mm: meminit: Reduce number of times pageblocks are set during struct page init
Date: Wed, 22 Apr 2015 18:07:52 +0100
Message-Id: <1429722473-28118-13-git-send-email-mgorman@suse.de>
X-Mailer: git-send-email 2.1.2
In-Reply-To: <1429722473-28118-1-git-send-email-mgorman@suse.de>
References: <1429722473-28118-1-git-send-email-mgorman@suse.de>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

During parallel struct page initialisation, ranges are checked for
every PFN unnecessarily, which increases boot times. This patch alters
when the ranges are checked: set_pageblock_migratetype() is now called
once per pageblock from memmap_init_zone() and deferred_free_range()
instead of the zone range being tested for every PFN in
__init_single_page().

Signed-off-by: Mel Gorman 
---
 mm/page_alloc.c | 45 +++++++++++++++++++++++----------------------
 1 file changed, 23 insertions(+), 22 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8d3fd13a09c9..945d56667b61 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -876,33 +876,12 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 				unsigned long zone, int nid)
 {
-	struct zone *z = &NODE_DATA(nid)->node_zones[zone];
-
 	set_page_links(page, zone, nid, pfn);
 	mminit_verify_page_links(page, zone, nid, pfn);
 	init_page_count(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 
-	/*
-	 * Mark the block movable so that blocks are reserved for
-	 * movable at startup. This will force kernel allocations
-	 * to reserve their blocks rather than leaking throughout
-	 * the address space during boot when many long-lived
-	 * kernel allocations are made. Later some blocks near
-	 * the start are marked MIGRATE_RESERVE by
-	 * setup_zone_migrate_reserve()
-	 *
-	 * bitmap is created for zone's valid pfn range. but memmap
-	 * can be created for invalid pages (for alignment)
-	 * check here not to call set_pageblock_migratetype() against
-	 * pfn out of zone.
-	 */
-	if ((z->zone_start_pfn <= pfn)
-	    && (pfn < zone_end_pfn(z))
-	    && !(pfn & (pageblock_nr_pages - 1)))
-		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-
 	INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL
 	/* The shift won't overflow because ZONE_NORMAL is below 4G. */
@@ -1091,6 +1070,7 @@ void __defermem_init deferred_free_range(struct page *page, unsigned long pfn,
 	int i;
 
 	if (nr_pages == MAX_ORDER_NR_PAGES && (pfn & (MAX_ORDER_NR_PAGES-1)) == 0) {
+		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		__free_pages_boot_core(page, pfn, MAX_ORDER-1);
 		return;
 	}
@@ -4500,7 +4480,28 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 					&nr_initialised))
 				break;
 		}
-		__init_single_pfn(pfn, zone, nid);
+
+		/*
+		 * Mark the block movable so that blocks are reserved for
+		 * movable at startup. This will force kernel allocations
+		 * to reserve their blocks rather than leaking throughout
+		 * the address space during boot when many long-lived
+		 * kernel allocations are made. Later some blocks near
+		 * the start are marked MIGRATE_RESERVE by
+		 * setup_zone_migrate_reserve()
+		 *
+		 * bitmap is created for zone's valid pfn range. but memmap
+		 * can be created for invalid pages (for alignment)
+		 * check here not to call set_pageblock_migratetype() against
+		 * pfn out of zone.
+		 */
+		if (!(pfn & (pageblock_nr_pages - 1))) {
+			struct page *page = pfn_to_page(pfn);
+			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+			__init_single_page(page, pfn, zone, nid);
+		} else {
+			__init_single_pfn(pfn, zone, nid);
+		}
 	}
 }
 
-- 
2.1.2