From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756705Ab2JRUvY (ORCPT); Thu, 18 Oct 2012 16:51:24 -0400
Received: from rcsinet15.oracle.com ([148.87.113.117]:42628 "EHLO
	rcsinet15.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1756445Ab2JRUvN (ORCPT); Thu, 18 Oct 2012 16:51:13 -0400
From: Yinghai Lu <yinghai@kernel.org>
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Jacob Shin, Tejun Heo
Cc: Stefano Stabellini, linux-kernel@vger.kernel.org,
	Yinghai Lu <yinghai@kernel.org>
Subject: [PATCH 02/19] x86, mm: Use big page size for small memory range
Date: Thu, 18 Oct 2012 13:50:12 -0700
Message-Id: <1350593430-24470-5-git-send-email-yinghai@kernel.org>
X-Mailer: git-send-email 1.7.7
In-Reply-To: <1350593430-24470-1-git-send-email-yinghai@kernel.org>
References: <1350593430-24470-1-git-send-email-yinghai@kernel.org>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

We could map a small range in the middle of a big range first, so we
should use the big page size for it up front, instead of mapping it with
small pages and having to break the page table down later.

Only set the big page bit when the range has RAM around it.

-v2: fix 32-bit boundary checking. We can not count RAM above max_low_pfn
     on 32 bit.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/mm/init.c |   37 +++++++++++++++++++++++++++++++++++++
 1 files changed, 37 insertions(+), 0 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index c12dfd5..09ce38f 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -88,6 +88,40 @@ static int __meminit save_mr(struct map_range *mr, int nr_range,
 	return nr_range;
 }
 
+/*
+ * adjust the page_size_mask for small range to go with
+ *	big page size instead small one if nearby are ram too.
+ */
+static void __init_refok adjust_range_page_size_mask(struct map_range *mr,
+							 int nr_range)
+{
+	int i;
+
+	for (i = 0; i < nr_range; i++) {
+		if ((page_size_mask & (1<<PG_LEVEL_2M)) &&
+		    !(mr[i].page_size_mask & (1<<PG_LEVEL_2M))) {
+			unsigned long start = round_down(mr[i].start, PMD_SIZE);
+			unsigned long end = round_up(mr[i].end, PMD_SIZE);
+
+#ifdef CONFIG_X86_32
+			if ((end >> PAGE_SHIFT) > max_low_pfn)
+				continue;
+#endif
+
+			if (memblock_is_region_memory(start, end - start))
+				mr[i].page_size_mask |= 1<<PG_LEVEL_2M;
+		}
+		if ((page_size_mask & (1<<PG_LEVEL_1G)) &&
+		    !(mr[i].page_size_mask & (1<<PG_LEVEL_1G))) {
+			unsigned long start = round_down(mr[i].start, PUD_SIZE);
+			unsigned long end = round_up(mr[i].end, PUD_SIZE);
+
+			if (memblock_is_region_memory(start, end - start))
+				mr[i].page_size_mask |= 1<<PG_LEVEL_1G;
+		}
+	}
+}
+
 static int __meminit split_mem_range(struct map_range *mr, int nr_range,
 				     unsigned long start,
 				     unsigned long end)
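
As a rough illustration of what the new helper does per range, here is a
minimal userspace sketch of the same rounding-and-check logic. It makes
simplified assumptions: PG_LEVEL_2M and PMD_SIZE are hard-coded, and
is_ram_region() is a hypothetical stand-in for memblock_is_region_memory().
It is only a sketch of the idea, not kernel code.

#include <stdbool.h>
#include <stdio.h>

#define PMD_SIZE	(2UL << 20)	/* 2 MiB, as on x86 */
#define PG_LEVEL_2M	1		/* simplified; an enum value in the kernel */

#define round_down(x, y)	((x) & ~((y) - 1))
#define round_up(x, y)		(((x) + (y) - 1) & ~((y) - 1))

struct map_range {
	unsigned long start;
	unsigned long end;
	unsigned long page_size_mask;
};

/*
 * Hypothetical stand-in for memblock_is_region_memory(): pretend that
 * everything below 1 GiB is RAM.
 */
static bool is_ram_region(unsigned long start, unsigned long size)
{
	return start + size <= (1UL << 30);
}

/*
 * Round the range out to 2M boundaries; only allow the big page size
 * when the whole rounded-out region is RAM.
 */
static void adjust_one_range(struct map_range *mr)
{
	unsigned long start = round_down(mr->start, PMD_SIZE);
	unsigned long end = round_up(mr->end, PMD_SIZE);

	if (is_ram_region(start, end - start))
		mr->page_size_mask |= 1 << PG_LEVEL_2M;
}

int main(void)
{
	/* A small (256 KiB) range in the middle of a bigger RAM area. */
	struct map_range mr = { .start = 0x1010000UL, .end = 0x1050000UL };

	adjust_one_range(&mr);
	printf("page_size_mask = %#lx\n", mr.page_size_mask);
	return 0;
}

With this example range, the surrounding 2 MiB region is entirely RAM, so
the 2M bit is set and the range can be covered by a big page instead of
forcing the page table to be broken down into 4K entries.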