From: Wei Li
To: ,
Subject: [PATCH] arm64: mm: free unused memmap for sparse memory model that defines VMEMMAP
Date: Tue, 21 Jul 2020 15:32:03 +0800
Message-ID: <20200721073203.107862-1-liwei213@huawei.com>
X-Mailer: git-send-email 2.15.0
Cc: song.bao.hua@hisilicon.com, sujunfei2@hisilicon.com, saberlily.xia@hisilicon.com, linux-arm-kernel@lists.infradead.org, steve.capper@arm.com, puck.chen@hisilicon.com, liwei213@huawei.com, linux-kernel@vger.kernel.org, rppt@linux.ibm.com, fengbaopeng2@hisilicon.com, nsaenzjulienne@suse.de, butao@hisilicon.com
For memory holes, the sparse memory model with SPARSEMEM_VMEMMAP enabled does not
free the memory reserved for the unused part of the page map; this patch frees it.

Signed-off-by: Wei Li
Signed-off-by: Chen Feng
Signed-off-by: Xia Qing
---
 arch/arm64/mm/init.c | 81 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 71 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc7c47a..d1b56b47d5ba 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -441,7 +441,48 @@ void __init bootmem_init(void)
 	memblock_dump_all();
 }
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+#define VMEMMAP_PAGE_INUSE 0xFD
+static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
+{
+	unsigned long addr, end;
+	unsigned long next;
+	pmd_t *pmd;
+	void *page_addr;
+	phys_addr_t phys_addr;
+
+	addr = (unsigned long)pfn_to_page(start_pfn);
+	end = (unsigned long)pfn_to_page(end_pfn);
+
+	pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
+	for (; addr < end; addr = next, pmd++) {
+		next = pmd_addr_end(addr, end);
+
+		if (!pmd_present(*pmd))
+			continue;
+
+		if (IS_ALIGNED(addr, PMD_SIZE) &&
+		    IS_ALIGNED(next, PMD_SIZE)) {
+			phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
+			free_bootmem(phys_addr, PMD_SIZE);
+			pmd_clear(pmd);
+		} else {
+			/* If here, we are freeing vmemmap pages. */
+			memset((void *)addr, VMEMMAP_PAGE_INUSE, next - addr);
+			page_addr = page_address(pmd_page(*pmd));
+
+			if (!memchr_inv(page_addr, VMEMMAP_PAGE_INUSE,
+					PMD_SIZE)) {
+				phys_addr = __pfn_to_phys(pmd_pfn(*pmd));
+				free_bootmem(phys_addr, PMD_SIZE);
+				pmd_clear(pmd);
+			}
+		}
+	}
+
+	flush_tlb_all();
+}
+#else
 static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 {
 	struct page *start_pg, *end_pg;
@@ -468,31 +509,53 @@ static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 		memblock_free(pg, pgend - pg);
 }
 
+#endif
+
 /*
  * The mem_map array can get very big. Free the unused area of the memory map.
  */
 static void __init free_unused_memmap(void)
 {
-	unsigned long start, prev_end = 0;
+	unsigned long start, cur_start, prev_end = 0;
 	struct memblock_region *reg;
 
 	for_each_memblock(memory, reg) {
-		start = __phys_to_pfn(reg->base);
+		cur_start = __phys_to_pfn(reg->base);
 
 #ifdef CONFIG_SPARSEMEM
 		/*
 		 * Take care not to free memmap entries that don't exist due
 		 * to SPARSEMEM sections which aren't present.
 		 */
-		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
-#endif
+		start = min(cur_start, ALIGN(prev_end, PAGES_PER_SECTION));
+
 		/*
-		 * If we had a previous bank, and there is a space between the
-		 * current bank and the previous, free it.
+		 * Free memory in the case of:
+		 * 1. if cur_start - prev_end <= PAGES_PER_SECTION,
+		 * free prev_end ~ cur_start.
+		 * 2. if cur_start - prev_end > PAGES_PER_SECTION,
+		 * free prev_end ~ ALIGN(prev_end, PAGES_PER_SECTION).
 		 */
 		if (prev_end && prev_end < start)
 			free_memmap(prev_end, start);
 
+		/*
+		 * Free memory in the case of:
+		 * if cur_start - prev_end > PAGES_PER_SECTION,
+		 * free ALIGN_DOWN(cur_start, PAGES_PER_SECTION) ~ cur_start.
+		 */
+		if (cur_start > start &&
+		    !IS_ALIGNED(cur_start, PAGES_PER_SECTION))
+			free_memmap(ALIGN_DOWN(cur_start, PAGES_PER_SECTION),
+				    cur_start);
+#else
+		/*
+		 * If we had a previous bank, and there is a space between the
+		 * current bank and the previous, free it.
+		 */
+		if (prev_end && prev_end < cur_start)
+			free_memmap(prev_end, cur_start);
+#endif
 		/*
 		 * Align up here since the VM subsystem insists that the
 		 * memmap entries are valid from the bank end aligned to
@@ -507,7 +570,6 @@ static void __init free_unused_memmap(void)
 		free_memmap(prev_end, ALIGN(prev_end, PAGES_PER_SECTION));
 #endif
 }
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
 
 /*
  * mem_init() marks the free areas in the mem_map and tells us how much memory
@@ -524,9 +586,8 @@ void __init mem_init(void)
 
 	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
 
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
 	free_unused_memmap();
-#endif
+
 	/* this will put all unused low memory onto the freelists */
 	memblock_free_all();
-- 
2.15.0