From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maciej Bielski <m.bielski@virtualopensystems.com>
Date: Thu, 23 Nov 2017 11:13:52 +0000
Subject: [PATCH v2 1/5] mm: memory_hotplug: Memory hotplug (add) support for arm64
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, ar@linux.vnet.ibm.com,
    arunks@qti.qualcomm.com, mark.rutland@arm.com, scott.branden@broadcom.com,
    will.deacon@arm.com, qiuxishi@huawei.com, catalin.marinas@arm.com,
    mhocko@suse.com, realean2@ie.ibm.com

Introduces memory hotplug functionality (hot-add) for arm64.

Changes v1->v2:
- swapper pgtable updated in place on hot add, avoiding an unnecessary
  copy: all changes are additive and non-destructive.
- stop_machine used to update swapper on hot add, avoiding races
- check whether pagealloc is under debug to stay coherent with mem_map

Signed-off-by: Maciej Bielski <m.bielski@virtualopensystems.com>
Signed-off-by: Andrea Reale <ar@linux.vnet.ibm.com>
---
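Notes: with CONFIG_ARCH_MEMORY_PROBE enabled by this patch, the new path
can be exercised from userspace by writing a physical address to
/sys/devices/system/memory/probe and then onlining the resulting memory
block. A minimal kernel-side sketch of the same path follows; the node id
and the 1 GiB range at 0x900000000 are illustrative values only, assumed
to be backed by real, not-yet-managed RAM:

    #include <linux/memory_hotplug.h>
    #include <linux/sizes.h>

    static int __init example_hot_add(void)
    {
            /* add_memory() calls down into arch_add_memory() below */
            return add_memory(0 /* nid */, 0x900000000ULL, SZ_1G);
    }
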
 arch/arm64/Kconfig           | 12 ++++++
 arch/arm64/configs/defconfig |  1 +
 arch/arm64/include/asm/mmu.h |  3 ++
 arch/arm64/mm/init.c         | 87 ++++++++++++++++++++++++++++++++++++++++++++
 arch/arm64/mm/mmu.c          | 39 ++++++++++++++++++++
 5 files changed, 142 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0df64a6..c736bba 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -641,6 +641,14 @@ config HOTPLUG_CPU
           Say Y here to experiment with turning CPUs off and on.  CPUs
           can be controlled through /sys/devices/system/cpu.
 
+config ARCH_HAS_ADD_PAGES
+        def_bool y
+        depends on ARCH_ENABLE_MEMORY_HOTPLUG
+
+config ARCH_ENABLE_MEMORY_HOTPLUG
+        def_bool y
+        depends on !NUMA
+
 # Common NUMA Features
 config NUMA
         bool "Numa Memory Allocation and Scheduler Support"
@@ -715,6 +723,10 @@ config ARCH_HAS_CACHE_LINE_SIZE
 
 source "mm/Kconfig"
 
+config ARCH_MEMORY_PROBE
+        def_bool y
+        depends on MEMORY_HOTPLUG
+
 config SECCOMP
         bool "Enable seccomp to safely compute untrusted bytecode"
         ---help---
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 34480e9..5fc5656 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -80,6 +80,7 @@ CONFIG_ARM64_VA_BITS_48=y
 CONFIG_SCHED_MC=y
 CONFIG_NUMA=y
 CONFIG_PREEMPT=y
+CONFIG_MEMORY_HOTPLUG=y
 CONFIG_KSM=y
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CMA=y
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 0d34bf0..2b3fa4d 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -40,5 +40,8 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
                                pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
 extern void mark_linear_text_alias_ro(void);
+#ifdef CONFIG_MEMORY_HOTPLUG
+extern void hotplug_paging(phys_addr_t start, phys_addr_t size);
+#endif
 
 #endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 5960bef..e96e7d3 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -722,3 +722,90 @@ static int __init register_mem_limit_dumper(void)
         return 0;
 }
 __initcall(register_mem_limit_dumper);
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+int add_pages(int nid, unsigned long start_pfn,
+              unsigned long nr_pages, bool want_memblock)
+{
+        int ret;
+        u64 start_addr = start_pfn << PAGE_SHIFT;
+        /*
+         * Mark the first page in the range as unusable. This is needed
+         * because __add_section (within __add_pages) wants pfn_valid
+         * of it to be false, and on arm64 pfn_valid is implemented by
+         * just checking the nomap flag of existing blocks.
+         *
+         * A small trick here is that __add_section() requires only
+         * phys_start_pfn (that is, the first pfn of a section) to be
+         * invalid. Regardless of whether the function's author assumed
+         * that all pfns within a section are either all valid or all
+         * invalid, this lets us avoid looping twice through all pfns
+         * of the section (once here, a second time when
+         * memblock_clear_nomap() is called) and lets us modify only
+         * one pfn. Further, thanks to that, in __add_zone() only this
+         * very first pfn is skipped and its page is not flagged
+         * reserved, so it is enough to correct this setup for that
+         * pfn alone.
+         *
+         * When arch_add_memory() returns, the walk_memory_range()
+         * function is called with the online_memory_block() callback,
+         * whose execution finally reaches the memory_block_action()
+         * function, where again only the first pfn of a memory block
+         * is checked to be reserved. Above it was the first pfn of a
+         * section; here it is a block, but
+         * (drivers/base/memory.c):
+         *      sections_per_block = block_sz / MIN_MEMORY_BLOCK_SIZE;
+         * (include/linux/memory.h):
+         *      #define MIN_MEMORY_BLOCK_SIZE (1UL << SECTION_SIZE_BITS)
+         * so we can treat block and section equivalently.
+         */
+        memblock_mark_nomap(start_addr, 1 << PAGE_SHIFT);
+
+        ret = __add_pages(nid, start_pfn, nr_pages, want_memblock);
+
+        /*
+         * Make the first pfn usable again and mark its page reserved,
+         * mirroring what __add_zone() has done for every other pfn of
+         * the hot-added section.
+         */
+        memblock_clear_nomap(start_addr, 1 << PAGE_SHIFT);
+        SetPageReserved(pfn_to_page(start_pfn));
+
+        return ret;
+}
+
+int arch_add_memory(int nid, u64 start, u64 size, bool want_memblock)
+{
+        int ret;
+        unsigned long start_pfn = start >> PAGE_SHIFT;
+        unsigned long nr_pages = size >> PAGE_SHIFT;
+        unsigned long end_pfn = start_pfn + nr_pages;
+        unsigned long max_sparsemem_pfn = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);
+
+        if (end_pfn > max_sparsemem_pfn) {
+                pr_err("end_pfn too big\n");
+                return -1;
+        }
+        hotplug_paging(start, size);
+
+        ret = add_pages(nid, start_pfn, nr_pages, want_memblock);
+
+        if (ret)
+                pr_warn("%s: Problem encountered in __add_pages() ret=%d\n",
+                        __func__, ret);
+
+        return ret;
+}
+
+#endif /* CONFIG_MEMORY_HOTPLUG */
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f1eb15e..d93043d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -28,6 +28,7 @@
 #include <linux/mman.h>
 #include <linux/nodemask.h>
 #include <linux/memblock.h>
+#include <linux/stop_machine.h>
 #include <linux/fs.h>
 #include <linux/io.h>
 #include <linux/mm.h>
@@ -615,6 +616,44 @@ void __init paging_init(void)
                               SWAPPER_DIR_SIZE - PAGE_SIZE);
 }
 
+#ifdef CONFIG_MEMORY_HOTPLUG
+
+/*
+ * hotplug_paging() is used by memory hotplug to build new page tables
+ * for hot-added memory.
+ */
+
+struct mem_range {
+        phys_addr_t base;
+        phys_addr_t size;
+};
+
+static int __hotplug_paging(void *data)
+{
+        int flags = 0;
+        struct mem_range *section = data;
+
+        if (debug_pagealloc_enabled())
+                flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+
+        __create_pgd_mapping(swapper_pg_dir, section->base,
+                        __phys_to_virt(section->base), section->size,
+                        PAGE_KERNEL, pgd_pgtable_alloc, flags);
+
+        return 0;
+}
+
+inline void hotplug_paging(phys_addr_t start, phys_addr_t size)
+{
+        struct mem_range section = {
+                .base = start,
+                .size = size,
+        };
+
+        stop_machine(__hotplug_paging, &section, NULL);
+}
+#endif /* CONFIG_MEMORY_HOTPLUG */
+
 /*
  * Check whether a kernel address is valid (derived from arch/x86/).
  */
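
For reference, the nomap trick in add_pages() leans on how pfn_valid()
is defined on arm64 at this kernel version: it reduces to a memblock
attribute check, so toggling the NOMAP flag on the first page directly
controls its result. Paraphrased from arch/arm64/mm/init.c:

    #ifdef CONFIG_HAVE_ARCH_PFN_VALID
    int pfn_valid(unsigned long pfn)
    {
            /* false for NOMAP blocks, which is what __add_section() needs */
            return memblock_is_map_memory(pfn << PAGE_SHIFT);
    }
    EXPORT_SYMBOL(pfn_valid);
    #endif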
-- 
2.7.4