From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 23 Nov 2017 11:14:38 +0000
From: Andrea Reale
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	m.bielski@virtualopensystems.com, arunks@qti.qualcomm.com,
	mark.rutland@arm.com, scott.branden@broadcom.com,
	will.deacon@arm.com, qiuxishi@huawei.com, catalin.marinas@arm.com,
	mhocko@suse.com, realean2@ie.ibm.com
Subject: [PATCH v2 3/5] mm: memory_hotplug: memblock to track partially removed vmemmap mem

When hot-removing memory we need to free vmemmap memory. However, depending on how the memory is being removed, it might not always be possible to free a full vmemmap page / huge-page, because part of it might still be in use.
Commit ae9aae9eda2d ("memory-hotplug: common APIs to support page tables hot-remove") introduced a workaround for x86 hot-remove, by which partially unused areas are filled with the 0xFD constant; a full page is only removed once it is entirely filled with 0xFDs.

This commit introduces a MEMBLOCK_UNUSED_VMEMMAP memblock flag, with the goal of using it in place of the 0xFD fill. For now, this will be used for the arm64 port of memory hot-remove, but the idea is to eventually use the same mechanism for x86 as well.

Signed-off-by: Andrea Reale
Signed-off-by: Maciej Bielski
---
 include/linux/memblock.h | 12 ++++++++++++
 mm/memblock.c            | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index bae11c7..0daec05 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -26,6 +26,9 @@ enum {
 	MEMBLOCK_HOTPLUG	= 0x1,	/* hotpluggable region */
 	MEMBLOCK_MIRROR		= 0x2,	/* mirrored region */
 	MEMBLOCK_NOMAP		= 0x4,	/* don't add to kernel direct mapping */
+#ifdef CONFIG_MEMORY_HOTREMOVE
+	MEMBLOCK_UNUSED_VMEMMAP	= 0x8,	/* mark vmemmap blocks as unused */
+#endif
 };
 
 struct memblock_region {
@@ -90,6 +93,10 @@ int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
 int memblock_mark_nomap(phys_addr_t base, phys_addr_t size);
 int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
 ulong choose_memblock_flags(void);
+#ifdef CONFIG_MEMORY_HOTREMOVE
+int memblock_mark_unused_vmemmap(phys_addr_t base, phys_addr_t size);
+int memblock_clear_unused_vmemmap(phys_addr_t base, phys_addr_t size);
+#endif
 
 /* Low level functions */
 int memblock_add_range(struct memblock_type *type,
@@ -182,6 +189,11 @@ static inline bool memblock_is_nomap(struct memblock_region *m)
 	return m->flags & MEMBLOCK_NOMAP;
 }
 
+#ifdef CONFIG_MEMORY_HOTREMOVE
+bool memblock_is_vmemmap_unused_range(struct memblock_type *mt,
+		phys_addr_t start, phys_addr_t end);
+#endif
+
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 int
memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
			unsigned long *end_pfn);
diff --git a/mm/memblock.c b/mm/memblock.c
index 9120578..30d5aa4 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -809,6 +809,18 @@ int __init_memblock memblock_clear_nomap(phys_addr_t base, phys_addr_t size)
 	return memblock_setclr_flag(base, size, 0, MEMBLOCK_NOMAP);
 }
 
+#ifdef CONFIG_MEMORY_HOTREMOVE
+int __init_memblock memblock_mark_unused_vmemmap(phys_addr_t base,
+		phys_addr_t size)
+{
+	return memblock_setclr_flag(base, size, 1, MEMBLOCK_UNUSED_VMEMMAP);
+}
+int __init_memblock memblock_clear_unused_vmemmap(phys_addr_t base,
+		phys_addr_t size)
+{
+	return memblock_setclr_flag(base, size, 0, MEMBLOCK_UNUSED_VMEMMAP);
+}
+#endif
 /**
  * __next_reserved_mem_region - next function for for_each_reserved_region()
  * @idx: pointer to u64 loop variable
@@ -1696,6 +1708,26 @@ void __init_memblock memblock_trim_memory(phys_addr_t align)
 	}
 }
 
+#ifdef CONFIG_MEMORY_HOTREMOVE
+bool __init_memblock memblock_is_vmemmap_unused_range(struct memblock_type *mt,
+		phys_addr_t start, phys_addr_t end)
+{
+	u64 i;
+	struct memblock_region *r;
+
+	i = memblock_search(mt, start);
+	r = &(mt->regions[i]);
+	while (r->base < end) {
+		if (!(r->flags & MEMBLOCK_UNUSED_VMEMMAP))
+			return false;
+
+		r = &(mt->regions[++i]);
+	}
+
+	return true;
+}
+#endif
+
 void __init_memblock memblock_set_current_limit(phys_addr_t limit)
 {
 	memblock.current_limit = limit;
-- 
2.7.4