From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 04 Dec 2019 16:54:28 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, anton.ivanov@cambridgegreys.com, arnd@arndb.de,
 davem@davemloft.net, deanbo422@gmail.com, deller@gmx.de, eike-kernel@sf-tec.de,
 geert@linux-m68k.org, gerg@linux-m68k.org, green.hu@gmail.com,
 James.Bottomley@HansenPartnership.com, jdike@addtoit.com, kirill@shutemov.name,
 linux-mm@kvack.org, linux@armlinux.org.uk, matorola@gmail.com,
 mattst88@gmail.com, mm-commits@vger.kernel.org, monstr@monstr.eu,
 msalter@redhat.com, peda@axentia.se, richard@nod.at,
 rmk+kernel@armlinux.org.uk, rppt@linux.ibm.com, sammy@sammy.net,
 torvalds@linux-foundation.org, Vineet.Gupta1@synopsys.com
Subject: [patch 85/86] um: add support for folded p4d page tables
Message-ID: <20191205005428.OcqNnvzdb%akpm@linux-foundation.org>
In-Reply-To: <20191204164858.fe4ed8886e34ad9f3b34ea00@linux-foundation.org>

From: Mike Rapoport
Subject: um: add support for folded p4d page tables

The UML port uses 4- and 5-level fixups to support higher-level page table
directories in the generic VM code.

Implement the primitives necessary for folding the 4th (p4d) level, add p4d
walks where appropriate, and drop the use of __ARCH_USE_5LEVEL_HACK.

Link: http://lkml.kernel.org/r/1572938135-31886-13-git-send-email-rppt@kernel.org
Signed-off-by: Mike Rapoport
Cc: Anatoly Pugachev
Cc: Anton Ivanov
Cc: Arnd Bergmann
Cc: "David S. Miller"
Cc: Geert Uytterhoeven
Cc: Greentime Hu
Cc: Greg Ungerer
Cc: Helge Deller
Cc: "James E.J. Bottomley"
Cc: Jeff Dike
Cc: "Kirill A. Shutemov"
Cc: Mark Salter
Cc: Matt Turner
Cc: Michal Simek
Cc: Peter Rosin
Cc: Richard Weinberger
Cc: Rolf Eike Beer
Cc: Russell King
Cc: Russell King
Cc: Sam Creasey
Cc: Vincent Chen
Cc: Vineet Gupta
Signed-off-by: Andrew Morton
---

 arch/um/include/asm/pgtable-2level.h |    1
 arch/um/include/asm/pgtable-3level.h |    1
 arch/um/include/asm/pgtable.h        |    3 +
 arch/um/kernel/mem.c                 |    8 ++-
 arch/um/kernel/skas/mmu.c            |   12 ++++-
 arch/um/kernel/skas/uaccess.c        |    7 ++-
 arch/um/kernel/tlb.c                 |   56 ++++++++++++++++++++++---
 arch/um/kernel/trap.c                |    4 +
 8 files changed, 78 insertions(+), 14 deletions(-)

--- a/arch/um/include/asm/pgtable-2level.h~um-add-support-for-folded-p4d-page-tables
+++ a/arch/um/include/asm/pgtable-2level.h
@@ -8,7 +8,6 @@
 #ifndef __UM_PGTABLE_2LEVEL_H
 #define __UM_PGTABLE_2LEVEL_H
 
-#define __ARCH_USE_5LEVEL_HACK
 #include
 
 /* PGDIR_SHIFT determines what a third-level page table entry can map */
--- a/arch/um/include/asm/pgtable-3level.h~um-add-support-for-folded-p4d-page-tables
+++ a/arch/um/include/asm/pgtable-3level.h
@@ -7,7 +7,6 @@
 #ifndef __UM_PGTABLE_3LEVEL_H
 #define __UM_PGTABLE_3LEVEL_H
 
-#define __ARCH_USE_5LEVEL_HACK
 #include
 
 /* PGDIR_SHIFT determines what a third-level page table entry can map */
--- a/arch/um/include/asm/pgtable.h~um-add-support-for-folded-p4d-page-tables
+++ a/arch/um/include/asm/pgtable.h
@@ -106,6 +106,9 @@ extern unsigned long end_iomem;
 #define pud_newpage(x)  (pud_val(x) & _PAGE_NEWPAGE)
 #define pud_mkuptodate(x) (pud_val(x) &= ~_PAGE_NEWPAGE)
 
+#define p4d_newpage(x)  (p4d_val(x) & _PAGE_NEWPAGE)
+#define p4d_mkuptodate(x) (p4d_val(x) &= ~_PAGE_NEWPAGE)
+
 #define pmd_page(pmd) phys_to_page(pmd_val(pmd) & PAGE_MASK)
 
 #define pte_page(x) pfn_to_page(pte_pfn(x))
--- a/arch/um/kernel/mem.c~um-add-support-for-folded-p4d-page-tables
+++ a/arch/um/kernel/mem.c
@@ -96,6 +96,7 @@ static void __init fixrange_init(unsigne
 				 pgd_t *pgd_base)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	int i, j;
@@ -107,7 +108,8 @@ static void __init fixrange_init(unsigne
 	pgd = pgd_base + i;
 
 	for ( ; (i < PTRS_PER_PGD) && (vaddr < end); pgd++, i++) {
-		pud = pud_offset(pgd, vaddr);
+		p4d = p4d_offset(pgd, vaddr);
+		pud = pud_offset(p4d, vaddr);
 		if (pud_none(*pud))
 			one_md_table_init(pud);
 		pmd = pmd_offset(pud, vaddr);
@@ -124,6 +126,7 @@ static void __init fixaddr_user_init( vo
 #ifdef CONFIG_ARCH_REUSE_HOST_VSYSCALL_AREA
 	long size = FIXADDR_USER_END - FIXADDR_USER_START;
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -144,7 +147,8 @@ static void __init fixaddr_user_init( vo
 	for ( ; size > 0; size -= PAGE_SIZE, vaddr += PAGE_SIZE,
 		      p += PAGE_SIZE) {
 		pgd = swapper_pg_dir + pgd_index(vaddr);
-		pud = pud_offset(pgd, vaddr);
+		p4d = p4d_offset(pgd, vaddr);
+		pud = pud_offset(p4d, vaddr);
 		pmd = pmd_offset(pud, vaddr);
 		pte = pte_offset_kernel(pmd, vaddr);
 		pte_set_val(*pte, p, PAGE_READONLY);
--- a/arch/um/kernel/skas/mmu.c~um-add-support-for-folded-p4d-page-tables
+++ a/arch/um/kernel/skas/mmu.c
@@ -19,15 +19,21 @@ static int init_stub_pte(struct mm_struc
 			 unsigned long kernel)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
 
 	pgd = pgd_offset(mm, proc);
-	pud = pud_alloc(mm, pgd, proc);
-	if (!pud)
+
+	p4d = p4d_alloc(mm, pgd, proc);
+	if (!p4d)
 		goto out;
 
+	pud = pud_alloc(mm, p4d, proc);
+	if (!pud)
+		goto out_pud;
+
 	pmd = pmd_alloc(mm, pud, proc);
 	if (!pmd)
 		goto out_pmd;
@@ -44,6 +50,8 @@ static int init_stub_pte(struct mm_struc
 	pmd_free(mm, pmd);
  out_pmd:
 	pud_free(mm, pud);
+ out_pud:
+	p4d_free(mm, p4d);
  out:
 	return -ENOMEM;
 }
--- a/arch/um/kernel/skas/uaccess.c~um-add-support-for-folded-p4d-page-tables
+++ a/arch/um/kernel/skas/uaccess.c
@@ -17,6 +17,7 @@ pte_t *virt_to_pte(struct mm_struct *mm,
 		   unsigned long addr)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 
@@ -27,7 +28,11 @@ pte_t *virt_to_pte(struct mm_struct *mm,
 	if (!pgd_present(*pgd))
 		return NULL;
 
-	pud = pud_offset(pgd, addr);
+	p4d = p4d_offset(pgd, addr);
+	if (!p4d_present(*p4d))
+		return NULL;
+
+	pud = pud_offset(p4d, addr);
 	if (!pud_present(*pud))
 		return NULL;
 
--- a/arch/um/kernel/tlb.c~um-add-support-for-folded-p4d-page-tables
+++ a/arch/um/kernel/tlb.c
@@ -277,7 +277,7 @@ static inline int update_pmd_range(pud_t
 	return ret;
 }
 
-static inline int update_pud_range(pgd_t *pgd, unsigned long addr,
+static inline int update_pud_range(p4d_t *p4d, unsigned long addr,
 				   unsigned long end,
 				   struct host_vm_change *hvc)
 {
@@ -285,7 +285,7 @@ static inline int update_pud_range(pgd_t
 	unsigned long next;
 	int ret = 0;
 
-	pud = pud_offset(pgd, addr);
+	pud = pud_offset(p4d, addr);
 	do {
 		next = pud_addr_end(addr, end);
 		if (!pud_present(*pud)) {
@@ -299,6 +299,28 @@ static inline int update_pud_range(pgd_t
 	return ret;
 }
 
+static inline int update_p4d_range(pgd_t *pgd, unsigned long addr,
+				   unsigned long end,
+				   struct host_vm_change *hvc)
+{
+	p4d_t *p4d;
+	unsigned long next;
+	int ret = 0;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		next = p4d_addr_end(addr, end);
+		if (!p4d_present(*p4d)) {
+			if (hvc->force || p4d_newpage(*p4d)) {
+				ret = add_munmap(addr, next - addr, hvc);
+				p4d_mkuptodate(*p4d);
+			}
+		} else
+			ret = update_pud_range(p4d, addr, next, hvc);
+	} while (p4d++, addr = next, ((addr < end) && !ret));
+	return ret;
+}
+
 void fix_range_common(struct mm_struct *mm, unsigned long start_addr,
 		      unsigned long end_addr, int force)
 {
@@ -316,8 +338,8 @@ void fix_range_common(struct mm_struct *
 				ret = add_munmap(addr, next - addr, &hvc);
 				pgd_mkuptodate(*pgd);
 			}
-		}
-		else ret = update_pud_range(pgd, addr, next, &hvc);
+		} else
+			ret = update_p4d_range(pgd, addr, next, &hvc);
 	} while (pgd++, addr = next, ((addr < end_addr) && !ret));
 
 	if (!ret)
@@ -338,6 +360,7 @@ static int flush_tlb_kernel_range_common
 {
 	struct mm_struct *mm;
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -364,7 +387,23 @@ static int flush_tlb_kernel_range_common
 			continue;
 		}
 
-		pud = pud_offset(pgd, addr);
+		p4d = p4d_offset(pgd, addr);
+		if (!p4d_present(*p4d)) {
+			last = ADD_ROUND(addr, P4D_SIZE);
+			if (last > end)
+				last = end;
+			if (p4d_newpage(*p4d)) {
+				updated = 1;
+				err = add_munmap(addr, last - addr, &hvc);
+				if (err < 0)
+					panic("munmap failed, errno = %d\n",
%d\n", + -err); + } + addr = last; + continue; + } + + pud = pud_offset(p4d, addr); if (!pud_present(*pud)) { last = ADD_ROUND(addr, PUD_SIZE); if (last > end) @@ -424,6 +463,7 @@ static int flush_tlb_kernel_range_common void flush_tlb_page(struct vm_area_struct *vma, unsigned long address) { pgd_t *pgd; + p4d_t *p4d; pud_t *pud; pmd_t *pmd; pte_t *pte; @@ -437,7 +477,11 @@ void flush_tlb_page(struct vm_area_struc if (!pgd_present(*pgd)) goto kill; - pud = pud_offset(pgd, address); + p4d = p4d_offset(pgd, address); + if (!p4d_present(*p4d)) + goto kill; + + pud = pud_offset(p4d, address); if (!pud_present(*pud)) goto kill; --- a/arch/um/kernel/trap.c~um-add-support-for-folded-p4d-page-tables +++ a/arch/um/kernel/trap.c @@ -28,6 +28,7 @@ int handle_page_fault(unsigned long addr struct mm_struct *mm = current->mm; struct vm_area_struct *vma; pgd_t *pgd; + p4d_t *p4d; pud_t *pud; pmd_t *pmd; pte_t *pte; @@ -104,7 +105,8 @@ good_area: } pgd = pgd_offset(mm, address); - pud = pud_offset(pgd, address); + p4d = p4d_offset(pgd, address); + pud = pud_offset(p4d, address); pmd = pmd_offset(pud, address); pte = pte_offset_kernel(pmd, address); } while (!pte_present(*pte)); _