From: Mike Rapoport <rppt@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Arnd Bergmann, Benjamin Herrenschmidt, Brian Cain,
    Catalin Marinas, Christophe Leroy, Fenghua Yu, Geert Uytterhoeven,
    Guan Xuetao, James Morse, Jonas Bonn, Julien Thierry, Ley Foon Tan,
    Marc Zyngier, Michael Ellerman, Paul Mackerras, Rich Felker,
    Russell King, Stafford Horne, Stefan Kristiansson, Suzuki K Poulose,
    Tony Luck, Will Deacon, Yoshinori Sato, kvmarm@lists.cs.columbia.edu,
    kvm-ppc@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org,
    linux-ia64@vger.kernel.org, linux-mm@kvack.org,
    linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org,
    nios2-dev@lists.rocketboards.org,
    openrisc@lists.librecores.org, uclinux-h8-devel@lists.sourceforge.jp,
    Mike Rapoport
Subject: [PATCH v2 10/13] sh: add support for folded p4d page tables
Date: Sun, 16 Feb 2020 10:18:40 +0200
Message-Id: <20200216081843.28670-11-rppt@kernel.org>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20200216081843.28670-1-rppt@kernel.org>
References: <20200216081843.28670-1-rppt@kernel.org>

From: Mike Rapoport

Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate and remove usage of __ARCH_USE_5LEVEL_HACK.

Signed-off-by: Mike Rapoport
---
 arch/sh/include/asm/pgtable-2level.h |  1 -
 arch/sh/include/asm/pgtable-3level.h |  1 -
 arch/sh/kernel/io_trapped.c          |  7 ++++++-
 arch/sh/mm/cache-sh4.c               |  4 +++-
 arch/sh/mm/cache-sh5.c               |  7 ++++++-
 arch/sh/mm/fault.c                   | 26 +++++++++++++++++++++++---
 arch/sh/mm/hugetlbpage.c             | 28 ++++++++++++++++++----------
 arch/sh/mm/init.c                    |  9 ++++++++-
 arch/sh/mm/kmap.c                    |  2 +-
 arch/sh/mm/tlbex_32.c                |  6 +++++-
 arch/sh/mm/tlbex_64.c                |  7 ++++++-
 11 files changed, 76 insertions(+), 22 deletions(-)

diff --git a/arch/sh/include/asm/pgtable-2level.h b/arch/sh/include/asm/pgtable-2level.h
index bf1eb51c3ee5..08bff93927ff 100644
--- a/arch/sh/include/asm/pgtable-2level.h
+++ b/arch/sh/include/asm/pgtable-2level.h
@@ -2,7 +2,6 @@
 #ifndef __ASM_SH_PGTABLE_2LEVEL_H
 #define __ASM_SH_PGTABLE_2LEVEL_H

-#define __ARCH_USE_5LEVEL_HACK
 #include <asm-generic/pgtable-nopmd.h>

 /*
diff --git a/arch/sh/include/asm/pgtable-3level.h b/arch/sh/include/asm/pgtable-3level.h
index 779260b721ca..0f80097e5c9c 100644
--- a/arch/sh/include/asm/pgtable-3level.h
+++ b/arch/sh/include/asm/pgtable-3level.h
@@ -2,7 +2,6 @@
 #ifndef __ASM_SH_PGTABLE_3LEVEL_H
 #define __ASM_SH_PGTABLE_3LEVEL_H

-#define __ARCH_USE_5LEVEL_HACK
 #include <asm-generic/pgtable-nopud.h>

 /*
diff --git a/arch/sh/kernel/io_trapped.c b/arch/sh/kernel/io_trapped.c
index 60c828a2b8a2..037aab2708b7 100644
--- a/arch/sh/kernel/io_trapped.c
+++ b/arch/sh/kernel/io_trapped.c
@@ -136,6 +136,7 @@ EXPORT_SYMBOL_GPL(match_trapped_io_handler);
 static struct trapped_io *lookup_tiop(unsigned long address)
 {
 	pgd_t *pgd_k;
+	p4d_t *p4d_k;
 	pud_t *pud_k;
 	pmd_t *pmd_k;
 	pte_t *pte_k;
@@ -145,7 +146,11 @@ static struct trapped_io *lookup_tiop(unsigned long address)
 	if (!pgd_present(*pgd_k))
 		return NULL;

-	pud_k = pud_offset(pgd_k, address);
+	p4d_k = p4d_offset(pgd_k, address);
+	if (!p4d_present(*p4d_k))
+		return NULL;
+
+	pud_k = pud_offset(p4d_k, address);
 	if (!pud_present(*pud_k))
 		return NULL;

diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c
index eee911422cf9..45943bcb7042 100644
--- a/arch/sh/mm/cache-sh4.c
+++ b/arch/sh/mm/cache-sh4.c
@@ -209,6 +209,7 @@ static void sh4_flush_cache_page(void *args)
 	unsigned long address, pfn, phys;
 	int map_coherent = 0;
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -224,7 +225,8 @@ static void sh4_flush_cache_page(void *args)
 		return;

 	pgd = pgd_offset(vma->vm_mm, address);
-	pud = pud_offset(pgd, address);
+	p4d = p4d_offset(pgd, address);
+	pud = pud_offset(p4d, address);
 	pmd = pmd_offset(pud, address);
 	pte = pte_offset_kernel(pmd, address);

diff --git a/arch/sh/mm/cache-sh5.c b/arch/sh/mm/cache-sh5.c
index 445b5e69b73c..442a77cc2957 100644
--- a/arch/sh/mm/cache-sh5.c
+++ b/arch/sh/mm/cache-sh5.c
@@ -383,6 +383,7 @@ static void sh64_dcache_purge_user_pages(struct mm_struct *mm,
 				unsigned long addr, unsigned long end)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -397,7 +398,11 @@ static void sh64_dcache_purge_user_pages(struct mm_struct *mm,
 	if (pgd_bad(*pgd))
 		return;

-	pud = pud_offset(pgd, addr);
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d) || p4d_bad(*p4d))
+		return;
+
+	pud = pud_offset(p4d, addr);
 	if (pud_none(*pud) || pud_bad(*pud))
 		return;

diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index a2b0275413e8..ebd30003fd06 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -53,6 +53,7 @@ static void show_pte(struct mm_struct *mm, unsigned long addr)
 		 (u64)pgd_val(*pgd));

 	do {
+		p4d_t *p4d;
 		pud_t *pud;
 		pmd_t *pmd;
 		pte_t *pte;
@@ -65,7 +66,20 @@ static void show_pte(struct mm_struct *mm, unsigned long addr)
 			break;
 		}

-		pud = pud_offset(pgd, addr);
+		p4d = p4d_offset(pgd, addr);
+		if (PTRS_PER_P4D != 1)
+			pr_cont(", *p4d=%0*Lx", (u32)(sizeof(*p4d) * 2),
+				(u64)p4d_val(*p4d));
+
+		if (p4d_none(*p4d))
+			break;
+
+		if (p4d_bad(*p4d)) {
+			pr_cont("(bad)");
+			break;
+		}
+
+		pud = pud_offset(p4d, addr);
 		if (PTRS_PER_PUD != 1)
 			pr_cont(", *pud=%0*llx", (u32)(sizeof(*pud) * 2),
 				(u64)pud_val(*pud));
@@ -107,6 +121,7 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 {
 	unsigned index = pgd_index(address);
 	pgd_t *pgd_k;
+	p4d_t *p4d, *p4d_k;
 	pud_t *pud, *pud_k;
 	pmd_t *pmd, *pmd_k;

@@ -116,8 +131,13 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 	if (!pgd_present(*pgd_k))
 		return NULL;

-	pud = pud_offset(pgd, address);
-	pud_k = pud_offset(pgd_k, address);
+	p4d = p4d_offset(pgd, address);
+	p4d_k = p4d_offset(pgd_k, address);
+	if (!p4d_present(*p4d_k))
+		return NULL;
+
+	pud = pud_offset(p4d, address);
+	pud_k = pud_offset(p4d_k, address);
 	if (!pud_present(*pud_k))
 		return NULL;

diff --git a/arch/sh/mm/hugetlbpage.c b/arch/sh/mm/hugetlbpage.c
index 960deb1f24a1..acd5652a0de3 100644
--- a/arch/sh/mm/hugetlbpage.c
+++ b/arch/sh/mm/hugetlbpage.c
@@ -26,17 +26,21 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte = NULL;

 	pgd = pgd_offset(mm, addr);
 	if (pgd) {
-		pud = pud_alloc(mm, pgd, addr);
-		if (pud) {
-			pmd = pmd_alloc(mm, pud, addr);
-			if (pmd)
-				pte = pte_alloc_map(mm, pmd, addr);
+		p4d = p4d_alloc(mm, pgd, addr);
+		if (p4d) {
+			pud = pud_alloc(mm, p4d, addr);
+			if (pud) {
+				pmd = pmd_alloc(mm, pud, addr);
+				if (pmd)
+					pte = pte_alloc_map(mm, pmd, addr);
+			}
 		}
 	}

@@ -47,17 +51,21 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte = NULL;

 	pgd = pgd_offset(mm, addr);
 	if (pgd) {
-		pud = pud_offset(pgd, addr);
-		if (pud) {
-			pmd = pmd_offset(pud, addr);
-			if (pmd)
-				pte = pte_offset_map(pmd, addr);
+		p4d = p4d_offset(pgd, addr);
+		if (p4d) {
+			pud = pud_offset(p4d, addr);
+			if (pud) {
+				pmd = pmd_offset(pud, addr);
+				if (pmd)
+					pte = pte_offset_map(pmd, addr);
+			}
 		}
 	}

diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 4bab79baee75..594203530d43 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -45,6 +45,7 @@ void __init __weak plat_mem_setup(void)
 static pte_t *__get_pte_phys(unsigned long addr)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;

@@ -54,7 +55,13 @@ static pte_t *__get_pte_phys(unsigned long addr)
 		return NULL;
 	}

-	pud = pud_alloc(NULL, pgd, addr);
+	p4d = p4d_alloc(NULL, pgd, addr);
+	if (unlikely(!p4d)) {
+		p4d_ERROR(*p4d);
+		return NULL;
+	}
+
+	pud = pud_alloc(NULL, p4d, addr);
 	if (unlikely(!pud)) {
 		pud_ERROR(*pud);
 		return NULL;
diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c
index 9e6b38b03cf7..0e7039137f5a 100644
--- a/arch/sh/mm/kmap.c
+++ b/arch/sh/mm/kmap.c
@@ -15,7 +15,7 @@
 #include <asm/cacheflush.h>

 #define kmap_get_fixmap_pte(vaddr)                                     \
-	pte_offset_kernel(pmd_offset(pud_offset(pgd_offset_k(vaddr), (vaddr)), (vaddr)), (vaddr))
+	pte_offset_kernel(pmd_offset(pud_offset(p4d_offset(pgd_offset_k(vaddr), (vaddr)), (vaddr)), (vaddr)), vaddr)

 static pte_t *kmap_coherent_pte;

diff --git a/arch/sh/mm/tlbex_32.c b/arch/sh/mm/tlbex_32.c
index 382262dc0c4b..1c53868632ee 100644
--- a/arch/sh/mm/tlbex_32.c
+++ b/arch/sh/mm/tlbex_32.c
@@ -23,6 +23,7 @@ handle_tlbmiss(struct pt_regs *regs, unsigned long error_code,
 	       unsigned long address)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -42,7 +43,10 @@ handle_tlbmiss(struct pt_regs *regs, unsigned long error_code,
 		pgd = pgd_offset(current->mm, address);
 	}

-	pud = pud_offset(pgd, address);
+	p4d = p4d_offset(pgd, address);
+	if (p4d_none_or_clear_bad(p4d))
+		return 1;
+	pud = pud_offset(p4d, address);
 	if (pud_none_or_clear_bad(pud))
 		return 1;
 	pmd = pmd_offset(pud, address);
diff --git a/arch/sh/mm/tlbex_64.c b/arch/sh/mm/tlbex_64.c
index 8ff966dd0c74..0d015f7556fa 100644
--- a/arch/sh/mm/tlbex_64.c
+++ b/arch/sh/mm/tlbex_64.c
@@ -44,6 +44,7 @@ static int handle_tlbmiss(unsigned long long protection_flags,
 			  unsigned long address)
 {
 	pgd_t *pgd;
+	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -58,7 +59,11 @@ static int handle_tlbmiss(unsigned long long protection_flags,
 		pgd = pgd_offset(current->mm, address);
 	}

-	pud = pud_offset(pgd, address);
+	p4d = p4d_offset(pgd, address);
+	if (p4d_none(*p4d) || !p4d_present(*p4d))
+		return 1;
+
+	pud = pud_offset(p4d, address);
 	if (pud_none(*pud) || !pud_present(*pud))
 		return 1;

-- 
2.24.0
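
Not part of the patch, just a reviewer's note: every hunk above applies the
same conversion, inserting one p4d step between the pgd and pud levels. A
minimal sketch of the resulting lookup pattern is below; the helper name
walk_to_pte() is invented here for illustration and does not appear in the
series.

/* Illustrative only: the generic pgd -> p4d -> pud -> pmd -> pte walk. */
#include <linux/mm.h>

static pte_t *walk_to_pte(struct mm_struct *mm, unsigned long address)
{
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	pgd = pgd_offset(mm, address);
	if (pgd_none_or_clear_bad(pgd))
		return NULL;

	/*
	 * New step: with <asm-generic/pgtable-nop4d.h> the level folds away,
	 * PTRS_PER_P4D is 1 and p4d_offset() simply returns its pgd argument
	 * recast as a p4d_t pointer.
	 */
	p4d = p4d_offset(pgd, address);
	if (p4d_none_or_clear_bad(p4d))
		return NULL;

	/* pud_offset() now takes the p4d entry instead of the pgd entry. */
	pud = pud_offset(p4d, address);
	if (pud_none_or_clear_bad(pud))
		return NULL;

	pmd = pmd_offset(pud, address);
	if (pmd_none_or_clear_bad(pmd))
		return NULL;

	return pte_offset_kernel(pmd, address);
}

The allocation side in the patch (huge_pte_alloc(), __get_pte_phys()) follows
the same shape, with p4d_alloc() inserted between pgd_offset() and pud_alloc().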