From mboxrd@z Thu Jan  1 00:00:00 1970
From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: Will Deacon, Ard Biesheuvel, Michael Ellerman, Thomas Gleixner,
	Benjamin Herrenschmidt, Christophe Leroy, Paul Mackerras,
	Jonathan Marek, Catalin Marinas, Andrew Morton, Mike Rapoport,
	Mark Rutland, Geert Uytterhoeven, Marc Zyngier,
	linuxppc-dev@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/2] Revert "powerpc/8xx: add support for huge pages on VMAP and VMALLOC"
Date: Tue, 20 Jul 2021 13:35:11 +0100
Message-Id: <20210720123512.8740-2-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210720123512.8740-1-will@kernel.org>
References: <20210720123512.8740-1-will@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This reverts commit a6a8f7c4aa7eb50304b5c4e68eccd24313f3a785.

Commit c742199a014d ("mm/pgtable: add stubs for {pmd/pub}_{set/clear}_huge")
breaks the boot for arm64 when block mappings are used to create the
linear map, as this relies on a working implementation of pXd_set_huge()
even if the corresponding page-table levels have been folded.

Although the problematic patch reverts cleanly, doing so breaks the
build for 32-bit PowerPC 8xx machines, which rely on the default
function definitions when the corresponding page-table levels are
folded:

 | powerpc64-linux-ld: mm/vmalloc.o: in function `vunmap_pud_range':
 | linux/mm/vmalloc.c:362: undefined reference to `pud_clear_huge'
 | powerpc64-linux-ld: mm/vmalloc.o: in function `vunmap_pmd_range':
 | linux/mm/vmalloc.c:337: undefined reference to `pmd_clear_huge'
 | powerpc64-linux-ld: mm/vmalloc.o: in function `vunmap_pud_range':
 | linux/mm/vmalloc.c:362: undefined reference to `pud_clear_huge'
 | powerpc64-linux-ld: mm/vmalloc.o: in function `vunmap_pmd_range':
 | linux/mm/vmalloc.c:337: undefined reference to `pmd_clear_huge'
 | make: *** [Makefile:1177: vmlinux] Error 1

Although Christophe has kindly offered to look into the arm64 breakage,
he's on holiday for another 10 days and there isn't an obvious fix on
the arm64 side which allows us to continue using huge-vmap for affected
configurations.

In the interest of quickly getting things back to a working state as
they were in 5.13, revert the huge-vmap changes for PowerPC 8xx prior
to reverting the change which breaks arm64. We can then work on this
together for 5.15 once Christophe is back.

Cc: Ard Biesheuvel
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Christophe Leroy
Cc: Paul Mackerras
Cc: Catalin Marinas
Cc: Andrew Morton
Cc: Nicholas Piggin
Cc: Mark Rutland
Cc: Geert Uytterhoeven
Cc: Marc Zyngier
Link: https://lore.kernel.org/r/20210719170615.Horde.Qio1wp3k5ebLo-d9xXHdOg1@messagerie.c-s.fr
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/powerpc/Kconfig                         |  2 +-
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 43 --------------------
 2 files changed, 1 insertion(+), 44 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d01e3401581d..5fc19ac62cb9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -189,7 +189,7 @@ config PPC
 	select GENERIC_VDSO_TIME_NS
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_HUGE_VMALLOC		if HAVE_ARCH_HUGE_VMAP
-	select HAVE_ARCH_HUGE_VMAP		if PPC_RADIX_MMU || PPC_8xx
+	select HAVE_ARCH_HUGE_VMAP		if PPC_BOOK3S_64 && PPC_RADIX_MMU
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index 997cec973406..6e4faa0a9b35 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -178,7 +178,6 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/mmdebug.h>
-#include <linux/sizes.h>
 
 void mmu_pin_tlb(unsigned long top, bool readonly);
 
@@ -226,48 +225,6 @@ static inline unsigned int mmu_psize_to_shift(unsigned int mmu_psize)
 	BUG();
 }
 
-static inline bool arch_vmap_try_size(unsigned long addr, unsigned long end, u64 pfn,
-				      unsigned int max_page_shift, unsigned long size)
-{
-	if (end - addr < size)
-		return false;
-
-	if ((1UL << max_page_shift) < size)
-		return false;
-
-	if (!IS_ALIGNED(addr, size))
-		return false;
-
-	if (!IS_ALIGNED(PFN_PHYS(pfn), size))
-		return false;
-
-	return true;
-}
-
-static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, unsigned long end,
-							 u64 pfn, unsigned int max_page_shift)
-{
-	if (arch_vmap_try_size(addr, end, pfn, max_page_shift, SZ_512K))
-		return SZ_512K;
-	if (PAGE_SIZE == SZ_16K)
-		return SZ_16K;
-	if (arch_vmap_try_size(addr, end, pfn, max_page_shift, SZ_16K))
-		return SZ_16K;
-	return PAGE_SIZE;
-}
-#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
-
-static inline int arch_vmap_pte_supported_shift(unsigned long size)
-{
-	if (size >= SZ_512K)
-		return 19;
-	else if (size >= SZ_16K)
-		return 14;
-	else
-		return PAGE_SHIFT;
-}
-#define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift
-
 /* patch sites */
 extern s32 patch__itlbmiss_exit_1, patch__dtlbmiss_exit_1;
 extern s32 patch__itlbmiss_perf, patch__dtlbmiss_perf;
-- 
2.32.0.402.g57bb445576-goog