From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S934308AbcAKRP6 (ORCPT );
	Mon, 11 Jan 2016 12:15:58 -0500
Received: from mail-io0-f172.google.com ([209.85.223.172]:36364 "EHLO
	mail-io0-f172.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1757307AbcAKRP5 (ORCPT );
	Mon, 11 Jan 2016 12:15:57 -0500
MIME-Version: 1.0
In-Reply-To: 
References: <1452518355-4606-1-git-send-email-ard.biesheuvel@linaro.org>
	<1452518355-4606-5-git-send-email-ard.biesheuvel@linaro.org>
	<20160111160906.GO6499@leverpostej>
	<20160111162737.GQ6499@leverpostej>
	<20160111165103.GA29503@leverpostej>
Date: Mon, 11 Jan 2016 18:15:56 +0100
Message-ID: 
Subject: Re: [PATCH v3 04/21] arm64: decouple early fixmap init from linear mapping
From: Ard Biesheuvel 
To: Mark Rutland 
Cc: Kees Cook , Arnd Bergmann , kernel-hardening@lists.openwall.com,
	Sharma Bhupesh , Catalin Marinas , Will Deacon ,
	"linux-kernel@vger.kernel.org" , Leif Lindholm , Stuart Yoder ,
	Marc Zyngier , Christoffer Dall ,
	"linux-arm-kernel@lists.infradead.org" 
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On 11 January 2016 at 18:08, Ard Biesheuvel wrote:
> On 11 January 2016 at 17:51, Mark Rutland wrote:
>> On Mon, Jan 11, 2016 at 04:27:38PM +0000, Mark Rutland wrote:
>>> On Mon, Jan 11, 2016 at 05:15:13PM +0100, Ard Biesheuvel wrote:
>>> > On 11 January 2016 at 17:09, Mark Rutland wrote:
>>> > > On Mon, Jan 11, 2016 at 02:18:57PM +0100, Ard Biesheuvel wrote:
>>> > >> Since the early fixmap page tables are populated using pages that are
>>> > >> part of the static footprint of the kernel, they are covered by the
>>> > >> initial kernel mapping, and we can refer to them without using __va/__pa
>>> > >> translations, which are tied to the linear mapping.
>>> > >>
>>> > >> Since the fixmap page tables are disjoint from the kernel mapping up
>>> > >> to the top level pgd entry, we can refer to bm_pte[] directly, and there
>>> > >> is no need to walk the page tables and perform __pa()/__va() translations
>>> > >> at each step.
>>> > >>
>>> > >> Signed-off-by: Ard Biesheuvel
>>> > >> ---
>>> > >>  arch/arm64/mm/mmu.c | 32 ++++++--------------
>>> > >>  1 file changed, 9 insertions(+), 23 deletions(-)
>>> > >>
>>> > >> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> > >> index 7711554a94f4..75b5f0dc3bdc 100644
>>> > >> --- a/arch/arm64/mm/mmu.c
>>> > >> +++ b/arch/arm64/mm/mmu.c
>>> > >> @@ -570,38 +570,24 @@ void vmemmap_free(unsigned long start, unsigned long end)
>>> > >>  #endif /* CONFIG_SPARSEMEM_VMEMMAP */
>>> > >>
>>> > >>  static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
>>> > >> -#if CONFIG_PGTABLE_LEVELS > 2
>>> > >>  static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
>>> > >> -#endif
>>> > >> -#if CONFIG_PGTABLE_LEVELS > 3
>>> > >>  static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
>>> > >> -#endif
>>> > >>
>>> > >>  static inline pud_t * fixmap_pud(unsigned long addr)
>>> > >>  {
>>> > >> -	pgd_t *pgd = pgd_offset_k(addr);
>>> > >> -
>>> > >> -	BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd));
>>> > >> -
>>> > >> -	return pud_offset(pgd, addr);
>>> > >> +	return (CONFIG_PGTABLE_LEVELS > 3) ? &bm_pud[pud_index(addr)]
>>> > >> +					   : (pud_t *)pgd_offset_k(addr);
>>> > >
>>> > > If we move patch 6 earlier, we could use pud_offset_kimg here, and avoid
>>> > > the cast, at the cost of passing the pgd into fixmap_pud.
>>> > >
>>> > > Similarly for fixmap_pmd.
>>> > >
>>> >
>>> > Is that necessarily an improvement? I know it hides the cast, but I
>>> > think having an explicit pgd_t* to pud_t* cast that so obviously
>>> > applies to CONFIG_PGTABLE_LEVELS < 4 only is fine as well.
>>>
>>> True; it's not a big thing either way.
>>
>> Sorry, I'm going to change my mind on that again. I think using
>> p?d_offset_kimg is preferable, e.g.
>>
>> static inline pud_t * fixmap_pud(unsigned long addr)
>> {
>> 	pgd_t *pgd = pgd_offset_k(addr);
>>
>> 	BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd));
>>
>> 	return pud_offset_kimg(pgd, addr);
>> }
>>
>> static inline pmd_t * fixmap_pmd(unsigned long addr)
>> {
>> 	pud_t *pud = fixmap_pud(addr);
>>
>> 	BUG_ON(pud_none(*pud) || pud_bad(*pud));
>>
>> 	return pmd_offset_kimg(pud, addr);
>> }
>>
>> That avoids having to check CONFIG_PGTABLE_LEVELS and perform a cast,
>> avoids duplicating details about bm_{pud,pmd}, and keeps the existing
>> structure, so it's easier to reason about the change. I was wrong about
>> having to pass the pgd or pud in, so callers don't need updating.
>>
>> From my PoV that is preferable.
>>
>
> OK. I think it looks better, indeed.

... however, this does mean we have to go through a __pa() translation
and back just to get to the address of bm_pud/bm_pmd.
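To make that last trade-off concrete, here is a minimal sketch contrasting
the two approaches. It is illustration only: phys_to_kimg() and the
fixmap_pud_* function names below are hypothetical stand-ins for whatever
patch 6 of the series actually provides, not code from the series itself.

/*
 * Illustrative sketch only: phys_to_kimg() and the function names are
 * hypothetical stand-ins, not helpers introduced by this series.
 */
static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;

/* kimg-style walk: read the pgd entry, then translate phys back to virt */
static inline pud_t *fixmap_pud_via_walk(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);
	phys_addr_t pud_phys;

	BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd));

	/* the pgd entry holds __pa(bm_pud) ... */
	pud_phys = pgd_val(*pgd) & PHYS_MASK & PAGE_MASK;

	/*
	 * ... so it has to be translated back to a virtual address, even
	 * though &bm_pud is already a link-time constant
	 */
	return (pud_t *)phys_to_kimg(pud_phys) + pud_index(addr);
}

/* direct reference, as in the patch: no walk and no translation round trip */
static inline pud_t *fixmap_pud_direct(unsigned long addr)
{
	return (CONFIG_PGTABLE_LEVELS > 3) ? &bm_pud[pud_index(addr)]
					   : (pud_t *)pgd_offset_k(addr);
}

Both end up at the same pointer; the question in the thread is only whether
the extra walk and translation are an acceptable price for keeping the
existing structure of the accessors.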