From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 11 May 2021 16:49:33 +0100
From: Mark Rutland
To: Ard Biesheuvel
Cc: Fuad Tabba, Linux ARM, Will Deacon, Catalin Marinas, Marc Zyngier,
 James Morse, Alexandru Elisei, Suzuki K Poulose
Subject: Re: [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance
 functions
Message-ID: <20210511154933.GF8933@C02TD0UTHF1T.local>
References: <20210511144252.3779113-1-tabba@google.com>
 <20210511144252.3779113-14-tabba@google.com>
Sender: "linux-arm-kernel"

On Tue, May 11, 2021 at 05:09:18PM +0200, Ard Biesheuvel wrote:
> On Tue, 11 May 2021 at 16:43, Fuad Tabba wrote:
> >
> > Although naming across the codebase isn't that consistent, it
> > tends to follow certain patterns. Moreover, the term "flush"
> > isn't defined in the Arm Architecture Reference Manual, and might
> > be interpreted to mean clean, invalidate, or both for a cache.
> >
> > Rename arm64-internal functions to make the naming internally
> > consistent, as well as making it consistent with the Arm ARM, by
> > clarifying whether the operation is a clean, invalidate, or both.
> > Also specify the point to which the operation applies, i.e., the
> > point of unification (PoU), coherence (PoC), or persistence
> > (PoP).
> >
> > This commit applies the following sed transformation to all files
> > under arch/arm64:
> >
> > "s/\b__flush_cache_range\b/__clean_inval_cache_pou_macro/g;"\
> > "s/\b__flush_icache_range\b/__clean_inval_cache_pou/g;"\

For icaches, a "flush" is just an invalidate, so this doesn't need
"clean".
> > "s/\binvalidate_icache_range\b/__inval_icache_pou/g;"\
> > "s/\b__flush_dcache_area\b/__clean_inval_dcache_poc/g;"\
> > "s/\b__inval_dcache_area\b/__inval_dcache_poc/g;"\
> > "s/__clean_dcache_area_poc\b/__clean_dcache_poc/g;"\
> > "s/\b__clean_dcache_area_pop\b/__clean_dcache_pop/g;"\
> > "s/\b__clean_dcache_area_pou\b/__clean_dcache_pou/g;"\
> > "s/\b__flush_cache_user_range\b/__clean_inval_cache_user_pou/g;"\
> > "s/\b__flush_icache_all\b/__clean_inval_all_icache_pou/g;"

Likewise here.

> >
> > Note that __clean_dcache_area_poc is deliberately missing a word
> > boundary check to match the efistub symbols in image-vars.h.
> >
> > No functional change intended.
> >
> > Signed-off-by: Fuad Tabba
>
> I am a big fan of this change: code is so much easier to read if the
> names of subroutines match their intent.

Likewise!

> I would suggest, though, that we get rid of all the leading
> underscores while at it: we often use them when refactoring existing
> routines into separate pieces (which is where at least some of these
> came from), but here, they seem to have little value.

That all makes sense to me; I'd also suggest we make the cache type the
prefix, e.g.

* icache_clean_pou
* dcache_clean_inval_poc
* caches_clean_inval_user_pou // D+I caches

... since then it's easier to read consistently, rather than having to
search for the cache type midway through the name.

Thanks,
Mark.
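[Editor's sketch, not part of the thread: the sed pass quoted above can be smoke-tested on a sample line. The two rules below are copied from the commit message's list; the helper function name and the sample input are made up for illustration, and GNU sed's \b word-boundary support is assumed.]

```shell
# Exercise two of the quoted rename rules on a sample source line.
rename_cache_fns() {
	sed -e 's/\b__flush_icache_range\b/__clean_inval_cache_pou/g' \
	    -e 's/\b__flush_dcache_area\b/__clean_inval_dcache_poc/g'
}

printf '%s\n' '__flush_icache_range(start, end);' | rename_cache_fns
# -> __clean_inval_cache_pou(start, end);
```

The \b anchors matter: without them a rule like the __clean_dcache_area_poc one (deliberately unanchored on the left, per the commit message) would also rewrite the __pi_/__efistub_ prefixed aliases, which is exactly why that one rule differs.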
> >
> > ---
> >  arch/arm64/include/asm/arch_gicv3.h |  2 +-
> >  arch/arm64/include/asm/cacheflush.h | 36 +++++++++----------
> >  arch/arm64/include/asm/efi.h        |  2 +-
> >  arch/arm64/include/asm/kvm_mmu.h    |  6 ++--
> >  arch/arm64/kernel/alternative.c     |  2 +-
> >  arch/arm64/kernel/efi-entry.S       |  4 +--
> >  arch/arm64/kernel/head.S            |  8 ++---
> >  arch/arm64/kernel/hibernate.c       | 12 +++----
> >  arch/arm64/kernel/idreg-override.c  |  2 +-
> >  arch/arm64/kernel/image-vars.h      |  2 +-
> >  arch/arm64/kernel/insn.c            |  2 +-
> >  arch/arm64/kernel/kaslr.c           |  6 ++--
> >  arch/arm64/kernel/machine_kexec.c   | 10 +++---
> >  arch/arm64/kernel/smp.c             |  4 +--
> >  arch/arm64/kernel/smp_spin_table.c  |  4 +--
> >  arch/arm64/kernel/sys_compat.c      |  2 +-
> >  arch/arm64/kvm/arm.c                |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +--
> >  arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
> >  arch/arm64/kvm/hyp/pgtable.c        |  4 +--
> >  arch/arm64/lib/uaccess_flushcache.c |  4 +--
> >  arch/arm64/mm/cache.S               | 56 ++++++++++++++---------
> >  arch/arm64/mm/flush.c               | 12 +++----
> >  24 files changed, 95 insertions(+), 95 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
> > index ed1cc9d8e6df..4b7ac9098e8f 100644
> > --- a/arch/arm64/include/asm/arch_gicv3.h
> > +++ b/arch/arm64/include/asm/arch_gicv3.h
> > @@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
> >  #define gic_write_lpir(v, c)	writeq_relaxed(v, c)
> >
> >  #define gic_flush_dcache_to_poc(a,l)	\
> > -	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> > +	__clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
> >
> >  #define gits_read_baser(c)	readq_relaxed(c)
> >  #define gits_write_baser(v, c)	writeq_relaxed(v, c)
> > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > index 4b91d3530013..526eee4522eb 100644
> > --- a/arch/arm64/include/asm/cacheflush.h
> > +++ b/arch/arm64/include/asm/cacheflush.h
> > @@ -34,54 +34,54 @@
> >  *	- start		- virtual start address
> >  *	- end		- virtual end address
> >  *
> > - *	__flush_icache_range(start, end)
> > + *	__clean_inval_cache_pou(start, end)
> >  *
> >  *	Ensure coherency between the I-cache and the D-cache region to
> >  *	the Point of Unification.
> >  *
> > - *	__flush_cache_user_range(start, end)
> > + *	__clean_inval_cache_user_pou(start, end)
> >  *
> >  *	Ensure coherency between the I-cache and the D-cache region to
> >  *	the Point of Unification.
> >  *	Use only if the region might access user memory.
> >  *
> > - *	invalidate_icache_range(start, end)
> > + *	__inval_icache_pou(start, end)
> >  *
> >  *	Invalidate I-cache region to the Point of Unification.
> >  *
> > - *	__flush_dcache_area(start, end)
> > + *	__clean_inval_dcache_poc(start, end)
> >  *
> >  *	Clean and invalidate D-cache region to the Point of Coherence.
> >  *
> > - *	__inval_dcache_area(start, end)
> > + *	__inval_dcache_poc(start, end)
> >  *
> >  *	Invalidate D-cache region to the Point of Coherence.
> >  *
> > - *	__clean_dcache_area_poc(start, end)
> > + *	__clean_dcache_poc(start, end)
> >  *
> >  *	Clean D-cache region to the Point of Coherence.
> >  *
> > - *	__clean_dcache_area_pop(start, end)
> > + *	__clean_dcache_pop(start, end)
> >  *
> >  *	Clean D-cache region to the Point of Persistence.
> >  *
> > - *	__clean_dcache_area_pou(start, end)
> > + *	__clean_dcache_pou(start, end)
> >  *
> >  *	Clean D-cache region to the Point of Unification.
> >  */
> > -extern void __flush_icache_range(unsigned long start, unsigned long end);
> > -extern void invalidate_icache_range(unsigned long start, unsigned long end);
> > -extern void __flush_dcache_area(unsigned long start, unsigned long end);
> > -extern void __inval_dcache_area(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
> > -extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> > +extern void __clean_inval_cache_pou(unsigned long start, unsigned long end);
> > +extern void __inval_icache_pou(unsigned long start, unsigned long end);
> > +extern void __clean_inval_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __inval_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_pop(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_pou(unsigned long start, unsigned long end);
> > +extern long __clean_inval_cache_user_pou(unsigned long start, unsigned long end);
> >  extern void sync_icache_aliases(unsigned long start, unsigned long end);
> >
> >  static inline void flush_icache_range(unsigned long start, unsigned long end)
> >  {
> > -	__flush_icache_range(start, end);
> > +	__clean_inval_cache_pou(start, end);
> >
> >  	/*
> >  	 * IPI all online CPUs so that they undergo a context synchronization
> > @@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
> >  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE	1
> >  extern void flush_dcache_page(struct page *);
> >
> > -static __always_inline void __flush_icache_all(void)
> > +static __always_inline void __clean_inval_all_icache_pou(void)
> >  {
> >  	if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
> >  		return;
> > diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
> > index 0ae2397076fd..d1e2a4bf8def 100644
> > --- a/arch/arm64/include/asm/efi.h
> > +++ b/arch/arm64/include/asm/efi.h
> > @@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
> >
> >  static inline void efi_capsule_flush_cache_range(void *addr, int size)
> >  {
> > -	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> > +	__clean_inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
> >  }
> >
> >  #endif /* _ASM_EFI_H */
> > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > index 33293d5855af..29d2aa6f3940 100644
> > --- a/arch/arm64/include/asm/kvm_mmu.h
> > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > @@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
> >  struct kvm;
> >
> >  #define kvm_flush_dcache_to_poc(a,l)	\
> > -	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> > +	__clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
> >
> >  static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
> >  {
> > @@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
> >  {
> >  	if (icache_is_aliasing()) {
> >  		/* any kind of VIPT cache */
> > -		__flush_icache_all();
> > +		__clean_inval_all_icache_pou();
> >  	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> >  		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
> >  		void *va = page_address(pfn_to_page(pfn));
> >
> > -		invalidate_icache_range((unsigned long)va,
> > +		__inval_icache_pou((unsigned long)va,
> >  					(unsigned long)va + size);
> >  	}
> >  }
> > diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> > index c906d20c7b52..ea2d52fa9a0c 100644
> > --- a/arch/arm64/kernel/alternative.c
> > +++ b/arch/arm64/kernel/alternative.c
> > @@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
> >  	 */
> >  	if (!is_module) {
> >  		dsb(ish);
> > -		__flush_icache_all();
> > +		__clean_inval_all_icache_pou();
> >  		isb();
> >
> >  		/* Ignore ARM64_CB bit from feature mask */
> > diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
> > index 72e6a580290a..230506f460ec 100644
> > --- a/arch/arm64/kernel/efi-entry.S
> > +++ b/arch/arm64/kernel/efi-entry.S
> > @@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
> >  	 */
> >  	ldr	w1, =kernel_size
> >  	add	x1, x0, x1
> > -	bl	__clean_dcache_area_poc
> > +	bl	__clean_dcache_poc
> >  	ic	ialluis
> >
> >  	/*
> > @@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
> >  	 */
> >  	adr	x0, 0f
> >  	adr	x1, 3f
> > -	bl	__clean_dcache_area_poc
> > +	bl	__clean_dcache_poc
> >  0:
> >  	/* Turn off Dcache and MMU */
> >  	mrs	x0, CurrentEL
> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index 8df0ac8d9123..ea0447c5010a 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
> >  					// MMU off
> >
> >  	add	x1, x0, #0x20		// 4 x 8 bytes
> > -	b	__inval_dcache_area	// tail call
> > +	b	__inval_dcache_poc	// tail call
> >  SYM_CODE_END(preserve_boot_args)
> >
> >  /*
> > @@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> >  	 */
> >  	adrp	x0, init_pg_dir
> >  	adrp	x1, init_pg_end
> > -	bl	__inval_dcache_area
> > +	bl	__inval_dcache_poc
> >
> >  	/*
> >  	 * Clear the init page tables.
> > @@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> >
> >  	adrp	x0, idmap_pg_dir
> >  	adrp	x1, idmap_pg_end
> > -	bl	__inval_dcache_area
> > +	bl	__inval_dcache_poc
> >
> >  	adrp	x0, init_pg_dir
> >  	adrp	x1, init_pg_end
> > -	bl	__inval_dcache_area
> > +	bl	__inval_dcache_poc
> >
> >  	ret	x28
> >  SYM_FUNC_END(__create_page_tables)
> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > index b40ddce71507..ec871b24fd5b 100644
> > --- a/arch/arm64/kernel/hibernate.c
> > +++ b/arch/arm64/kernel/hibernate.c
> > @@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
> >  		return -ENOMEM;
> >
> >  	memcpy(page, src_start, length);
> > -	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
> > +	__clean_inval_cache_pou((unsigned long)page, (unsigned long)page + length);
> >  	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
> >  	if (rc)
> >  		return rc;
> > @@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
> >  		ret = swsusp_save();
> >  	} else {
> >  		/* Clean kernel core startup/idle code to PoC*/
> > -		__flush_dcache_area((unsigned long)__mmuoff_data_start,
> > +		__clean_inval_dcache_poc((unsigned long)__mmuoff_data_start,
> >  				    (unsigned long)__mmuoff_data_end);
> > -		__flush_dcache_area((unsigned long)__idmap_text_start,
> > +		__clean_inval_dcache_poc((unsigned long)__idmap_text_start,
> >  				    (unsigned long)__idmap_text_end);
> >
> >  		/* Clean kvm setup code to PoC? */
> >  		if (el2_reset_needed()) {
> > -			__flush_dcache_area(
> > +			__clean_inval_dcache_poc(
> >  				(unsigned long)__hyp_idmap_text_start,
> >  				(unsigned long)__hyp_idmap_text_end);
> > -			__flush_dcache_area((unsigned long)__hyp_text_start,
> > +			__clean_inval_dcache_poc((unsigned long)__hyp_text_start,
> >  					    (unsigned long)__hyp_text_end);
> >  		}
> >
> > @@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
> >  	 * The hibernate exit text contains a set of el2 vectors, that will
> >  	 * be executed at el2 with the mmu off in order to reload hyp-stub.
> >  	 */
> > -	__flush_dcache_area((unsigned long)hibernate_exit,
> > +	__clean_inval_dcache_poc((unsigned long)hibernate_exit,
> >  			    (unsigned long)hibernate_exit + exit_size);
> >
> >  	/*
> > diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
> > index 3dd515baf526..6b4b5727f2db 100644
> > --- a/arch/arm64/kernel/idreg-override.c
> > +++ b/arch/arm64/kernel/idreg-override.c
> > @@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
> >
> >  	for (i = 0; i < ARRAY_SIZE(regs); i++) {
> >  		if (regs[i]->override)
> > -			__flush_dcache_area((unsigned long)regs[i]->override,
> > +			__clean_inval_dcache_poc((unsigned long)regs[i]->override,
> >  					    (unsigned long)regs[i]->override +
> >  					    sizeof(*regs[i]->override));
> >  	}
> > diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> > index bcf3c2755370..14beda6a573d 100644
> > --- a/arch/arm64/kernel/image-vars.h
> > +++ b/arch/arm64/kernel/image-vars.h
> > @@ -35,7 +35,7 @@ __efistub_strnlen = __pi_strnlen;
> >  __efistub_strcmp	= __pi_strcmp;
> >  __efistub_strncmp	= __pi_strncmp;
> >  __efistub_strrchr	= __pi_strrchr;
> > -__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
> > +__efistub___clean_dcache_poc = __pi___clean_dcache_poc;
> >
> >  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> >  __efistub___memcpy	= __pi_memcpy;
> > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > index 6c0de2f60ea9..11c7be09e305 100644
> > --- a/arch/arm64/kernel/insn.c
> > +++ b/arch/arm64/kernel/insn.c
> > @@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
> >
> >  	ret = aarch64_insn_write(tp, insn);
> >  	if (ret == 0)
> > -		__flush_icache_range((uintptr_t)tp,
> > +		__clean_inval_cache_pou((uintptr_t)tp,
> >  				     (uintptr_t)tp + AARCH64_INSN_SIZE);
> >
> >  	return ret;
> > diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> > index 49cccd03cb37..038a4cc7de93 100644
> > --- a/arch/arm64/kernel/kaslr.c
> > +++ b/arch/arm64/kernel/kaslr.c
> > @@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
> >  	 * we end up running with module randomization disabled.
> >  	 */
> >  	module_alloc_base = (u64)_etext - MODULES_VSIZE;
> > -	__flush_dcache_area((unsigned long)&module_alloc_base,
> > +	__clean_inval_dcache_poc((unsigned long)&module_alloc_base,
> >  			    (unsigned long)&module_alloc_base +
> >  			    sizeof(module_alloc_base));
> >
> > @@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
> >  	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
> >  	module_alloc_base &= PAGE_MASK;
> >
> > -	__flush_dcache_area((unsigned long)&module_alloc_base,
> > +	__clean_inval_dcache_poc((unsigned long)&module_alloc_base,
> >  			    (unsigned long)&module_alloc_base +
> >  			    sizeof(module_alloc_base));
> > -	__flush_dcache_area((unsigned long)&memstart_offset_seed,
> > +	__clean_inval_dcache_poc((unsigned long)&memstart_offset_seed,
> >  			    (unsigned long)&memstart_offset_seed +
> >  			    sizeof(memstart_offset_seed));
> >
> > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > index 4cada9000acf..0e20a789b03e 100644
> > --- a/arch/arm64/kernel/machine_kexec.c
> > +++ b/arch/arm64/kernel/machine_kexec.c
> > @@ -69,10 +69,10 @@ int machine_kexec_post_load(struct kimage *kimage)
> >  	kexec_image_info(kimage);
> >
> >  	/* Flush the reloc_code in preparation for its execution. */
> > -	__flush_dcache_area((unsigned long)reloc_code,
> > +	__clean_inval_dcache_poc((unsigned long)reloc_code,
> >  			    (unsigned long)reloc_code +
> >  			    arm64_relocate_new_kernel_size);
> > -	invalidate_icache_range((uintptr_t)reloc_code,
> > +	__inval_icache_pou((uintptr_t)reloc_code,
> >  			    (uintptr_t)reloc_code +
> >  			    arm64_relocate_new_kernel_size);
> >
> > @@ -108,7 +108,7 @@ static void kexec_list_flush(struct kimage *kimage)
> >  		unsigned long addr;
> >
> >  		/* flush the list entries. */
> > -		__flush_dcache_area((unsigned long)entry,
> > +		__clean_inval_dcache_poc((unsigned long)entry,
> >  				    (unsigned long)entry +
> >  				    sizeof(kimage_entry_t));
> >
> > @@ -125,7 +125,7 @@ static void kexec_list_flush(struct kimage *kimage)
> >  			break;
> >  		case IND_SOURCE:
> >  			/* flush the source pages. */
> > -			__flush_dcache_area(addr, addr + PAGE_SIZE);
> > +			__clean_inval_dcache_poc(addr, addr + PAGE_SIZE);
> >  			break;
> >  		case IND_DESTINATION:
> >  			break;
> > @@ -152,7 +152,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
> >  				kimage->segment[i].memsz,
> >  				kimage->segment[i].memsz / PAGE_SIZE);
> >
> > -		__flush_dcache_area(
> > +		__clean_inval_dcache_poc(
> >  			(unsigned long)phys_to_virt(kimage->segment[i].mem),
> >  			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
> >  			kimage->segment[i].memsz);
> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > index 5fcdee331087..2044210ed15a 100644
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> >  	secondary_data.task = idle;
> >  	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
> >  	update_cpu_boot_status(CPU_MMU_OFF);
> > -	__flush_dcache_area((unsigned long)&secondary_data,
> > +	__clean_inval_dcache_poc((unsigned long)&secondary_data,
> >  			    (unsigned long)&secondary_data +
> >  			    sizeof(secondary_data));
> >
> > @@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> >  		pr_crit("CPU%u: failed to come online\n", cpu);
> >  		secondary_data.task = NULL;
> >  		secondary_data.stack = NULL;
> > -		__flush_dcache_area((unsigned long)&secondary_data,
> > +		__clean_inval_dcache_poc((unsigned long)&secondary_data,
> >  				    (unsigned long)&secondary_data +
> >  				    sizeof(secondary_data));
> >  		status = READ_ONCE(secondary_data.status);
> > diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> > index 58d804582a35..a946ccb9791e 100644
> > --- a/arch/arm64/kernel/smp_spin_table.c
> > +++ b/arch/arm64/kernel/smp_spin_table.c
> > @@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
> >  	unsigned long size = sizeof(secondary_holding_pen_release);
> >
> >  	secondary_holding_pen_release = val;
> > -	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
> > +	__clean_inval_dcache_poc((unsigned long)start, (unsigned long)start + size);
> >  }
> >
> >
> > @@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
> >  	 * the boot protocol.
> >  	 */
> >  	writeq_relaxed(pa_holding_pen, release_addr);
> > -	__flush_dcache_area((__force unsigned long)release_addr,
> > +	__clean_inval_dcache_poc((__force unsigned long)release_addr,
> >  			    (__force unsigned long)release_addr +
> >  			    sizeof(*release_addr));
> >
> > diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
> > index 265fe3eb1069..fdd415f8d841 100644
> > --- a/arch/arm64/kernel/sys_compat.c
> > +++ b/arch/arm64/kernel/sys_compat.c
> > @@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
> >  			dsb(ish);
> >  		}
> >
> > -		ret = __flush_cache_user_range(start, start + chunk);
> > +		ret = __clean_inval_cache_user_pou(start, start + chunk);
> >  		if (ret)
> >  			return ret;
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 1cb39c0803a4..edeca89405ff 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> >  		if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
> >  			stage2_unmap_vm(vcpu->kvm);
> >  		else
> > -			__flush_icache_all();
> > +			__clean_inval_all_icache_pou();
> >  	}
> >
> >  	vcpu_reset_hcr(vcpu);
> > diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
> > index 36cef6915428..a906dd596e66 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/cache.S
> > +++ b/arch/arm64/kvm/hyp/nvhe/cache.S
> > @@ -7,7 +7,7 @@
> >  #include
> >  #include
> >
> > -SYM_FUNC_START_PI(__flush_dcache_area)
> > +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
> >  	dcache_by_line_op civac, sy, x0, x1, x2, x3
> >  	ret
> > -SYM_FUNC_END_PI(__flush_dcache_area)
> > +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> > index 5dffe928f256..a16719f5068d 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> > @@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
> >  	for (i = 0; i < hyp_nr_cpus; i++) {
> >  		params = per_cpu_ptr(&kvm_init_params, i);
> >  		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
> > -		__flush_dcache_area((unsigned long)params,
> > +		__clean_inval_dcache_poc((unsigned long)params,
> >  				    (unsigned long)params + sizeof(*params));
> >  	}
> >  }
> > diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> > index 83dc3b271bc5..184c9c7c13bd 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> > @@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
> >  	 * you should be running with VHE enabled.
> >  	 */
> >  	if (icache_is_vpipt())
> > -		__flush_icache_all();
> > +		__clean_inval_all_icache_pou();
> >
> >  	__tlb_switch_to_host(&cxt);
> >  }
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index 10d2f04013d4..fb2613f458de 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> >  	if (need_flush) {
> >  		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
> >
> > -		__flush_dcache_area((unsigned long)pte_follow,
> > +		__clean_inval_dcache_poc((unsigned long)pte_follow,
> >  				    (unsigned long)pte_follow +
> >  				    kvm_granule_size(level));
> >  	}
> > @@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> >  		return 0;
> >
> >  	pte_follow = kvm_pte_follow(pte, mm_ops);
> > -	__flush_dcache_area((unsigned long)pte_follow,
> > +	__clean_inval_dcache_poc((unsigned long)pte_follow,
> >  			    (unsigned long)pte_follow +
> >  			    kvm_granule_size(level));
> >  	return 0;
> > diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
> > index 62ea989effe8..b1a6d9823864 100644
> > --- a/arch/arm64/lib/uaccess_flushcache.c
> > +++ b/arch/arm64/lib/uaccess_flushcache.c
> > @@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
> >  	 * barrier to order the cache maintenance against the memcpy.
> >  	 */
> >  	memcpy(dst, src, cnt);
> > -	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
> > +	__clean_dcache_pop((unsigned long)dst, (unsigned long)dst + cnt);
> >  }
> >  EXPORT_SYMBOL_GPL(memcpy_flushcache);
> >
> > @@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
> >  	rc = raw_copy_from_user(to, from, n);
> >
> >  	/* See above */
> > -	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
> > +	__clean_dcache_pop((unsigned long)to, (unsigned long)to + n - rc);
> >  	return rc;
> >  }
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index d8434e57fab3..2df7212de799 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -15,7 +15,7 @@
> >  #include
> >
> >  /*
> > - * __flush_cache_range(start,end) [needs_uaccess]
> > + * __clean_inval_cache_pou_macro(start,end) [needs_uaccess]
> >  *
> >  * Ensure that the I and D caches are coherent within specified region.
> >  * This is typically used when code has been written to a memory region,
> > @@ -25,7 +25,7 @@
> >  *	- start         - virtual start address of region
> >  *	- end           - virtual end address of region
> >  *	- needs_uaccess - (macro parameter) might access user space memory
> >  */
> > -.macro	__flush_cache_range, needs_uaccess
> > +.macro	__clean_inval_cache_pou_macro, needs_uaccess
> >  	.if	\needs_uaccess
> >  	uaccess_ttbr0_enable x2, x3, x4
> >  	.endif
> > @@ -77,12 +77,12 @@ alternative_else_nop_endif
> >  *	- start - virtual start address of region
> >  *	- end   - virtual end address of region
> >  */
> > -SYM_FUNC_START(__flush_icache_range)
> > -	__flush_cache_range needs_uaccess=0
> > -SYM_FUNC_END(__flush_icache_range)
> > +SYM_FUNC_START(__clean_inval_cache_pou)
> > +	__clean_inval_cache_pou_macro needs_uaccess=0
> > +SYM_FUNC_END(__clean_inval_cache_pou)
> >
> >  /*
> > - * __flush_cache_user_range(start,end)
> > + * __clean_inval_cache_user_pou(start,end)
> >  *
> >  * Ensure that the I and D caches are coherent within specified region.
> >  * This is typically used when code has been written to a memory region,
> > @@ -91,19 +91,19 @@ SYM_FUNC_END(__flush_icache_range)
> >  *	- start - virtual start address of region
> >  *	- end   - virtual end address of region
> >  */
> > -SYM_FUNC_START(__flush_cache_user_range)
> > -	__flush_cache_range needs_uaccess=1
> > -SYM_FUNC_END(__flush_cache_user_range)
> > +SYM_FUNC_START(__clean_inval_cache_user_pou)
> > +	__clean_inval_cache_pou_macro needs_uaccess=1
> > +SYM_FUNC_END(__clean_inval_cache_user_pou)
> >
> >  /*
> > - * invalidate_icache_range(start,end)
> > + * __inval_icache_pou(start,end)
> >  *
> >  * Ensure that the I cache is invalid within specified region.
> >  *
> >  *	- start - virtual start address of region
> >  *	- end   - virtual end address of region
> >  */
> > -SYM_FUNC_START(invalidate_icache_range)
> > +SYM_FUNC_START(__inval_icache_pou)
> >  alternative_if ARM64_HAS_CACHE_DIC
> >  	isb
> >  	ret
> > @@ -111,10 +111,10 @@ alternative_else_nop_endif
> >
> >  	invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
> >  	ret
> > -SYM_FUNC_END(invalidate_icache_range)
> > +SYM_FUNC_END(__inval_icache_pou)
> >
> >  /*
> > - * __flush_dcache_area(start, end)
> > + * __clean_inval_dcache_poc(start, end)
> >  *
> >  * Ensure that any D-cache lines for the interval [start, end)
> >  * are cleaned and invalidated to the PoC.
> > @@ -122,13 +122,13 @@ SYM_FUNC_END(invalidate_icache_range)
> >  *	- start - virtual start address of region
> >  *	- end   - virtual end address of region
> >  */
> > -SYM_FUNC_START_PI(__flush_dcache_area)
> > +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
> >  	dcache_by_line_op civac, sy, x0, x1, x2, x3
> >  	ret
> > -SYM_FUNC_END_PI(__flush_dcache_area)
> > +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> >
> >  /*
> > - * __clean_dcache_area_pou(start, end)
> > + * __clean_dcache_pou(start, end)
> >  *
> >  * Ensure that any D-cache lines for the interval [start, end)
> >  * are cleaned to the PoU.
> > @@ -136,17 +136,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
> >  *	- start - virtual start address of region
> >  *	- end   - virtual end address of region
> >  */
> > -SYM_FUNC_START(__clean_dcache_area_pou)
> > +SYM_FUNC_START(__clean_dcache_pou)
> >  alternative_if ARM64_HAS_CACHE_IDC
> >  	dsb	ishst
> >  	ret
> >  alternative_else_nop_endif
> >  	dcache_by_line_op cvau, ish, x0, x1, x2, x3
> >  	ret
> > -SYM_FUNC_END(__clean_dcache_area_pou)
> > +SYM_FUNC_END(__clean_dcache_pou)
> >
> >  /*
> > - * __inval_dcache_area(start, end)
> > + * __inval_dcache_poc(start, end)
> >  *
> >  * Ensure that any D-cache lines for the interval [start, end)
> >  * are invalidated. Any partial lines at the ends of the interval are
> > @@ -156,7 +156,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
> >  *	- end   - kernel end address of region
> >  */
> >  SYM_FUNC_START_LOCAL(__dma_inv_area)
> > -SYM_FUNC_START_PI(__inval_dcache_area)
> > +SYM_FUNC_START_PI(__inval_dcache_poc)
> >  	/* FALLTHROUGH */
> >
> >  /*
> > @@ -181,11 +181,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
> >  	b.lo	2b
> >  	dsb	sy
> >  	ret
> > -SYM_FUNC_END_PI(__inval_dcache_area)
> > +SYM_FUNC_END_PI(__inval_dcache_poc)
> >  SYM_FUNC_END(__dma_inv_area)
> >
> >  /*
> > - * __clean_dcache_area_poc(start, end)
> > + * __clean_dcache_poc(start, end)
> >  *
> >  * Ensure that any D-cache lines for the interval [start, end)
> >  * are cleaned to the PoC.
> > @@ -194,7 +194,7 @@ SYM_FUNC_END(__dma_inv_area)
> >  *	- end   - virtual end address of region
> >  */
> >  SYM_FUNC_START_LOCAL(__dma_clean_area)
> > -SYM_FUNC_START_PI(__clean_dcache_area_poc)
> > +SYM_FUNC_START_PI(__clean_dcache_poc)
> >  	/* FALLTHROUGH */
> >
> >  /*
> > @@ -204,11 +204,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
> >  */
> >  	dcache_by_line_op cvac, sy, x0, x1, x2, x3
> >  	ret
> > -SYM_FUNC_END_PI(__clean_dcache_area_poc)
> > +SYM_FUNC_END_PI(__clean_dcache_poc)
> >  SYM_FUNC_END(__dma_clean_area)
> >
> >  /*
> > - * __clean_dcache_area_pop(start, end)
> > + * __clean_dcache_pop(start, end)
> >  *
> >  * Ensure that any D-cache lines for the interval [start, end)
> >  * are cleaned to the PoP.
> > @@ -216,13 +216,13 @@ SYM_FUNC_END(__dma_clean_area)
> >  *	- start - virtual start address of region
> >  *	- end   - virtual end address of region
> >  */
> > -SYM_FUNC_START_PI(__clean_dcache_area_pop)
> > +SYM_FUNC_START_PI(__clean_dcache_pop)
> >  alternative_if_not ARM64_HAS_DCPOP
> > -	b	__clean_dcache_area_poc
> > +	b	__clean_dcache_poc
> >  alternative_else_nop_endif
> >  	dcache_by_line_op cvap, sy, x0, x1, x2, x3
> >  	ret
> > -SYM_FUNC_END_PI(__clean_dcache_area_pop)
> > +SYM_FUNC_END_PI(__clean_dcache_pop)
> >
> >  /*
> >  * __dma_flush_area(start, size)
> > diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> > index 143f625e7727..005b92148252 100644
> > --- a/arch/arm64/mm/flush.c
> > +++ b/arch/arm64/mm/flush.c
> > @@ -17,14 +17,14 @@
> >  void sync_icache_aliases(unsigned long start, unsigned long end)
> >  {
> >  	if (icache_is_aliasing()) {
> > -		__clean_dcache_area_pou(start, end);
> > -		__flush_icache_all();
> > +		__clean_dcache_pou(start, end);
> > +		__clean_inval_all_icache_pou();
> >  	} else {
> >  		/*
> >  		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
> >  		 * for user mappings.
> >  		 */
> > -		__flush_icache_range(start, end);
> > +		__clean_inval_cache_pou(start, end);
> >  	}
> >  }
> >
> > @@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
> >
> >  /*
> >  * Additional functions defined in assembly.
> > */ > > -EXPORT_SYMBOL(__flush_icache_range); > > +EXPORT_SYMBOL(__clean_inval_cache_pou); > > > > #ifdef CONFIG_ARCH_HAS_PMEM_API > > void arch_wb_cache_pmem(void *addr, size_t size) > > { > > /* Ensure order against any prior non-cacheable writes */ > > dmb(osh); > > - __clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size); > > + __clean_dcache_pop((unsigned long)addr, (unsigned long)addr + size); > > } > > EXPORT_SYMBOL_GPL(arch_wb_cache_pmem); > > > > void arch_invalidate_pmem(void *addr, size_t size) > > { > > - __inval_dcache_area((unsigned long)addr, (unsigned long)addr + size); > > + __inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size); > > } > > EXPORT_SYMBOL_GPL(arch_invalidate_pmem); > > #endif > > -- > > 2.31.1.607.g51e8a6a459-goog > > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel