* [PATCH v2 00/16] Tidy up cache.S
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Hi,

Changes since v1 [1]:
- Apply ARM64_WORKAROUND_CLEAN_CACHE errata to swsusp_arch_suspend_exit (Mark)
- Remove toggling of uaccess from the newly created cache flush
  (clean/invalidate) macro and leave it up to the caller (Robin)
- Fix renaming of cache maintenance functions (Ard, Mark)
- Fix comment on maintenance operations in machine_kexec_post_load (Ard)
- Fix commit msg comments to clarify some of the changes and outline potential
  performance impact (Mark)
- Fix code comments that refer to flush_icache_range when the intended function
  is __flush_icache_range

As has been noted before [2], the code in cache.S isn't very tidy. Some of its
functions accept address ranges by start and size, whereas others with similar
names do so by start and end. This has resulted in at least one bug [3].
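
For illustration, the prototypes in cacheflush.h before this series mix
the two conventions (extract taken from the diffs later in the series):

	extern void __flush_icache_range(unsigned long start, unsigned long end);	/* start + end  */
	extern void __flush_dcache_area(void *addr, size_t len);			/* start + size */
	extern void __inval_dcache_area(void *addr, size_t len);			/* start + size */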

Moreover, invalidate_icache_range and __flush_icache_range toggle uaccess,
which isn't necessary because they work on the kernel linear map [4].

This patch series attempts to fix these issues, as well as tidy up the code in
general to reduce ambiguity and make it consistent with Arm terminology and
with the functions' actual operations.

No functional change is intended in this series. However, there may be a
performance impact due to the overall reduction in the number of
instructions.

This series is based on v5.13-rc1. You can find the applied series here [5].

Cheers,
/fuad

[1] https://lore.kernel.org/linux-arm-kernel/20210511144252.3779113-1-tabba@google.com/T/
[2] https://lore.kernel.org/linux-arch/20200511075115.GA16134@willie-the-truck/
[3] https://lore.kernel.org/linux-arch/20200510075510.987823-3-hch@lst.de/
[4] https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
[5] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/fixcache-5.13

Fuad Tabba (16):
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Remove uaccess toggle from __flush_cache_range macro
  arm64: Move documentation of dcache_by_line_op
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: Fix cache maintenance function comments
  arm64: Rename arm64-internal cache maintenance functions

 arch/arm64/include/asm/arch_gicv3.h |   3 +-
 arch/arm64/include/asm/assembler.h  |  52 ++++-----
 arch/arm64/include/asm/cacheflush.h |  69 +++++++-----
 arch/arm64/include/asm/efi.h        |   2 +-
 arch/arm64/include/asm/kvm_mmu.h    |   7 +-
 arch/arm64/kernel/alternative.c     |   2 +-
 arch/arm64/kernel/efi-entry.S       |   9 +-
 arch/arm64/kernel/head.S            |  13 +--
 arch/arm64/kernel/hibernate-asm.S   |   7 +-
 arch/arm64/kernel/hibernate.c       |  20 ++--
 arch/arm64/kernel/idreg-override.c  |   3 +-
 arch/arm64/kernel/image-vars.h      |   2 +-
 arch/arm64/kernel/insn.c            |   2 +-
 arch/arm64/kernel/kaslr.c           |  12 ++-
 arch/arm64/kernel/machine_kexec.c   |  30 ++++--
 arch/arm64/kernel/probes/uprobes.c  |   2 +-
 arch/arm64/kernel/smp.c             |   8 +-
 arch/arm64/kernel/smp_spin_table.c  |   7 +-
 arch/arm64/kernel/sys_compat.c      |   2 +-
 arch/arm64/kvm/arm.c                |   2 +-
 arch/arm64/kvm/hyp/nvhe/cache.S     |   4 +-
 arch/arm64/kvm/hyp/nvhe/setup.c     |   3 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c       |   2 +-
 arch/arm64/kvm/hyp/pgtable.c        |  13 ++-
 arch/arm64/lib/uaccess_flushcache.c |   4 +-
 arch/arm64/mm/cache.S               | 157 +++++++++++++++-------------
 arch/arm64/mm/flush.c               |  29 ++---
 27 files changed, 267 insertions(+), 199 deletions(-)


base-commit: 6efb943b8616ec53a5e444193dccf1af9ad627b5
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 01/16] arm64: Apply errata to swsusp_arch_suspend_exit
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

The Arm errata covered by ARM64_WORKAROUND_CLEAN_CACHE require
that "dc cvau" instructions be promoted to "dc civac".
swsusp_arch_suspend_exit issues "dc cvau" directly, bypassing the
workaround, so use alternative_insn to apply the promotion when
the errata capability is enabled.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kernel/hibernate-asm.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index 8ccca660034e..0ed2f72a6b94 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -91,7 +91,8 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
 	raw_dcache_line_size x2, x3
 	sub	x3, x2, #1
 	bic	x4, x10, x3
-2:	dc	cvau, x4	/* clean D line / unified line */
+2:	/* clean D line / unified line */
+alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
 	add	x4, x4, x2
 	cmp	x4, x1
 	b.lo	2b
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 02/16] arm64: Do not enable uaccess for flush_icache_range
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

__flush_icache_range() works on the kernel linear map, and
therefore doesn't need uaccess. The existing uaccess handling is a
side-effect of the current implementation, which falls through to
__flush_cache_user_range().

Instead of sharing the code via fallthrough, use a common macro
for the two functions, where the caller specifies whether
user-space access is needed.
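
As a rough C analogue of the resulting structure (hypothetical
names, for intuition only; the real implementation is the assembly
macro in the diff below):

	#include <stdbool.h>

	/* One parameterized body, two thin entry points. */
	static int do_cache_range(unsigned long start, unsigned long end,
				  bool needs_uaccess)
	{
		/* clean D-cache by line, then invalidate I-cache by line;
		 * only the needs_uaccess variant may fault on user addresses */
		(void)start; (void)end; (void)needs_uaccess;
		return 0;
	}

	int example_flush_icache_range(unsigned long start, unsigned long end)
	{
		return do_cache_range(start, end, false);	/* kernel addresses */
	}

	int example_flush_cache_user_range(unsigned long start, unsigned long end)
	{
		return do_cache_range(start, end, true);	/* user addresses */
	}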

No functional change intended.
Possible performance impact due to the reduced number of
instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/assembler.h | 13 ++++--
 arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
 2 files changed, 54 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 8418c1bd8f04..6ff7a3a3b238 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -426,16 +426,21 @@ alternative_endif
  * Macro to perform an instruction cache maintenance for the interval
  * [start, end)
  *
- * 	start, end:	virtual addresses describing the region
- *	label:		A label to branch to on user fault.
- * 	Corrupts:	tmp1, tmp2
+ *	start, end:	virtual addresses describing the region
+ *	needs_uaccess:	might access user space memory
+ *	label:		label to branch to on user fault (if needs_uaccess)
+ *	Corrupts:	tmp1, tmp2
  */
-	.macro invalidate_icache_by_line start, end, tmp1, tmp2, label
+	.macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
 	icache_line_size \tmp1, \tmp2
 	sub	\tmp2, \tmp1, #1
 	bic	\tmp2, \start, \tmp2
 9997:
+	.if	\needs_uaccess
 USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
+	.else
+	ic	ivau, \tmp2
+	.endif
 	add	\tmp2, \tmp2, \tmp1
 	cmp	\tmp2, \end
 	b.lo	9997b
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 2d881f34dd9d..092f73acdf9a 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -15,30 +15,20 @@
 #include <asm/asm-uaccess.h>
 
 /*
- *	flush_icache_range(start,end)
+ *	__flush_cache_range(start,end) [needs_uaccess]
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
  *	and will be executed.
  *
- *	- start   - virtual start address of region
- *	- end     - virtual end address of region
+ *	- start   	- virtual start address of region
+ *	- end     	- virtual end address of region
+ *	- needs_uaccess - (macro parameter) might access user space memory
  */
-SYM_FUNC_START(__flush_icache_range)
-	/* FALLTHROUGH */
-
-/*
- *	__flush_cache_user_range(start,end)
- *
- *	Ensure that the I and D caches are coherent within specified region.
- *	This is typically used when code has been written to a memory region,
- *	and will be executed.
- *
- *	- start   - virtual start address of region
- *	- end     - virtual end address of region
- */
-SYM_FUNC_START(__flush_cache_user_range)
+.macro	__flush_cache_range, needs_uaccess
+	.if 	\needs_uaccess
 	uaccess_ttbr0_enable x2, x3, x4
+	.endif
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	b	7f
@@ -47,7 +37,11 @@ alternative_else_nop_endif
 	sub	x3, x2, #1
 	bic	x4, x0, x3
 1:
+	.if 	\needs_uaccess
 user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
+	.else
+alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
+	.endif
 	add	x4, x4, x2
 	cmp	x4, x1
 	b.lo	1b
@@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
 	isb
 	b	8f
 alternative_else_nop_endif
-	invalidate_icache_by_line x0, x1, x2, x3, 9f
+	invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
 8:	mov	x0, #0
 1:
+	.if	\needs_uaccess
 	uaccess_ttbr0_disable x1, x2
+	.endif
 	ret
+
+	.if 	\needs_uaccess
 9:
 	mov	x0, #-EFAULT
 	b	1b
+	.endif
+.endm
+
+/*
+ *	flush_icache_range(start,end)
+ *
+ *	Ensure that the I and D caches are coherent within specified region.
+ *	This is typically used when code has been written to a memory region,
+ *	and will be executed.
+ *
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
+ */
+SYM_FUNC_START(__flush_icache_range)
+	__flush_cache_range needs_uaccess=0
 SYM_FUNC_END(__flush_icache_range)
+
+/*
+ *	__flush_cache_user_range(start,end)
+ *
+ *	Ensure that the I and D caches are coherent within specified region.
+ *	This is typically used when code has been written to a memory region,
+ *	and will be executed.
+ *
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
+ */
+SYM_FUNC_START(__flush_cache_user_range)
+	__flush_cache_range needs_uaccess=1
 SYM_FUNC_END(__flush_cache_user_range)
 
 /*
@@ -86,7 +112,7 @@ alternative_else_nop_endif
 
 	uaccess_ttbr0_enable x2, x3, x4
 
-	invalidate_icache_by_line x0, x1, x2, x3, 2f
+	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
 	mov	x0, xzr
 1:
 	uaccess_ttbr0_disable x1, x2
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 03/16] arm64: Do not enable uaccess for invalidate_icache_range
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

invalidate_icache_range() works on the kernel linear map, and
doesn't need uaccess. Remove the uaccess_ttbr0_enable/disable
toggling, as well as the code that emits an entry into the
exception table (via the invalidate_icache_by_line macro).

Also change the return type of invalidate_icache_range() from int
(which used to indicate a fault) to void, since it no longer
accesses user space and therefore cannot fault. Note that the
return value was never checked by any of the callers.

No functional change intended.
Possible performance impact due to the reduced number of
instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/mm/cache.S               | 11 +----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 52e5c1623224..a586afa84172 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -57,7 +57,7 @@
  *		- size   - region size
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
-extern int  invalidate_icache_range(unsigned long start, unsigned long end);
+extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
 extern void __inval_dcache_area(void *addr, size_t len);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 092f73acdf9a..6babaaf34f17 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -105,21 +105,12 @@ SYM_FUNC_END(__flush_cache_user_range)
  */
 SYM_FUNC_START(invalidate_icache_range)
 alternative_if ARM64_HAS_CACHE_DIC
-	mov	x0, xzr
 	isb
 	ret
 alternative_else_nop_endif
 
-	uaccess_ttbr0_enable x2, x3, x4
-
-	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
-	mov	x0, xzr
-1:
-	uaccess_ttbr0_disable x1, x2
+	invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
 	ret
-2:
-	mov	x0, #-EFAULT
-	b	1b
 SYM_FUNC_END(invalidate_icache_range)
 
 /*
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 04/16] arm64: Downgrade flush_icache_range to invalidate
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

machine_kexec_post_load() flushes reloc_code with
flush_icache_range(), which cleans the D-cache in addition to
invalidating the I-cache. Since __flush_dcache_area() is called
immediately beforehand, invalidate_icache_range() is sufficient in
this case.

Rewrite the comment to better explain the rationale behind the
cache maintenance operations used here.

No functional change intended.
Possible performance impact: only the I-cache is invalidated,
rather than the D-cache also being cleaned a second time.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kernel/machine_kexec.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 90a335c74442..ecd8915e02e1 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -68,10 +68,14 @@ int machine_kexec_post_load(struct kimage *kimage)
 	kimage->arch.kern_reloc = __pa(reloc_code);
 	kexec_image_info(kimage);
 
-	/* Flush the reloc_code in preparation for its execution. */
+	/*
+	 * For execution with the MMU off and I-cache on, reloc_code needs to be
+	 * cleaned to the PoC and invalidated from the I-cache.
+	 */
 	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
-	flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
-			   arm64_relocate_new_kernel_size);
+	invalidate_icache_range((uintptr_t)reloc_code,
+				(uintptr_t)reloc_code +
+					arm64_relocate_new_kernel_size);
 
 	return 0;
 }
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 05/16] arm64: Remove uaccess toggle from __flush_cache_range macro
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

The uaccess toggle isn't part of the cache maintenance operation
itself. Move it out of the __flush_cache_range macro and into
__flush_cache_user_range(), the only caller that needs it.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/mm/cache.S | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 6babaaf34f17..d74b20cd6449 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -26,9 +26,6 @@
  *	- needs_uaccess - (macro parameter) might access user space memory
  */
 .macro	__flush_cache_range, needs_uaccess
-	.if 	\needs_uaccess
-	uaccess_ttbr0_enable x2, x3, x4
-	.endif
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	b	7f
@@ -55,9 +52,6 @@ alternative_else_nop_endif
 	invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
 8:	mov	x0, #0
 1:
-	.if	\needs_uaccess
-	uaccess_ttbr0_disable x1, x2
-	.endif
 	ret
 
 	.if 	\needs_uaccess
@@ -92,7 +86,9 @@ SYM_FUNC_END(__flush_icache_range)
  *	- end     - virtual end address of region
  */
 SYM_FUNC_START(__flush_cache_user_range)
+	uaccess_ttbr0_enable x2, x3, x4
 	__flush_cache_range needs_uaccess=1
+	uaccess_ttbr0_disable x1, x2
 SYM_FUNC_END(__flush_cache_user_range)
 
 /*
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 06/16] arm64: Move documentation of dcache_by_line_op
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

The comment describing the dcache_by_line_op macro is placed above
the macro that precedes it (__dcache_op_workaround_clean_cache)
rather than above the one it describes, which is confusing. Move
it so that it sits directly above dcache_by_line_op.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/assembler.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 6ff7a3a3b238..2bcfc5fdfafd 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -375,6 +375,14 @@ alternative_cb_end
 	bfi	\tcr, \tmp0, \pos, #3
 	.endm
 
+	.macro __dcache_op_workaround_clean_cache, op, kaddr
+alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
+	dc	\op, \kaddr
+alternative_else
+	dc	civac, \kaddr
+alternative_endif
+	.endm
+
 /*
  * Macro to perform a data cache maintenance for the interval
  * [kaddr, kaddr + size)
@@ -385,14 +393,6 @@ alternative_cb_end
  * 	size:		size of the region
  * 	Corrupts:	kaddr, size, tmp1, tmp2
  */
-	.macro __dcache_op_workaround_clean_cache, op, kaddr
-alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
-	dc	\op, \kaddr
-alternative_else
-	dc	civac, \kaddr
-alternative_endif
-	.endm
-
 	.macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
 	dcache_line_size \tmp1, \tmp2
 	add	\size, \kaddr, \size
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 07/16] arm64: Fix comments to refer to correct function __flush_icache_range
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Many comments refer to the function flush_icache_range, where the
intent is in fact __flush_icache_range. Fix these comments to
refer to the intended function.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kernel/hibernate-asm.S | 4 ++--
 arch/arm64/mm/cache.S             | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index 0ed2f72a6b94..ef2ab7caf815 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -45,7 +45,7 @@
  * Because this code has to be copied to a 'safe' page, it can't call out to
  * other functions by PC-relative address. Also remember that it may be
  * mid-way through over-writing other functions. For this reason it contains
- * code from flush_icache_range() and uses the copy_page() macro.
+ * code from __flush_icache_range() and uses the copy_page() macro.
  *
  * This 'safe' page is mapped via ttbr0, and executed from there. This function
  * switches to a copy of the linear map in ttbr1, performs the restore, then
@@ -87,7 +87,7 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
 	copy_page	x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
 
 	add	x1, x10, #PAGE_SIZE
-	/* Clean the copied page to PoU - based on flush_icache_range() */
+	/* Clean the copied page to PoU - based on __flush_icache_range() */
 	raw_dcache_line_size x2, x3
 	sub	x3, x2, #1
 	bic	x4, x10, x3
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index d74b20cd6449..8920f63442ae 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -62,7 +62,7 @@ alternative_else_nop_endif
 .endm
 
 /*
- *	flush_icache_range(start,end)
+ *	__flush_icache_range(start,end)
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 08/16] arm64: __inval_dcache_area to take end parameter instead of size
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
__inval_dcache_area() to specify the range in terms of start and
end, as opposed to start and size.
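
At a call site, the conversion looks like this (taken from the
arch_invalidate_pmem() hunk below):

	/* before: start + size */
	__inval_dcache_area(addr, size);

	/* after: start + end */
	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);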

Because the code is shared with __dma_inv_area, this changes the
parameters of that function as well. However, __dma_inv_area is
local to cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/head.S            |  5 +----
 arch/arm64/mm/cache.S               | 16 +++++++++-------
 arch/arm64/mm/flush.c               |  2 +-
 4 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index a586afa84172..157234706817 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -59,7 +59,7 @@
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
-extern void __inval_dcache_area(void *addr, size_t len);
+extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 96873dfa67fd..8df0ac8d9123 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -117,7 +117,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	dmb	sy				// needed before dc ivac with
 						// MMU off
 
-	mov	x1, #0x20			// 4 x 8 bytes
+	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	__inval_dcache_area		// tail call
 SYM_CODE_END(preserve_boot_args)
 
@@ -268,7 +268,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 */
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	/*
@@ -382,12 +381,10 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 
 	adrp	x0, idmap_pg_dir
 	adrp	x1, idmap_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	ret	x28
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 8920f63442ae..16660cbc45bf 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -142,25 +142,24 @@ alternative_else_nop_endif
 SYM_FUNC_END(__clean_dcache_area_pou)
 
 /*
- *	__inval_dcache_area(kaddr, size)
+ *	__inval_dcache_area(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are invalidated. Any partial lines at the ends of the interval are
  *	also cleaned to PoC to prevent data loss.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - kernel start address of region
+ *	- end     - kernel end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_inv_area)
 SYM_FUNC_START_PI(__inval_dcache_area)
 	/* FALLTHROUGH */
 
 /*
- *	__dma_inv_area(start, size)
+ *	__dma_inv_area(start, end)
  *	- start   - virtual start address of region
- *	- size    - size in question
+ *	- end     - virtual end address of region
  */
-	add	x1, x1, x0
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
 	tst	x1, x3				// end cache line aligned?
@@ -241,8 +240,10 @@ SYM_FUNC_END_PI(__dma_flush_area)
  *	- dir	- DMA direction
  */
 SYM_FUNC_START_PI(__dma_map_area)
+	add	x1, x0, x1
 	cmp	w2, #DMA_FROM_DEVICE
 	b.eq	__dma_inv_area
+	sub	x1, x1, x0
 	b	__dma_clean_area
 SYM_FUNC_END_PI(__dma_map_area)
 
@@ -253,6 +254,7 @@ SYM_FUNC_END_PI(__dma_map_area)
  *	- dir	- DMA direction
  */
 SYM_FUNC_START_PI(__dma_unmap_area)
+	add	x1, x0, x1
 	cmp	w2, #DMA_TO_DEVICE
 	b.ne	__dma_inv_area
 	ret
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index ac485163a4a7..4e3505c2bea6 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -88,7 +88,7 @@ EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
 void arch_invalidate_pmem(void *addr, size_t size)
 {
-	__inval_dcache_area(addr, size);
+	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 09/16] arm64: dcache_by_line_op to take end parameter instead of size
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
the dcache_by_line_op macro to specify the range in terms of start
and end, as opposed to start and size.

No functional change intended.
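
Conceptually, the by-line loop that the macro implements over
[start, end) looks like this (C sketch, illustrative only):

	/* Round start down to a cache-line boundary, then apply the
	 * maintenance operation one line at a time until reaching end. */
	static void dcache_op_by_line(unsigned long start, unsigned long end,
				      unsigned long line_size,
				      void (*dc_op)(unsigned long))
	{
		unsigned long addr = start & ~(line_size - 1);

		for (; addr < end; addr += line_size)
			dc_op(addr);	/* e.g. dc cvac/cvap/civac on addr */
	}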

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/assembler.h | 27 +++++++++++++--------------
 arch/arm64/kvm/hyp/nvhe/cache.S    |  1 +
 arch/arm64/mm/cache.S              |  5 +++++
 3 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 2bcfc5fdfafd..3f75a600e6c0 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -385,39 +385,38 @@ alternative_endif
 
 /*
  * Macro to perform a data cache maintenance for the interval
- * [kaddr, kaddr + size)
+ * [start, end)
  *
  * 	op:		operation passed to dc instruction
  * 	domain:		domain used in dsb instruciton
- * 	kaddr:		starting virtual address of the region
- * 	size:		size of the region
- * 	Corrupts:	kaddr, size, tmp1, tmp2
+ * 	start:		starting virtual address of the region
+ * 	end:		end virtual address of the region
+ * 	Corrupts:	start, end, tmp1, tmp2
  */
-	.macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
+	.macro dcache_by_line_op op, domain, start, end, tmp1, tmp2
 	dcache_line_size \tmp1, \tmp2
-	add	\size, \kaddr, \size
 	sub	\tmp2, \tmp1, #1
-	bic	\kaddr, \kaddr, \tmp2
+	bic	\start, \start, \tmp2
 9998:
 	.ifc	\op, cvau
-	__dcache_op_workaround_clean_cache \op, \kaddr
+	__dcache_op_workaround_clean_cache \op, \start
 	.else
 	.ifc	\op, cvac
-	__dcache_op_workaround_clean_cache \op, \kaddr
+	__dcache_op_workaround_clean_cache \op, \start
 	.else
 	.ifc	\op, cvap
-	sys	3, c7, c12, 1, \kaddr	// dc cvap
+	sys	3, c7, c12, 1, \start	// dc cvap
 	.else
 	.ifc	\op, cvadp
-	sys	3, c7, c13, 1, \kaddr	// dc cvadp
+	sys	3, c7, c13, 1, \start	// dc cvadp
 	.else
-	dc	\op, \kaddr
+	dc	\op, \start
 	.endif
 	.endif
 	.endif
 	.endif
-	add	\kaddr, \kaddr, \tmp1
-	cmp	\kaddr, \size
+	add	\start, \start, \tmp1
+	cmp	\start, \end
 	b.lo	9998b
 	dsb	\domain
 	.endm
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 36cef6915428..3bcfa3cac46f 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -8,6 +8,7 @@
 #include <asm/alternative.h>
 
 SYM_FUNC_START_PI(__flush_dcache_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 16660cbc45bf..b599c334a2e8 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -119,6 +119,7 @@ SYM_FUNC_END(invalidate_icache_range)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__flush_dcache_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
@@ -137,6 +138,7 @@ alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
+	add	x1, x0, x1
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
 SYM_FUNC_END(__clean_dcache_area_pou)
@@ -198,6 +200,7 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
  *	- start   - virtual start address of region
  *	- size    - size in question
  */
+	add	x1, x0, x1
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_poc)
@@ -216,6 +219,7 @@ SYM_FUNC_START_PI(__clean_dcache_area_pop)
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
+	add	x1, x0, x1
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_pop)
@@ -229,6 +233,7 @@ SYM_FUNC_END_PI(__clean_dcache_area_pop)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__dma_flush_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__dma_flush_area)
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 10/16] arm64: __flush_dcache_area to take end parameter instead of size
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
__flush_dcache_area() to specify the range in terms of start and
end, as opposed to start and size.

No functional change intended.
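
Callers that naturally hold an (addr, len) pair compute the end
address inline; wrapper macros absorb the change so their own
callers stay untouched, e.g. (extract from the kvm_mmu.h hunk
below):

	#define kvm_flush_dcache_to_poc(a,l)	\
		__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))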

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/arch_gicv3.h |  3 ++-
 arch/arm64/include/asm/cacheflush.h |  8 ++++----
 arch/arm64/include/asm/efi.h        |  2 +-
 arch/arm64/include/asm/kvm_mmu.h    |  3 ++-
 arch/arm64/kernel/hibernate.c       | 18 +++++++++++-------
 arch/arm64/kernel/idreg-override.c  |  3 ++-
 arch/arm64/kernel/kaslr.c           | 12 +++++++++---
 arch/arm64/kernel/machine_kexec.c   | 20 +++++++++++++-------
 arch/arm64/kernel/smp.c             |  8 ++++++--
 arch/arm64/kernel/smp_spin_table.c  |  7 ++++---
 arch/arm64/kvm/hyp/nvhe/cache.S     |  1 -
 arch/arm64/kvm/hyp/nvhe/setup.c     |  3 ++-
 arch/arm64/kvm/hyp/pgtable.c        | 13 ++++++++++---
 arch/arm64/mm/cache.S               |  9 ++++-----
 14 files changed, 70 insertions(+), 40 deletions(-)

diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index 934b9be582d2..ed1cc9d8e6df 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -124,7 +124,8 @@ static inline u32 gic_read_rpr(void)
 #define gic_read_lpir(c)		readq_relaxed(c)
 #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
 
-#define gic_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+#define gic_flush_dcache_to_poc(a,l)	\
+	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
 
 #define gits_read_baser(c)		readq_relaxed(c)
 #define gits_write_baser(v, c)		writeq_relaxed(v, c)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 157234706817..695f88864784 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -50,15 +50,15 @@
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	__flush_dcache_area(kaddr, size)
+ *	__flush_dcache_area(start, end)
  *
  *		Ensure that the data held in page is written back.
- *		- kaddr  - page address
- *		- size   - region size
+ *		- start  - virtual start address
+ *		- end    - virtual end address
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
-extern void __flush_dcache_area(void *addr, size_t len);
+extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 3578aba9c608..0ae2397076fd 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
 
 static inline void efi_capsule_flush_cache_range(void *addr, int size)
 {
-	__flush_dcache_area(addr, size);
+	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
 }
 
 #endif /* _ASM_EFI_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 25ed956f9af1..33293d5855af 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -180,7 +180,8 @@ static inline void *__kvm_vector_slot2addr(void *base,
 
 struct kvm;
 
-#define kvm_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+#define kvm_flush_dcache_to_poc(a,l)	\
+	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index b1cef371df2b..b40ddce71507 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -240,8 +240,6 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	return 0;
 }
 
-#define dcache_clean_range(start, end)	__flush_dcache_area(start, (end - start))
-
 #ifdef CONFIG_ARM64_MTE
 
 static DEFINE_XARRAY(mte_pages);
@@ -383,13 +381,18 @@ int swsusp_arch_suspend(void)
 		ret = swsusp_save();
 	} else {
 		/* Clean kernel core startup/idle code to PoC*/
-		dcache_clean_range(__mmuoff_data_start, __mmuoff_data_end);
-		dcache_clean_range(__idmap_text_start, __idmap_text_end);
+		__flush_dcache_area((unsigned long)__mmuoff_data_start,
+				    (unsigned long)__mmuoff_data_end);
+		__flush_dcache_area((unsigned long)__idmap_text_start,
+				    (unsigned long)__idmap_text_end);
 
 		/* Clean kvm setup code to PoC? */
 		if (el2_reset_needed()) {
-			dcache_clean_range(__hyp_idmap_text_start, __hyp_idmap_text_end);
-			dcache_clean_range(__hyp_text_start, __hyp_text_end);
+			__flush_dcache_area(
+				(unsigned long)__hyp_idmap_text_start,
+				(unsigned long)__hyp_idmap_text_end);
+			__flush_dcache_area((unsigned long)__hyp_text_start,
+					    (unsigned long)__hyp_text_end);
 		}
 
 		swsusp_mte_restore_tags();
@@ -474,7 +477,8 @@ int swsusp_arch_resume(void)
 	 * The hibernate exit text contains a set of el2 vectors, that will
 	 * be executed at el2 with the mmu off in order to reload hyp-stub.
 	 */
-	__flush_dcache_area(hibernate_exit, exit_size);
+	__flush_dcache_area((unsigned long)hibernate_exit,
+			    (unsigned long)hibernate_exit + exit_size);
 
 	/*
 	 * KASLR will cause the el2 vectors to be in a different location in
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index e628c8ce1ffe..3dd515baf526 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -237,7 +237,8 @@ asmlinkage void __init init_feature_override(void)
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		if (regs[i]->override)
-			__flush_dcache_area(regs[i]->override,
+			__flush_dcache_area((unsigned long)regs[i]->override,
+					    (unsigned long)regs[i]->override +
 					    sizeof(*regs[i]->override));
 	}
 }
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 341342b207f6..49cccd03cb37 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -72,7 +72,9 @@ u64 __init kaslr_early_init(void)
 	 * we end up running with module randomization disabled.
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
-	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
+	__flush_dcache_area((unsigned long)&module_alloc_base,
+			    (unsigned long)&module_alloc_base +
+				    sizeof(module_alloc_base));
 
 	/*
 	 * Try to map the FDT early. If this fails, we simply bail,
@@ -170,8 +172,12 @@ u64 __init kaslr_early_init(void)
 	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
 	module_alloc_base &= PAGE_MASK;
 
-	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
-	__flush_dcache_area(&memstart_offset_seed, sizeof(memstart_offset_seed));
+	__flush_dcache_area((unsigned long)&module_alloc_base,
+			    (unsigned long)&module_alloc_base +
+				    sizeof(module_alloc_base));
+	__flush_dcache_area((unsigned long)&memstart_offset_seed,
+			    (unsigned long)&memstart_offset_seed +
+				    sizeof(memstart_offset_seed));
 
 	return offset;
 }
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index ecd8915e02e1..eadb0b189348 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -72,7 +72,9 @@ int machine_kexec_post_load(struct kimage *kimage)
 	 * For execution with the MMU off and I-cache on, reloc_code needs to be
 	 * cleaned to the PoC and invalidated from the I-cache.
 	 */
-	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
+	__flush_dcache_area((unsigned long)reloc_code,
+			    (unsigned long)reloc_code +
+				    arm64_relocate_new_kernel_size);
 	invalidate_icache_range((uintptr_t)reloc_code,
 				(uintptr_t)reloc_code +
 					arm64_relocate_new_kernel_size);
@@ -106,16 +108,18 @@ static void kexec_list_flush(struct kimage *kimage)
 
 	for (entry = &kimage->head; ; entry++) {
 		unsigned int flag;
-		void *addr;
+		unsigned long addr;
 
 		/* flush the list entries. */
-		__flush_dcache_area(entry, sizeof(kimage_entry_t));
+		__flush_dcache_area((unsigned long)entry,
+				    (unsigned long)entry +
+					    sizeof(kimage_entry_t));
 
 		flag = *entry & IND_FLAGS;
 		if (flag == IND_DONE)
 			break;
 
-		addr = phys_to_virt(*entry & PAGE_MASK);
+		addr = (unsigned long)phys_to_virt(*entry & PAGE_MASK);
 
 		switch (flag) {
 		case IND_INDIRECTION:
@@ -124,7 +128,7 @@ static void kexec_list_flush(struct kimage *kimage)
 			break;
 		case IND_SOURCE:
 			/* flush the source pages. */
-			__flush_dcache_area(addr, PAGE_SIZE);
+			__flush_dcache_area(addr, addr + PAGE_SIZE);
 			break;
 		case IND_DESTINATION:
 			break;
@@ -151,8 +155,10 @@ static void kexec_segment_flush(const struct kimage *kimage)
 			kimage->segment[i].memsz,
 			kimage->segment[i].memsz /  PAGE_SIZE);
 
-		__flush_dcache_area(phys_to_virt(kimage->segment[i].mem),
-			kimage->segment[i].memsz);
+		__flush_dcache_area(
+			(unsigned long)phys_to_virt(kimage->segment[i].mem),
+			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
+				kimage->segment[i].memsz);
 	}
 }
 
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index dcd7041b2b07..5fcdee331087 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -122,7 +122,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
 	update_cpu_boot_status(CPU_MMU_OFF);
-	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
+	__flush_dcache_area((unsigned long)&secondary_data,
+			    (unsigned long)&secondary_data +
+				    sizeof(secondary_data));
 
 	/* Now bring the CPU into our world */
 	ret = boot_secondary(cpu, idle);
@@ -143,7 +145,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	pr_crit("CPU%u: failed to come online\n", cpu);
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
-	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
+	__flush_dcache_area((unsigned long)&secondary_data,
+			    (unsigned long)&secondary_data +
+				    sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
 	if (status == CPU_MMU_OFF)
 		status = READ_ONCE(__early_cpu_boot_status);
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index c45a83512805..58d804582a35 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
 	unsigned long size = sizeof(secondary_holding_pen_release);
 
 	secondary_holding_pen_release = val;
-	__flush_dcache_area(start, size);
+	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
 }
 
 
@@ -90,8 +90,9 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * the boot protocol.
 	 */
 	writeq_relaxed(pa_holding_pen, release_addr);
-	__flush_dcache_area((__force void *)release_addr,
-			    sizeof(*release_addr));
+	__flush_dcache_area((__force unsigned long)release_addr,
+			    (__force unsigned long)release_addr +
+				    sizeof(*release_addr));
 
 	/*
 	 * Send an event to wake up the secondary CPU.
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 3bcfa3cac46f..36cef6915428 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -8,7 +8,6 @@
 #include <asm/alternative.h>
 
 SYM_FUNC_START_PI(__flush_dcache_area)
-	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 7488f53b0aa2..5dffe928f256 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -134,7 +134,8 @@ static void update_nvhe_init_params(void)
 	for (i = 0; i < hyp_nr_cpus; i++) {
 		params = per_cpu_ptr(&kvm_init_params, i);
 		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
-		__flush_dcache_area(params, sizeof(*params));
+		__flush_dcache_area((unsigned long)params,
+				    (unsigned long)params + sizeof(*params));
 	}
 }
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index c37c1dc4feaf..10d2f04013d4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -839,8 +839,11 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	stage2_put_pte(ptep, mmu, addr, level, mm_ops);
 
 	if (need_flush) {
-		__flush_dcache_area(kvm_pte_follow(pte, mm_ops),
-				    kvm_granule_size(level));
+		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
+
+		__flush_dcache_area((unsigned long)pte_follow,
+				    (unsigned long)pte_follow +
+					    kvm_granule_size(level));
 	}
 
 	if (childp)
@@ -988,11 +991,15 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	struct kvm_pgtable *pgt = arg;
 	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
 	kvm_pte_t pte = *ptep;
+	kvm_pte_t *pte_follow;
 
 	if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte))
 		return 0;
 
-	__flush_dcache_area(kvm_pte_follow(pte, mm_ops), kvm_granule_size(level));
+	pte_follow = kvm_pte_follow(pte, mm_ops);
+	__flush_dcache_area((unsigned long)pte_follow,
+			    (unsigned long)pte_follow +
+				    kvm_granule_size(level));
 	return 0;
 }
 
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index b599c334a2e8..058605ac75a1 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -110,16 +110,15 @@ alternative_else_nop_endif
 SYM_FUNC_END(invalidate_icache_range)
 
 /*
- *	__flush_dcache_area(kaddr, size)
+ *	__flush_dcache_area(start, end)
  *
- *	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ *	Ensure that any D-cache lines for the interval [start, end)
  *	are cleaned and invalidated to the PoC.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_PI(__flush_dcache_area)
-	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 11/16] arm64: __clean_dcache_area_poc to take end parameter instead of size
From: Fuad Tabba @ 2021-05-17  7:51 UTC
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
__clean_dcache_area_poc() to specify the range in terms of start
and end, as opposed to start and size.

Because the code is shared with __dma_clean_area, this changes the
parameters of that function as well. However, __dma_clean_area is
local to cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/efi-entry.S       |  5 +++--
 arch/arm64/mm/cache.S               | 16 +++++++---------
 3 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 695f88864784..3255878d6f30 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -60,7 +60,7 @@ extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_poc(void *addr, size_t len);
+extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 0073b24b5d25..72e6a580290a 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -28,6 +28,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 * stale icache entries from before relocation.
 	 */
 	ldr	w1, =kernel_size
+	add	x1, x0, x1
 	bl	__clean_dcache_area_poc
 	ic	ialluis
 
@@ -36,7 +37,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 * so that we can safely disable the MMU and caches.
 	 */
 	adr	x0, 0f
-	ldr	w1, 3f
+	adr	x1, 3f
 	bl	__clean_dcache_area_poc
 0:
 	/* Turn off Dcache and MMU */
@@ -65,4 +66,4 @@ SYM_CODE_START(efi_enter_kernel)
 	mov	x3, xzr
 	br	x19
 SYM_CODE_END(efi_enter_kernel)
-3:	.long	. - 0b
+3:
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 058605ac75a1..38d62cef243f 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -182,24 +182,23 @@ SYM_FUNC_END_PI(__inval_dcache_area)
 SYM_FUNC_END(__dma_inv_area)
 
 /*
- *	__clean_dcache_area_poc(kaddr, size)
+ *	__clean_dcache_area_poc(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoC.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_clean_area)
 SYM_FUNC_START_PI(__clean_dcache_area_poc)
 	/* FALLTHROUGH */
 
 /*
- *	__dma_clean_area(start, size)
+ *	__dma_clean_area(start, end)
  *	- start   - virtual start address of region
- *	- size    - size in question
+ *	- end     - virtual end address of region
  */
-	add	x1, x0, x1
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_poc)
@@ -215,10 +214,10 @@ SYM_FUNC_END(__dma_clean_area)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__clean_dcache_area_pop)
+	add	x1, x0, x1
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
-	add	x1, x0, x1
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_pop)
@@ -247,7 +246,6 @@ SYM_FUNC_START_PI(__dma_map_area)
 	add	x1, x0, x1
 	cmp	w2, #DMA_FROM_DEVICE
 	b.eq	__dma_inv_area
-	sub	x1, x1, x0
 	b	__dma_clean_area
 SYM_FUNC_END_PI(__dma_map_area)
 
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 12/16] arm64: __clean_dcache_area_pop to take end parameter instead of size
  2021-05-17  7:51 [PATCH v2 00/16] Tidy up cache.S Fuad Tabba
                   ` (10 preceding siblings ...)
  2021-05-17  7:51 ` [PATCH v2 11/16] arm64: __clean_dcache_area_poc " Fuad Tabba
@ 2021-05-17  7:51 ` Fuad Tabba
  2021-05-17  7:51 ` [PATCH v2 13/16] arm64: __clean_dcache_area_pou " Fuad Tabba
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Fuad Tabba @ 2021-05-17  7:51 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
__clean_dcache_area_pop to specify the range in terms of start
and end, as opposed to start and size.

No functional change intended.
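
An illustrative sketch of what the change means for callers (not
part of the patch; the wrapper below is hypothetical):

  #include <linux/types.h>
  #include <asm/cacheflush.h>

  /*
   * Hypothetical caller: convert a (pointer, size) pair into the
   * [start, end) range that __clean_dcache_area_pop() takes after
   * this patch.
   */
  static void clean_to_pop_example(void *buf, size_t len)
  {
  	unsigned long start = (unsigned long)buf;

  	__clean_dcache_area_pop(start, start + len);
  }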

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 2 +-
 arch/arm64/lib/uaccess_flushcache.c | 4 ++--
 arch/arm64/mm/cache.S               | 9 ++++-----
 arch/arm64/mm/flush.c               | 2 +-
 4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 3255878d6f30..fa5641868d65 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -61,7 +61,7 @@ extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pop(void *addr, size_t len);
+extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(void *kaddr, unsigned long len);
diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
index c83bb5a4aad2..62ea989effe8 100644
--- a/arch/arm64/lib/uaccess_flushcache.c
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
 	 * barrier to order the cache maintenance against the memcpy.
 	 */
 	memcpy(dst, src, cnt);
-	__clean_dcache_area_pop(dst, cnt);
+	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
 }
 EXPORT_SYMBOL_GPL(memcpy_flushcache);
 
@@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
 	rc = raw_copy_from_user(to, from, n);
 
 	/* See above */
-	__clean_dcache_area_pop(to, n - rc);
+	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
 	return rc;
 }
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 38d62cef243f..8c0707167ab2 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -205,16 +205,15 @@ SYM_FUNC_END_PI(__clean_dcache_area_poc)
 SYM_FUNC_END(__dma_clean_area)
 
 /*
- *	__clean_dcache_area_pop(kaddr, size)
+ *	__clean_dcache_area_pop(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoP.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_PI(__clean_dcache_area_pop)
-	add	x1, x0, x1
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 4e3505c2bea6..5aba7fe42d4b 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -82,7 +82,7 @@ void arch_wb_cache_pmem(void *addr, size_t size)
 {
 	/* Ensure order against any prior non-cacheable writes */
 	dmb(osh);
-	__clean_dcache_area_pop(addr, size);
+	__clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 13/16] arm64: __clean_dcache_area_pou to take end parameter instead of size
  2021-05-17  7:51 [PATCH v2 00/16] Tidy up cache.S Fuad Tabba
                   ` (11 preceding siblings ...)
  2021-05-17  7:51 ` [PATCH v2 12/16] arm64: __clean_dcache_area_pop " Fuad Tabba
@ 2021-05-17  7:51 ` Fuad Tabba
  2021-05-17  7:51 ` [PATCH v2 14/16] arm64: sync_icache_aliases " Fuad Tabba
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Fuad Tabba @ 2021-05-17  7:51 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
__clean_dcache_area_pou to specify the range in terms of start
and end, as opposed to start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 2 +-
 arch/arm64/mm/cache.S               | 9 ++++-----
 arch/arm64/mm/flush.c               | 2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index fa5641868d65..f86723047315 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -62,7 +62,7 @@ extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pou(void *addr, size_t len);
+extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(void *kaddr, unsigned long len);
 
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 8c0707167ab2..fbf003b956cc 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -124,20 +124,19 @@ SYM_FUNC_START_PI(__flush_dcache_area)
 SYM_FUNC_END_PI(__flush_dcache_area)
 
 /*
- *	__clean_dcache_area_pou(kaddr, size)
+ *	__clean_dcache_area_pou(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoU.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START(__clean_dcache_area_pou)
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
-	add	x1, x0, x1
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
 SYM_FUNC_END(__clean_dcache_area_pou)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 5aba7fe42d4b..a69d745fb1dc 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -19,7 +19,7 @@ void sync_icache_aliases(void *kaddr, unsigned long len)
 	unsigned long addr = (unsigned long)kaddr;
 
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(kaddr, len);
+		__clean_dcache_area_pou(kaddr, kaddr + len);
 		__flush_icache_all();
 	} else {
 		/*
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 14/16] arm64: sync_icache_aliases to take end parameter instead of size
  2021-05-17  7:51 [PATCH v2 00/16] Tidy up cache.S Fuad Tabba
                   ` (12 preceding siblings ...)
  2021-05-17  7:51 ` [PATCH v2 13/16] arm64: __clean_dcache_area_pou " Fuad Tabba
@ 2021-05-17  7:51 ` Fuad Tabba
  2021-05-17  7:51 ` [PATCH v2 15/16] arm64: Fix cache maintenance function comments Fuad Tabba
  2021-05-17  7:51 ` [PATCH v2 16/16] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
  15 siblings, 0 replies; 29+ messages in thread
From: Fuad Tabba @ 2021-05-17  7:51 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
sync_icache_aliases to specify the range in terms of start and
end, as opposed to start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/probes/uprobes.c  |  2 +-
 arch/arm64/mm/flush.c               | 21 +++++++++++----------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index f86723047315..70b389a8dea5 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -64,7 +64,7 @@ extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
-extern void sync_icache_aliases(void *kaddr, unsigned long len);
+extern void sync_icache_aliases(unsigned long start, unsigned long end);
 
 static inline void flush_icache_range(unsigned long start, unsigned long end)
 {
diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
index 2c247634552b..9be668f3f034 100644
--- a/arch/arm64/kernel/probes/uprobes.c
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -21,7 +21,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
 	memcpy(dst, src, len);
 
 	/* flush caches (dcache/icache) */
-	sync_icache_aliases(dst, len);
+	sync_icache_aliases((unsigned long)dst, (unsigned long)dst + len);
 
 	kunmap_atomic(xol_page_kaddr);
 }
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index a69d745fb1dc..143f625e7727 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -14,28 +14,26 @@
 #include <asm/cache.h>
 #include <asm/tlbflush.h>
 
-void sync_icache_aliases(void *kaddr, unsigned long len)
+void sync_icache_aliases(unsigned long start, unsigned long end)
 {
-	unsigned long addr = (unsigned long)kaddr;
-
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(kaddr, kaddr + len);
+		__clean_dcache_area_pou(start, end);
 		__flush_icache_all();
 	} else {
 		/*
 		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
 		 * for user mappings.
 		 */
-		__flush_icache_range(addr, addr + len);
+		__flush_icache_range(start, end);
 	}
 }
 
 static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
-				unsigned long uaddr, void *kaddr,
-				unsigned long len)
+				unsigned long uaddr, unsigned long start,
+				unsigned long end)
 {
 	if (vma->vm_flags & VM_EXEC)
-		sync_icache_aliases(kaddr, len);
+		sync_icache_aliases(start, end);
 }
 
 /*
@@ -48,7 +46,8 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long len)
 {
 	memcpy(dst, src, len);
-	flush_ptrace_access(vma, page, uaddr, dst, len);
+	flush_ptrace_access(vma, page, uaddr, (unsigned long)dst,
+			    (unsigned long)dst + len);
 }
 
 void __sync_icache_dcache(pte_t pte)
@@ -56,7 +55,9 @@ void __sync_icache_dcache(pte_t pte)
 	struct page *page = pte_page(pte);
 
 	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		sync_icache_aliases(page_address(page), page_size(page));
+		sync_icache_aliases((unsigned long)page_address(page),
+				    (unsigned long)page_address(page) +
+					    page_size(page));
 }
 EXPORT_SYMBOL_GPL(__sync_icache_dcache);
 
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 15/16] arm64: Fix cache maintenance function comments
  2021-05-17  7:51 [PATCH v2 00/16] Tidy up cache.S Fuad Tabba
                   ` (13 preceding siblings ...)
  2021-05-17  7:51 ` [PATCH v2 14/16] arm64: sync_icache_aliases " Fuad Tabba
@ 2021-05-17  7:51 ` Fuad Tabba
  2021-05-17  7:51 ` [PATCH v2 16/16] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
  15 siblings, 0 replies; 29+ messages in thread
From: Fuad Tabba @ 2021-05-17  7:51 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Fix and expand the comments for the cache maintenance functions
in cacheflush.h. Add comments to functions that weren't described
before, and explain what the functions do using Arm Architecture
Reference Manual terminology.

No functional change intended.
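
As an illustration of the terminology (a sketch only; the callers
below are hypothetical, the signatures are the ones documented by
this patch):

  #include <asm/cacheflush.h>

  /* New instructions were written: make I and D coherent to the PoU. */
  static void example_publish_insns(unsigned long start, unsigned long end)
  {
  	__flush_icache_range(start, end);
  }

  /* Hand data to a non-coherent observer: clean+invalidate to the PoC. */
  static void example_share_with_device(unsigned long start, unsigned long end)
  {
  	__flush_dcache_area(start, end);
  }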

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 43 +++++++++++++++++++----------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 70b389a8dea5..4b91d3530013 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -30,31 +30,44 @@
  *	the implementation assumes non-aliasing VIPT D-cache and (aliasing)
  *	VIPT I-cache.
  *
- *	flush_icache_range(start, end)
- *
- *		Ensure coherency between the I-cache and the D-cache in the
- *		region described by start, end.
+ *	All functions below apply to the region described by [start, end)
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	invalidate_icache_range(start, end)
+ *	__flush_icache_range(start, end)
  *
- *		Invalidate the I-cache in the region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Ensure coherency between the I-cache and the D-cache region to
+ *		the Point of Unification.
  *
  *	__flush_cache_user_range(start, end)
  *
- *		Ensure coherency between the I-cache and the D-cache in the
- *		region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Ensure coherency between the I-cache and the D-cache region to
+ *		the Point of Unification.
+ *		Use only if the region might access user memory.
+ *
+ *	invalidate_icache_range(start, end)
+ *
+ *		Invalidate I-cache region to the Point of Unification.
  *
  *	__flush_dcache_area(start, end)
  *
- *		Ensure that the data held in page is written back.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Clean and invalidate D-cache region to the Point of Coherence.
+ *
+ *	__inval_dcache_area(start, end)
+ *
+ *		Invalidate D-cache region to the Point of Coherence.
+ *
+ *	__clean_dcache_area_poc(start, end)
+ *
+ *		Clean D-cache region to the Point of Coherence.
+ *
+ *	__clean_dcache_area_pop(start, end)
+ *
+ *		Clean D-cache region to the Point of Persistence.
+ *
+ *	__clean_dcache_area_pou(start, end)
+ *
+ *		Clean D-cache region to the Point of Unification.
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v2 16/16] arm64: Rename arm64-internal cache maintenance functions
  2021-05-17  7:51 [PATCH v2 00/16] Tidy up cache.S Fuad Tabba
                   ` (14 preceding siblings ...)
  2021-05-17  7:51 ` [PATCH v2 15/16] arm64: Fix cache maintenance function comments Fuad Tabba
@ 2021-05-17  7:51 ` Fuad Tabba
  15 siblings, 0 replies; 29+ messages in thread
From: Fuad Tabba @ 2021-05-17  7:51 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Although naming across the codebase isn't that consistent, it
tends to follow certain patterns. Moreover, the term "flush"
isn't defined in the Arm Architecture Reference Manual, and might
be interpreted to mean clean, invalidate, or both for a cache.

Rename the arm64-internal functions to make the naming internally
consistent, as well as consistent with the Arm ARM, by specifying
whether the operation applies to the instruction cache, the data
cache, or both, and whether it is a clean, an invalidate, or
both. Also specify which point the operation applies to, i.e.,
the point of unification (PoU), coherence (PoC), or persistence
(PoP).

This commit applies the following sed transformation to all files
under arch/arm64:

"s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\
"s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
"s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
"s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
"s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
"s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\
"s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\
"s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\
"s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
"s/\b__flush_icache_all\b/icache_inval_all_pou/g;"

Note that the sed pattern for __clean_dcache_area_poc is
deliberately missing the leading word-boundary check so that it
also matches the prefixed efistub symbols in image-vars.h.

Also note that, despite its name, __flush_icache_range operates
on both instruction and data caches. The name change here
reflects that.

No functional change intended.
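
To make the new scheme concrete (a sketch only; the wrapper is
hypothetical, the calls are ones introduced by this patch):

  #include <asm/cacheflush.h>

  static void example_name_decoding(unsigned long start, unsigned long end)
  {
  	dcache_clean_inval_poc(start, end);	/* D-cache: clean+invalidate to PoC */
  	icache_inval_pou(start, end);		/* I-cache: invalidate to PoU */
  	dcache_clean_pop(start, end);		/* D-cache: clean to PoP */
  }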

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/arch_gicv3.h |  2 +-
 arch/arm64/include/asm/cacheflush.h | 36 +++++++++---------
 arch/arm64/include/asm/efi.h        |  2 +-
 arch/arm64/include/asm/kvm_mmu.h    |  6 +--
 arch/arm64/kernel/alternative.c     |  2 +-
 arch/arm64/kernel/efi-entry.S       |  4 +-
 arch/arm64/kernel/head.S            |  8 ++--
 arch/arm64/kernel/hibernate-asm.S   |  4 +-
 arch/arm64/kernel/hibernate.c       | 12 +++---
 arch/arm64/kernel/idreg-override.c  |  2 +-
 arch/arm64/kernel/image-vars.h      |  2 +-
 arch/arm64/kernel/insn.c            |  2 +-
 arch/arm64/kernel/kaslr.c           |  6 +--
 arch/arm64/kernel/machine_kexec.c   | 10 ++---
 arch/arm64/kernel/smp.c             |  4 +-
 arch/arm64/kernel/smp_spin_table.c  |  4 +-
 arch/arm64/kernel/sys_compat.c      |  2 +-
 arch/arm64/kvm/arm.c                |  2 +-
 arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +-
 arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
 arch/arm64/kvm/hyp/pgtable.c        |  4 +-
 arch/arm64/lib/uaccess_flushcache.c |  4 +-
 arch/arm64/mm/cache.S               | 58 ++++++++++++++---------------
 arch/arm64/mm/flush.c               | 12 +++---
 25 files changed, 98 insertions(+), 98 deletions(-)

diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index ed1cc9d8e6df..4ad22c3135db 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
 #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
 
 #define gic_flush_dcache_to_poc(a,l)	\
-	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
+	dcache_clean_inval_poc((unsigned long)(a), (unsigned long)(a)+(l))
 
 #define gits_read_baser(c)		readq_relaxed(c)
 #define gits_write_baser(v, c)		writeq_relaxed(v, c)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 4b91d3530013..885bda37b805 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -34,54 +34,54 @@
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	__flush_icache_range(start, end)
+ *	caches_clean_inval_pou(start, end)
  *
  *		Ensure coherency between the I-cache and the D-cache region to
  *		the Point of Unification.
  *
- *	__flush_cache_user_range(start, end)
+ *	caches_clean_inval_user_pou(start, end)
  *
  *		Ensure coherency between the I-cache and the D-cache region to
  *		the Point of Unification.
  *		Use only if the region might access user memory.
  *
- *	invalidate_icache_range(start, end)
+ *	icache_inval_pou(start, end)
  *
  *		Invalidate I-cache region to the Point of Unification.
  *
- *	__flush_dcache_area(start, end)
+ *	dcache_clean_inval_poc(start, end)
  *
  *		Clean and invalidate D-cache region to the Point of Coherence.
  *
- *	__inval_dcache_area(start, end)
+ *	dcache_inval_poc(start, end)
  *
  *		Invalidate D-cache region to the Point of Coherence.
  *
- *	__clean_dcache_area_poc(start, end)
+ *	dcache_clean_poc(start, end)
  *
  *		Clean D-cache region to the Point of Coherence.
  *
- *	__clean_dcache_area_pop(start, end)
+ *	dcache_clean_pop(start, end)
  *
  *		Clean D-cache region to the Point of Persistence.
  *
- *	__clean_dcache_area_pou(start, end)
+ *	dcache_clean_pou(start, end)
  *
  *		Clean D-cache region to the Point of Unification.
  */
-extern void __flush_icache_range(unsigned long start, unsigned long end);
-extern void invalidate_icache_range(unsigned long start, unsigned long end);
-extern void __flush_dcache_area(unsigned long start, unsigned long end);
-extern void __inval_dcache_area(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
-extern long __flush_cache_user_range(unsigned long start, unsigned long end);
+extern void caches_clean_inval_pou(unsigned long start, unsigned long end);
+extern void icache_inval_pou(unsigned long start, unsigned long end);
+extern void dcache_clean_inval_poc(unsigned long start, unsigned long end);
+extern void dcache_inval_poc(unsigned long start, unsigned long end);
+extern void dcache_clean_poc(unsigned long start, unsigned long end);
+extern void dcache_clean_pop(unsigned long start, unsigned long end);
+extern void dcache_clean_pou(unsigned long start, unsigned long end);
+extern long caches_clean_inval_user_pou(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(unsigned long start, unsigned long end);
 
 static inline void flush_icache_range(unsigned long start, unsigned long end)
 {
-	__flush_icache_range(start, end);
+	caches_clean_inval_pou(start, end);
 
 	/*
 	 * IPI all online CPUs so that they undergo a context synchronization
@@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
 
-static __always_inline void __flush_icache_all(void)
+static __always_inline void icache_inval_all_pou(void)
 {
 	if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
 		return;
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 0ae2397076fd..1bed37eb013a 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
 
 static inline void efi_capsule_flush_cache_range(void *addr, int size)
 {
-	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
+	dcache_clean_inval_poc((unsigned long)addr, (unsigned long)addr + size);
 }
 
 #endif /* _ASM_EFI_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 33293d5855af..f4cbfa9025a8 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)	\
-	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
+	dcache_clean_inval_poc((unsigned long)(a), (unsigned long)(a)+(l))
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
@@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
 {
 	if (icache_is_aliasing()) {
 		/* any kind of VIPT cache */
-		__flush_icache_all();
+		icache_inval_all_pou();
 	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
 		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
 		void *va = page_address(pfn_to_page(pfn));
 
-		invalidate_icache_range((unsigned long)va,
+		icache_inval_pou((unsigned long)va,
 					(unsigned long)va + size);
 	}
 }
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index c906d20c7b52..3fb79b76e9d9 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
 	 */
 	if (!is_module) {
 		dsb(ish);
-		__flush_icache_all();
+		icache_inval_all_pou();
 		isb();
 
 		/* Ignore ARM64_CB bit from feature mask */
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 72e6a580290a..6668bad21f86 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 */
 	ldr	w1, =kernel_size
 	add	x1, x0, x1
-	bl	__clean_dcache_area_poc
+	bl	dcache_clean_poc
 	ic	ialluis
 
 	/*
@@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 */
 	adr	x0, 0f
 	adr	x1, 3f
-	bl	__clean_dcache_area_poc
+	bl	dcache_clean_poc
 0:
 	/* Turn off Dcache and MMU */
 	mrs	x0, CurrentEL
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 8df0ac8d9123..6928cb67d3a0 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 						// MMU off
 
 	add	x1, x0, #0x20			// 4 x 8 bytes
-	b	__inval_dcache_area		// tail call
+	b	dcache_inval_poc		// tail call
 SYM_CODE_END(preserve_boot_args)
 
 /*
@@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 */
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	bl	__inval_dcache_area
+	bl	dcache_inval_poc
 
 	/*
 	 * Clear the init page tables.
@@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 
 	adrp	x0, idmap_pg_dir
 	adrp	x1, idmap_pg_end
-	bl	__inval_dcache_area
+	bl	dcache_inval_poc
 
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	bl	__inval_dcache_area
+	bl	dcache_inval_poc
 
 	ret	x28
 SYM_FUNC_END(__create_page_tables)
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index ef2ab7caf815..81c0186a5e32 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -45,7 +45,7 @@
  * Because this code has to be copied to a 'safe' page, it can't call out to
  * other functions by PC-relative address. Also remember that it may be
  * mid-way through over-writing other functions. For this reason it contains
- * code from __flush_icache_range() and uses the copy_page() macro.
+ * code from caches_clean_inval_pou() and uses the copy_page() macro.
  *
  * This 'safe' page is mapped via ttbr0, and executed from there. This function
  * switches to a copy of the linear map in ttbr1, performs the restore, then
@@ -87,7 +87,7 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
 	copy_page	x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
 
 	add	x1, x10, #PAGE_SIZE
-	/* Clean the copied page to PoU - based on __flush_icache_range() */
+	/* Clean the copied page to PoU - based on caches_clean_inval_pou() */
 	raw_dcache_line_size x2, x3
 	sub	x3, x2, #1
 	bic	x4, x10, x3
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index b40ddce71507..46a0b4d6e251 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 		return -ENOMEM;
 
 	memcpy(page, src_start, length);
-	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
+	caches_clean_inval_pou((unsigned long)page, (unsigned long)page + length);
 	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
 	if (rc)
 		return rc;
@@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
 		ret = swsusp_save();
 	} else {
 		/* Clean kernel core startup/idle code to PoC*/
-		__flush_dcache_area((unsigned long)__mmuoff_data_start,
+		dcache_clean_inval_poc((unsigned long)__mmuoff_data_start,
 				    (unsigned long)__mmuoff_data_end);
-		__flush_dcache_area((unsigned long)__idmap_text_start,
+		dcache_clean_inval_poc((unsigned long)__idmap_text_start,
 				    (unsigned long)__idmap_text_end);
 
 		/* Clean kvm setup code to PoC? */
 		if (el2_reset_needed()) {
-			__flush_dcache_area(
+			dcache_clean_inval_poc(
 				(unsigned long)__hyp_idmap_text_start,
 				(unsigned long)__hyp_idmap_text_end);
-			__flush_dcache_area((unsigned long)__hyp_text_start,
+			dcache_clean_inval_poc((unsigned long)__hyp_text_start,
 					    (unsigned long)__hyp_text_end);
 		}
 
@@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
 	 * The hibernate exit text contains a set of el2 vectors, that will
 	 * be executed at el2 with the mmu off in order to reload hyp-stub.
 	 */
-	__flush_dcache_area((unsigned long)hibernate_exit,
+	dcache_clean_inval_poc((unsigned long)hibernate_exit,
 			    (unsigned long)hibernate_exit + exit_size);
 
 	/*
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 3dd515baf526..53a381a7f65d 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		if (regs[i]->override)
-			__flush_dcache_area((unsigned long)regs[i]->override,
+			dcache_clean_inval_poc((unsigned long)regs[i]->override,
 					    (unsigned long)regs[i]->override +
 					    sizeof(*regs[i]->override));
 	}
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index bcf3c2755370..c96a9a0043bf 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -35,7 +35,7 @@ __efistub_strnlen		= __pi_strnlen;
 __efistub_strcmp		= __pi_strcmp;
 __efistub_strncmp		= __pi_strncmp;
 __efistub_strrchr		= __pi_strrchr;
-__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
+__efistub_dcache_clean_poc = __pi_dcache_clean_poc;
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 __efistub___memcpy		= __pi_memcpy;
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 6c0de2f60ea9..51cb8dc98d00 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
 
 	ret = aarch64_insn_write(tp, insn);
 	if (ret == 0)
-		__flush_icache_range((uintptr_t)tp,
+		caches_clean_inval_pou((uintptr_t)tp,
 				     (uintptr_t)tp + AARCH64_INSN_SIZE);
 
 	return ret;
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 49cccd03cb37..cfa2cfde3019 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
 	 * we end up running with module randomization disabled.
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
-	__flush_dcache_area((unsigned long)&module_alloc_base,
+	dcache_clean_inval_poc((unsigned long)&module_alloc_base,
 			    (unsigned long)&module_alloc_base +
 				    sizeof(module_alloc_base));
 
@@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
 	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
 	module_alloc_base &= PAGE_MASK;
 
-	__flush_dcache_area((unsigned long)&module_alloc_base,
+	dcache_clean_inval_poc((unsigned long)&module_alloc_base,
 			    (unsigned long)&module_alloc_base +
 				    sizeof(module_alloc_base));
-	__flush_dcache_area((unsigned long)&memstart_offset_seed,
+	dcache_clean_inval_poc((unsigned long)&memstart_offset_seed,
 			    (unsigned long)&memstart_offset_seed +
 				    sizeof(memstart_offset_seed));
 
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index eadb0b189348..aad4078650fd 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -72,10 +72,10 @@ int machine_kexec_post_load(struct kimage *kimage)
 	 * For execution with the MMU off and I-cache on, reloc_code needs to be
 	 * cleaned to the PoC and invalidated from the I-cache.
 	 */
-	__flush_dcache_area((unsigned long)reloc_code,
+	dcache_clean_inval_poc((unsigned long)reloc_code,
 			    (unsigned long)reloc_code +
 				    arm64_relocate_new_kernel_size);
-	invalidate_icache_range((uintptr_t)reloc_code,
+	icache_inval_pou((uintptr_t)reloc_code,
 				(uintptr_t)reloc_code +
 					arm64_relocate_new_kernel_size);
 
@@ -111,7 +111,7 @@ static void kexec_list_flush(struct kimage *kimage)
 		unsigned long addr;
 
 		/* flush the list entries. */
-		__flush_dcache_area((unsigned long)entry,
+		dcache_clean_inval_poc((unsigned long)entry,
 				    (unsigned long)entry +
 					    sizeof(kimage_entry_t));
 
@@ -128,7 +128,7 @@ static void kexec_list_flush(struct kimage *kimage)
 			break;
 		case IND_SOURCE:
 			/* flush the source pages. */
-			__flush_dcache_area(addr, addr + PAGE_SIZE);
+			dcache_clean_inval_poc(addr, addr + PAGE_SIZE);
 			break;
 		case IND_DESTINATION:
 			break;
@@ -155,7 +155,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
 			kimage->segment[i].memsz,
 			kimage->segment[i].memsz /  PAGE_SIZE);
 
-		__flush_dcache_area(
+		dcache_clean_inval_poc(
 			(unsigned long)phys_to_virt(kimage->segment[i].mem),
 			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
 				kimage->segment[i].memsz);
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5fcdee331087..9b4c1118194d 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
 	update_cpu_boot_status(CPU_MMU_OFF);
-	__flush_dcache_area((unsigned long)&secondary_data,
+	dcache_clean_inval_poc((unsigned long)&secondary_data,
 			    (unsigned long)&secondary_data +
 				    sizeof(secondary_data));
 
@@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	pr_crit("CPU%u: failed to come online\n", cpu);
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
-	__flush_dcache_area((unsigned long)&secondary_data,
+	dcache_clean_inval_poc((unsigned long)&secondary_data,
 			    (unsigned long)&secondary_data +
 				    sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 58d804582a35..7e1624ecab3c 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
 	unsigned long size = sizeof(secondary_holding_pen_release);
 
 	secondary_holding_pen_release = val;
-	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
+	dcache_clean_inval_poc((unsigned long)start, (unsigned long)start + size);
 }
 
 
@@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * the boot protocol.
 	 */
 	writeq_relaxed(pa_holding_pen, release_addr);
-	__flush_dcache_area((__force unsigned long)release_addr,
+	dcache_clean_inval_poc((__force unsigned long)release_addr,
 			    (__force unsigned long)release_addr +
 				    sizeof(*release_addr));
 
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index 265fe3eb1069..db5159a3055f 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
 			dsb(ish);
 		}
 
-		ret = __flush_cache_user_range(start, start + chunk);
+		ret = caches_clean_inval_user_pou(start, start + chunk);
 		if (ret)
 			return ret;
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1cb39c0803a4..c1953f65ca0e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 		if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
 			stage2_unmap_vm(vcpu->kvm);
 		else
-			__flush_icache_all();
+			icache_inval_all_pou();
 	}
 
 	vcpu_reset_hcr(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 36cef6915428..958734f4d6b0 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -7,7 +7,7 @@
 #include <asm/assembler.h>
 #include <asm/alternative.h>
 
-SYM_FUNC_START_PI(__flush_dcache_area)
+SYM_FUNC_START_PI(dcache_clean_inval_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__flush_dcache_area)
+SYM_FUNC_END_PI(dcache_clean_inval_poc)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 5dffe928f256..8143ebd4fb72 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
 	for (i = 0; i < hyp_nr_cpus; i++) {
 		params = per_cpu_ptr(&kvm_init_params, i);
 		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
-		__flush_dcache_area((unsigned long)params,
+		dcache_clean_inval_poc((unsigned long)params,
 				    (unsigned long)params + sizeof(*params));
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 83dc3b271bc5..38ed0f6f2703 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	 * you should be running with VHE enabled.
 	 */
 	if (icache_is_vpipt())
-		__flush_icache_all();
+		icache_inval_all_pou();
 
 	__tlb_switch_to_host(&cxt);
 }
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 10d2f04013d4..e9ad7fb28ee3 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (need_flush) {
 		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
 
-		__flush_dcache_area((unsigned long)pte_follow,
+		dcache_clean_inval_poc((unsigned long)pte_follow,
 				    (unsigned long)pte_follow +
 					    kvm_granule_size(level));
 	}
@@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		return 0;
 
 	pte_follow = kvm_pte_follow(pte, mm_ops);
-	__flush_dcache_area((unsigned long)pte_follow,
+	dcache_clean_inval_poc((unsigned long)pte_follow,
 			    (unsigned long)pte_follow +
 				    kvm_granule_size(level));
 	return 0;
diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
index 62ea989effe8..baee22961bdb 100644
--- a/arch/arm64/lib/uaccess_flushcache.c
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
 	 * barrier to order the cache maintenance against the memcpy.
 	 */
 	memcpy(dst, src, cnt);
-	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
+	dcache_clean_pop((unsigned long)dst, (unsigned long)dst + cnt);
 }
 EXPORT_SYMBOL_GPL(memcpy_flushcache);
 
@@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
 	rc = raw_copy_from_user(to, from, n);
 
 	/* See above */
-	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
+	dcache_clean_pop((unsigned long)to, (unsigned long)to + n - rc);
 	return rc;
 }
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index fbf003b956cc..c74fe985c60c 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -15,7 +15,7 @@
 #include <asm/asm-uaccess.h>
 
 /*
- *	__flush_cache_range(start,end) [needs_uaccess]
+ *	caches_clean_inval_pou_macro(start,end) [needs_uaccess]
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
@@ -25,7 +25,7 @@
  *	- end     	- virtual end address of region
  *	- needs_uaccess - (macro parameter) might access user space memory
  */
-.macro	__flush_cache_range, needs_uaccess
+.macro	caches_clean_inval_pou_macro, needs_uaccess
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	b	7f
@@ -62,7 +62,7 @@ alternative_else_nop_endif
 .endm
 
 /*
- *	__flush_icache_range(start,end)
+ *	caches_clean_inval_pou(start,end)
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
@@ -71,12 +71,12 @@ alternative_else_nop_endif
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__flush_icache_range)
-	__flush_cache_range needs_uaccess=0
-SYM_FUNC_END(__flush_icache_range)
+SYM_FUNC_START(caches_clean_inval_pou)
+	caches_clean_inval_pou_macro needs_uaccess=0
+SYM_FUNC_END(caches_clean_inval_pou)
 
 /*
- *	__flush_cache_user_range(start,end)
+ *	caches_clean_inval_user_pou(start,end)
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
@@ -85,21 +85,21 @@ SYM_FUNC_END(__flush_icache_range)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__flush_cache_user_range)
+SYM_FUNC_START(caches_clean_inval_user_pou)
 	uaccess_ttbr0_enable x2, x3, x4
-	__flush_cache_range needs_uaccess=1
+	caches_clean_inval_pou_macro needs_uaccess=1
 	uaccess_ttbr0_disable x1, x2
-SYM_FUNC_END(__flush_cache_user_range)
+SYM_FUNC_END(caches_clean_inval_user_pou)
 
 /*
- *	invalidate_icache_range(start,end)
+ *	icache_inval_pou(start,end)
  *
  *	Ensure that the I cache is invalid within specified region.
  *
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(invalidate_icache_range)
+SYM_FUNC_START(icache_inval_pou)
 alternative_if ARM64_HAS_CACHE_DIC
 	isb
 	ret
@@ -107,10 +107,10 @@ alternative_else_nop_endif
 
 	invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
 	ret
-SYM_FUNC_END(invalidate_icache_range)
+SYM_FUNC_END(icache_inval_pou)
 
 /*
- *	__flush_dcache_area(start, end)
+ *	dcache_clean_inval_poc(start, end)
  *
  *	Ensure that any D-cache lines for the interval [start, end)
  *	are cleaned and invalidated to the PoC.
@@ -118,13 +118,13 @@ SYM_FUNC_END(invalidate_icache_range)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(__flush_dcache_area)
+SYM_FUNC_START_PI(dcache_clean_inval_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__flush_dcache_area)
+SYM_FUNC_END_PI(dcache_clean_inval_poc)
 
 /*
- *	__clean_dcache_area_pou(start, end)
+ *	dcache_clean_pou(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoU.
@@ -132,17 +132,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__clean_dcache_area_pou)
+SYM_FUNC_START(dcache_clean_pou)
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
-SYM_FUNC_END(__clean_dcache_area_pou)
+SYM_FUNC_END(dcache_clean_pou)
 
 /*
- *	__inval_dcache_area(start, end)
+ *	dcache_inval_poc(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are invalidated. Any partial lines at the ends of the interval are
@@ -152,7 +152,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
  *	- end     - kernel end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_inv_area)
-SYM_FUNC_START_PI(__inval_dcache_area)
+SYM_FUNC_START_PI(dcache_inval_poc)
 	/* FALLTHROUGH */
 
 /*
@@ -177,11 +177,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
 	b.lo	2b
 	dsb	sy
 	ret
-SYM_FUNC_END_PI(__inval_dcache_area)
+SYM_FUNC_END_PI(dcache_inval_poc)
 SYM_FUNC_END(__dma_inv_area)
 
 /*
- *	__clean_dcache_area_poc(start, end)
+ *	dcache_clean_poc(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoC.
@@ -190,7 +190,7 @@ SYM_FUNC_END(__dma_inv_area)
  *	- end     - virtual end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_clean_area)
-SYM_FUNC_START_PI(__clean_dcache_area_poc)
+SYM_FUNC_START_PI(dcache_clean_poc)
 	/* FALLTHROUGH */
 
 /*
@@ -200,11 +200,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
  */
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__clean_dcache_area_poc)
+SYM_FUNC_END_PI(dcache_clean_poc)
 SYM_FUNC_END(__dma_clean_area)
 
 /*
- *	__clean_dcache_area_pop(start, end)
+ *	dcache_clean_pop(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoP.
@@ -212,13 +212,13 @@ SYM_FUNC_END(__dma_clean_area)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(__clean_dcache_area_pop)
+SYM_FUNC_START_PI(dcache_clean_pop)
 	alternative_if_not ARM64_HAS_DCPOP
-	b	__clean_dcache_area_poc
+	b	dcache_clean_poc
 	alternative_else_nop_endif
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__clean_dcache_area_pop)
+SYM_FUNC_END_PI(dcache_clean_pop)
 
 /*
  *	__dma_flush_area(start, size)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 143f625e7727..5fea9a3f6663 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -17,14 +17,14 @@
 void sync_icache_aliases(unsigned long start, unsigned long end)
 {
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(start, end);
-		__flush_icache_all();
+		dcache_clean_pou(start, end);
+		icache_inval_all_pou();
 	} else {
 		/*
 		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
 		 * for user mappings.
 		 */
-		__flush_icache_range(start, end);
+		caches_clean_inval_pou(start, end);
 	}
 }
 
@@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
 /*
  * Additional functions defined in assembly.
  */
-EXPORT_SYMBOL(__flush_icache_range);
+EXPORT_SYMBOL(caches_clean_inval_pou);
 
 #ifdef CONFIG_ARCH_HAS_PMEM_API
 void arch_wb_cache_pmem(void *addr, size_t size)
 {
 	/* Ensure order against any prior non-cacheable writes */
 	dmb(osh);
-	__clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
+	dcache_clean_pop((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
 void arch_invalidate_pmem(void *addr, size_t size)
 {
-	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
+	dcache_inval_poc((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
-- 
2.31.1.751.gd2f1c929bd-goog



* Re: [PATCH v2 02/16] arm64: Do not enable uaccess for flush_icache_range
  2021-05-17  7:51 ` [PATCH v2 02/16] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
@ 2021-05-18 15:33   ` Mark Rutland
  2021-05-19 16:25     ` Fuad Tabba
  0 siblings, 1 reply; 29+ messages in thread
From: Mark Rutland @ 2021-05-18 15:33 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

Hi Fuad,

This is great! I had a play with the series locally, and I have a few
suggestions below for how to make this a bit clearer.

On Mon, May 17, 2021 at 08:51:10AM +0100, Fuad Tabba wrote:
> __flush_icache_range works on the kernel linear map, and doesn't
> need uaccess. The existing code is a side-effect of its current
> implementation with __flush_cache_user_range fallthrough.
> 
> Instead of fallthrough to share the code, use a common macro for
> the two where the caller can specify whether user-space access is
> needed.
> 
> No functional change intended.
> Possible performance impact due to the reduced number of
> instructions.

This looks correct, but I'm not too keen on all the duplication we have
to do w.r.t. `needs_uaccess`, and I think it would be much clearer to
put the TTBR maintenance directly in `__flush_cache_user_range` right
away, rather than doing that later in the series.

> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/assembler.h | 13 ++++--
>  arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
>  2 files changed, 54 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 8418c1bd8f04..6ff7a3a3b238 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -426,16 +426,21 @@ alternative_endif
>   * Macro to perform an instruction cache maintenance for the interval
>   * [start, end)
>   *
> - * 	start, end:	virtual addresses describing the region
> - *	label:		A label to branch to on user fault.
> - * 	Corrupts:	tmp1, tmp2
> + *	start, end:	virtual addresses describing the region
> + *	needs_uaccess:	might access user space memory
> + *	label:		label to branch to on user fault (if needs_uaccess)
> + *	Corrupts:	tmp1, tmp2
>   */

I'm not too keen on the separate `needs_uaccess` and `label` arguments.
We should be able to collapse those into a single argument by checking
with .ifnc, e.g.

	.macro op arg, fixup
	.ifnc \fixup,
	do_thing_with \fixup
	.endif
	.endm

... which I think would make things clearer overall.

> -	.macro invalidate_icache_by_line start, end, tmp1, tmp2, label
> +	.macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
>  	icache_line_size \tmp1, \tmp2
>  	sub	\tmp2, \tmp1, #1
>  	bic	\tmp2, \start, \tmp2
>  9997:
> +	.if	\needs_uaccess
>  USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
> +	.else
> +	ic	ivau, \tmp2
> +	.endif
>  	add	\tmp2, \tmp2, \tmp1
>  	cmp	\tmp2, \end
>  	b.lo	9997b

I'm also not keen on duplicating the instruction here. I reckon what we
should do is add a conditional extable macro:

	.macro _cond_extable insn, fixup
	.ifnc \fixup,
	_asm_extable \insn, \fixup
	.endif
	.endm

... which'd allow us to do:

        .macro invalidate_icache_by_line start, end, tmp1, tmp2, fixup
        icache_line_size \tmp1, \tmp2
        sub     \tmp2, \tmp1, #1
        bic     \tmp2, \start, \tmp2
.Licache_op\@:
        ic      ivau, \tmp2                     // invalidate I line PoU
        add     \tmp2, \tmp2, \tmp1
        cmp     \tmp2, \end
        b.lo    .Licache_op\@
        dsb     ish 
        isb 

        _cond_extable .Licache_op\@, \fixup
        .endm

... which I think is clearer.

We could do likewise in dcache_by_line_op, and with some refactoring we
could remove the logic that we currently have to duplicate.

I pushed a couple of preparatory patches for that to:

  https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/cleanups/cache
  git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/cleanups/cache

... in case you felt like taking those as-is.

> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 2d881f34dd9d..092f73acdf9a 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -15,30 +15,20 @@
>  #include <asm/asm-uaccess.h>
>  
>  /*
> - *	flush_icache_range(start,end)
> + *	__flush_cache_range(start,end) [needs_uaccess]
>   *
>   *	Ensure that the I and D caches are coherent within specified region.
>   *	This is typically used when code has been written to a memory region,
>   *	and will be executed.
>   *
> - *	- start   - virtual start address of region
> - *	- end     - virtual end address of region
> + *	- start   	- virtual start address of region
> + *	- end     	- virtual end address of region
> + *	- needs_uaccess - (macro parameter) might access user space memory
>   */
> -SYM_FUNC_START(__flush_icache_range)
> -	/* FALLTHROUGH */
> -
> -/*
> - *	__flush_cache_user_range(start,end)
> - *
> - *	Ensure that the I and D caches are coherent within specified region.
> - *	This is typically used when code has been written to a memory region,
> - *	and will be executed.
> - *
> - *	- start   - virtual start address of region
> - *	- end     - virtual end address of region
> - */
> -SYM_FUNC_START(__flush_cache_user_range)
> +.macro	__flush_cache_range, needs_uaccess
> +	.if 	\needs_uaccess
>  	uaccess_ttbr0_enable x2, x3, x4
> +	.endif
>  alternative_if ARM64_HAS_CACHE_IDC
>  	dsb	ishst
>  	b	7f
> @@ -47,7 +37,11 @@ alternative_else_nop_endif
>  	sub	x3, x2, #1
>  	bic	x4, x0, x3
>  1:
> +	.if 	\needs_uaccess
>  user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> +	.else
> +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> +	.endif
>  	add	x4, x4, x2
>  	cmp	x4, x1
>  	b.lo	1b
> @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
>  	isb
>  	b	8f
>  alternative_else_nop_endif
> -	invalidate_icache_by_line x0, x1, x2, x3, 9f
> +	invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
>  8:	mov	x0, #0
>  1:
> +	.if	\needs_uaccess
>  	uaccess_ttbr0_disable x1, x2
> +	.endif
>  	ret
> +
> +	.if 	\needs_uaccess
>  9:
>  	mov	x0, #-EFAULT
>  	b	1b
> +	.endif
> +.endm

As above, I think we should reduce this to the core logic, moving the
ttbr manipulation and fixup handler inline in __flush_cache_user_range.

For clarity, I'd also like to leave the RETs out of the macro, since
that's required for the fixup handling anyway, and it generally makes
the control flow clearer at the function definition.

> +/*
> + *	flush_icache_range(start,end)
> + *
> + *	Ensure that the I and D caches are coherent within specified region.
> + *	This is typically used when code has been written to a memory region,
> + *	and will be executed.
> + *
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
> + */
> +SYM_FUNC_START(__flush_icache_range)
> +	__flush_cache_range needs_uaccess=0
>  SYM_FUNC_END(__flush_icache_range)

...so with the suggestions above, this could be:

SYM_FUNC_START(__flush_icache_range)
	__flush_cache_range
	ret
SYM_FUNC_END(__flush_icache_range)

> +/*
> + *	__flush_cache_user_range(start,end)
> + *
> + *	Ensure that the I and D caches are coherent within specified region.
> + *	This is typically used when code has been written to a memory region,
> + *	and will be executed.
> + *
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
> + */
> +SYM_FUNC_START(__flush_cache_user_range)
> +	__flush_cache_range needs_uaccess=1
>  SYM_FUNC_END(__flush_cache_user_range)

... this could be:

SYM_FUNC_START(__flush_cache_user_range)
        uaccess_ttbr0_enable x2, x3, x4
        __flush_cache_range 2f
1:
        uaccess_ttbr0_disable x1, x2
        ret 
2:
        mov     x0, #-EFAULT
        b       1b  
SYM_FUNC_END(__flush_cache_user_range)

>  /*
> @@ -86,7 +112,7 @@ alternative_else_nop_endif
>  
>  	uaccess_ttbr0_enable x2, x3, x4
>  
> -	invalidate_icache_by_line x0, x1, x2, x3, 2f
> +	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f

... and this wouldn't need to change.

Thanks,
Mark.

>  	mov	x0, xzr
>  1:
>  	uaccess_ttbr0_disable x1, x2
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v2 03/16] arm64: Do not enable uaccess for invalidate_icache_range
  2021-05-17  7:51 ` [PATCH v2 03/16] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
@ 2021-05-18 15:36   ` Mark Rutland
  2021-05-19 16:26     ` Fuad Tabba
  0 siblings, 1 reply; 29+ messages in thread
From: Mark Rutland @ 2021-05-18 15:36 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Mon, May 17, 2021 at 08:51:11AM +0100, Fuad Tabba wrote:
> invalidate_icache_range() works on the kernel linear map, and
> doesn't need uaccess. Remove the code that toggles
> uaccess_ttbr0_enable, as well as the code that emits an entry
> into the exception table (via the macro
> invalidate_icache_by_line).
> 
> Change the return type of invalidate_icache_range() from int (which
> used to indicate a fault) to void, since it doesn't need uaccess
> and won't fault. Note that the return value was never checked by any
> of the callers.
> 
> No functional change intended.
> Possible performance impact due to the reduced number of
> instructions.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/cacheflush.h |  2 +-
>  arch/arm64/mm/cache.S               | 11 +----------
>  2 files changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 52e5c1623224..a586afa84172 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -57,7 +57,7 @@
>   *		- size   - region size
>   */
>  extern void __flush_icache_range(unsigned long start, unsigned long end);
> -extern int  invalidate_icache_range(unsigned long start, unsigned long end);
> +extern void invalidate_icache_range(unsigned long start, unsigned long end);
>  extern void __flush_dcache_area(void *addr, size_t len);
>  extern void __inval_dcache_area(void *addr, size_t len);
>  extern void __clean_dcache_area_poc(void *addr, size_t len);
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 092f73acdf9a..6babaaf34f17 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -105,21 +105,12 @@ SYM_FUNC_END(__flush_cache_user_range)
>   */
>  SYM_FUNC_START(invalidate_icache_range)
>  alternative_if ARM64_HAS_CACHE_DIC
> -	mov	x0, xzr
>  	isb
>  	ret
>  alternative_else_nop_endif
>  
> -	uaccess_ttbr0_enable x2, x3, x4
> -
> -	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
> -	mov	x0, xzr
> -1:
> -	uaccess_ttbr0_disable x1, x2
> +	invalidate_icache_by_line x0, x1, x2, x3, 0, 0f

This all looks good to me, but I'd prefer that we didn't have to pass a
fake label. I think if you use the approach I suggested on the prior
patch, this can be:

	invalidate_icache_by_line x0, x1, x2, x3

... which I think makes it clearer that there's no fixup.

Thanks,
Mark.

>  	ret
> -2:
> -	mov	x0, #-EFAULT
> -	b	1b
>  SYM_FUNC_END(invalidate_icache_range)
>  
>  /*
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v2 04/16] arm64: Downgrade flush_icache_range to invalidate
  2021-05-17  7:51 ` [PATCH v2 04/16] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
@ 2021-05-18 15:53   ` Mark Rutland
  2021-05-18 16:02     ` Ard Biesheuvel
  0 siblings, 1 reply; 29+ messages in thread
From: Mark Rutland @ 2021-05-18 15:53 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Mon, May 17, 2021 at 08:51:12AM +0100, Fuad Tabba wrote:
> Since __flush_dcache_area is called right before,
> invalidate_icache_range is sufficient in this case.
> 
> Rewrite the comment to better explain the rationale behind the
> cache maintenance operations used here.
> 
> No functional change intended.
> Possible performance impact due to invalidating only the icache
> rather than invalidating and cleaning both caches.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kernel/machine_kexec.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> index 90a335c74442..ecd8915e02e1 100644
> --- a/arch/arm64/kernel/machine_kexec.c
> +++ b/arch/arm64/kernel/machine_kexec.c
> @@ -68,10 +68,14 @@ int machine_kexec_post_load(struct kimage *kimage)
>  	kimage->arch.kern_reloc = __pa(reloc_code);
>  	kexec_image_info(kimage);
>  
> -	/* Flush the reloc_code in preparation for its execution. */
> +	/*
> +	 * For execution with the MMU off and I-cache on, reloc_code needs to be
> +	 * cleaned to the PoC and invalidated from the I-cache.
> +	 */

Minor nit, but the I-cache is *always* on (SCTLR.I affects the
attributes used for fetches into the I-caches), so it would be slightly
better to drop the "and I-cache on" words.

Otherwise, this looks good to me.

Mark.

>  	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
> -	flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
> -			   arm64_relocate_new_kernel_size);
> +	invalidate_icache_range((uintptr_t)reloc_code,
> +				(uintptr_t)reloc_code +
> +					arm64_relocate_new_kernel_size);
>  
>  	return 0;
>  }
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v2 05/16] arm64: Remove uaccess toggle from __flush_cache_range macro
  2021-05-17  7:51 ` [PATCH v2 05/16] arm64: Remove uaccess toggle from __flush_cache_range macro Fuad Tabba
@ 2021-05-18 16:00   ` Mark Rutland
  2021-05-19 16:27     ` Fuad Tabba
  0 siblings, 1 reply; 29+ messages in thread
From: Mark Rutland @ 2021-05-18 16:00 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Mon, May 17, 2021 at 08:51:13AM +0100, Fuad Tabba wrote:
> The uaccess toggle isn't part of the cache maintenance operation.
> Move it directly to where it's needed.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/mm/cache.S | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 6babaaf34f17..d74b20cd6449 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -26,9 +26,6 @@
>   *	- needs_uaccess - (macro parameter) might access user space memory
>   */
>  .macro	__flush_cache_range, needs_uaccess
> -	.if 	\needs_uaccess
> -	uaccess_ttbr0_enable x2, x3, x4
> -	.endif
>  alternative_if ARM64_HAS_CACHE_IDC
>  	dsb	ishst
>  	b	7f
> @@ -55,9 +52,6 @@ alternative_else_nop_endif
>  	invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
>  8:	mov	x0, #0
>  1:
> -	.if	\needs_uaccess
> -	uaccess_ttbr0_disable x1, x2
> -	.endif
>  	ret
>  
>  	.if 	\needs_uaccess
> @@ -92,7 +86,9 @@ SYM_FUNC_END(__flush_icache_range)
>   *	- end     - virtual end address of region
>   */
>  SYM_FUNC_START(__flush_cache_user_range)
> +	uaccess_ttbr0_enable x2, x3, x4
>  	__flush_cache_range needs_uaccess=1
> +	uaccess_ttbr0_disable x1, x2
>  SYM_FUNC_END(__flush_cache_user_range)

The RET is still in the __flush_cache_range macro, so I don't think
we'll ever execute the uaccess_ttbr0_disable step here.

Mark.

>  
>  /*
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v2 04/16] arm64: Downgrade flush_icache_range to invalidate
  2021-05-18 15:53   ` Mark Rutland
@ 2021-05-18 16:02     ` Ard Biesheuvel
  2021-05-18 16:06       ` Mark Rutland
  0 siblings, 1 reply; 29+ messages in thread
From: Ard Biesheuvel @ 2021-05-18 16:02 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Fuad Tabba, Linux ARM, Will Deacon, Catalin Marinas,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Robin Murphy

On Tue, 18 May 2021 at 17:53, Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Mon, May 17, 2021 at 08:51:12AM +0100, Fuad Tabba wrote:
> > Since __flush_dcache_area is called right before,
> > invalidate_icache_range is sufficient in this case.
> >
> > Rewrite the comment to better explain the rationale behind the
> > cache maintenance operations used here.
> >
> > No functional change intended.
> > Possible performance impact due to invalidating only the icache
> > rather than invalidating and cleaning both caches.
> >
> > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: Will Deacon <will@kernel.org>
> > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/kernel/machine_kexec.c | 10 +++++++---
> >  1 file changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > index 90a335c74442..ecd8915e02e1 100644
> > --- a/arch/arm64/kernel/machine_kexec.c
> > +++ b/arch/arm64/kernel/machine_kexec.c
> > @@ -68,10 +68,14 @@ int machine_kexec_post_load(struct kimage *kimage)
> >       kimage->arch.kern_reloc = __pa(reloc_code);
> >       kexec_image_info(kimage);
> >
> > -     /* Flush the reloc_code in preparation for its execution. */
> > +     /*
> > +      * For execution with the MMU off and I-cache on, reloc_code needs to be
> > +      * cleaned to the PoC and invalidated from the I-cache.
> > +      */
>
> Minor nit, but the I-cache is *always* on (SCTLR.I affects the
> attributes used for fetches into the I-caches), so it would be slightly
> better to drop the "and I-cache on" words.
>

This may be true, but it may not be obvious to someone reading the
'MMU off' part of the comment. Bottom line is that we will be running
in a mode where we may hit in the I-cache, so it needs to be
invalidated. If we miss in the I-cache, we should fetch from the PoC,
hence the D-cache clean.
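
Concretely, mapping that rationale onto the calls in this patch (the
same sequence as the diff below, annotated):

	/*
	 * An I-cache miss with the MMU off is fetched from the PoC, so
	 * the new code must first be cleaned out that far...
	 */
	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
	/*
	 * ... and a fetch may still hit in the I-cache, so any stale
	 * lines for this range must be invalidated.
	 */
	invalidate_icache_range((uintptr_t)reloc_code,
				(uintptr_t)reloc_code +
					arm64_relocate_new_kernel_size);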


> Otherwise, this looks good to me.
>
> Mark.
>
> >       __flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
> > -     flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
> > -                        arm64_relocate_new_kernel_size);
> > +     invalidate_icache_range((uintptr_t)reloc_code,
> > +                             (uintptr_t)reloc_code +
> > +                                     arm64_relocate_new_kernel_size);
> >
> >       return 0;
> >  }
> > --
> > 2.31.1.751.gd2f1c929bd-goog
> >


* Re: [PATCH v2 07/16] arm64: Fix comments to refer to correct function __flush_icache_range
  2021-05-17  7:51 ` [PATCH v2 07/16] arm64: Fix comments to refer to correct function __flush_icache_range Fuad Tabba
@ 2021-05-18 16:03   ` Mark Rutland
  0 siblings, 0 replies; 29+ messages in thread
From: Mark Rutland @ 2021-05-18 16:03 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Mon, May 17, 2021 at 08:51:15AM +0100, Fuad Tabba wrote:
> Many comments refer to the function flush_icache_range, where the
> intent is in fact __flush_icache_range. Fix these comments to
> refer to the intended function.
> 
> No functional change intended.

That's probably due to commit:

  3b8c9f1cdfc506e9 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings")

... since that renamed flush_icache_range() to __flush_icache_range()
and added a wrapper.
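
If memory serves, the wrapper that commit added looks roughly like this
(in arch/arm64/include/asm/cacheflush.h):

	static inline void flush_icache_range(unsigned long start, unsigned long end)
	{
		__flush_icache_range(start, end);

		/*
		 * IPI all online CPUs so that they undergo a context
		 * synchronization event and are forced to refetch the
		 * new instructions.
		 */
		kick_all_cpus_sync();
	}

... so comments describing the by-line maintenance really mean the
double-underscore variant.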

> 
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

> ---
>  arch/arm64/kernel/hibernate-asm.S | 4 ++--
>  arch/arm64/mm/cache.S             | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
> index 0ed2f72a6b94..ef2ab7caf815 100644
> --- a/arch/arm64/kernel/hibernate-asm.S
> +++ b/arch/arm64/kernel/hibernate-asm.S
> @@ -45,7 +45,7 @@
>   * Because this code has to be copied to a 'safe' page, it can't call out to
>   * other functions by PC-relative address. Also remember that it may be
>   * mid-way through over-writing other functions. For this reason it contains
> - * code from flush_icache_range() and uses the copy_page() macro.
> + * code from __flush_icache_range() and uses the copy_page() macro.
>   *
>   * This 'safe' page is mapped via ttbr0, and executed from there. This function
>   * switches to a copy of the linear map in ttbr1, performs the restore, then
> @@ -87,7 +87,7 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
>  	copy_page	x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
>  
>  	add	x1, x10, #PAGE_SIZE
> -	/* Clean the copied page to PoU - based on flush_icache_range() */
> +	/* Clean the copied page to PoU - based on __flush_icache_range() */
>  	raw_dcache_line_size x2, x3
>  	sub	x3, x2, #1
>  	bic	x4, x10, x3
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index d74b20cd6449..8920f63442ae 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -62,7 +62,7 @@ alternative_else_nop_endif
>  .endm
>  
>  /*
> - *	flush_icache_range(start,end)
> + *	__flush_icache_range(start,end)
>   *
>   *	Ensure that the I and D caches are coherent within specified region.
>   *	This is typically used when code has been written to a memory region,
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v2 04/16] arm64: Downgrade flush_icache_range to invalidate
  2021-05-18 16:02     ` Ard Biesheuvel
@ 2021-05-18 16:06       ` Mark Rutland
  2021-05-19 16:29         ` Fuad Tabba
  0 siblings, 1 reply; 29+ messages in thread
From: Mark Rutland @ 2021-05-18 16:06 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Fuad Tabba, Linux ARM, Will Deacon, Catalin Marinas,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Robin Murphy

On Tue, May 18, 2021 at 06:02:32PM +0200, Ard Biesheuvel wrote:
> On Tue, 18 May 2021 at 17:53, Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > On Mon, May 17, 2021 at 08:51:12AM +0100, Fuad Tabba wrote:
> > > Since __flush_dcache_area is called right before,
> > > invalidate_icache_range is sufficient in this case.
> > >
> > > Rewrite the comment to better explain the rationale behind the
> > > cache maintenance operations used here.
> > >
> > > No functional change intended.
> > > Possible performance impact due to invalidating only the icache
> > > rather than invalidating and cleaning both caches.
> > >
> > > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > > Reported-by: Will Deacon <will@kernel.org>
> > > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > ---
> > >  arch/arm64/kernel/machine_kexec.c | 10 +++++++---
> > >  1 file changed, 7 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > > index 90a335c74442..ecd8915e02e1 100644
> > > --- a/arch/arm64/kernel/machine_kexec.c
> > > +++ b/arch/arm64/kernel/machine_kexec.c
> > > @@ -68,10 +68,14 @@ int machine_kexec_post_load(struct kimage *kimage)
> > >       kimage->arch.kern_reloc = __pa(reloc_code);
> > >       kexec_image_info(kimage);
> > >
> > > -     /* Flush the reloc_code in preparation for its execution. */
> > > +     /*
> > > +      * For execution with the MMU off and I-cache on, reloc_code needs to be
> > > +      * cleaned to the PoC and invalidated from the I-cache.
> > > +      */
> >
> > Minor nit, but the I-cache is *always* on (SCTLR.I affects the
> > attributes used for fetches into the I-caches), so it would be slightly
> > better to drop the "and I-cache on" words.
> 
> This may be true, but it may not be obvious to someone reading the
> 'MMU off' part of the comment. Bottom line is that we will be running
> in a mode where we may hit in the I-cache, so it needs to be
> invalidated. If we miss in the I-cache, we should fetch from the PoC,
> hence the D-cache clean.

No disagreement there. I literally meant dropping the words "and I-cache
on", leaving the whole comment as:

	/*
	 * For execution with the MMU off, reloc_code needs to be
	 * cleaned to the PoC and invalidated from the I-cache.
	 */

Thanks,
Mark.


* Re: [PATCH v2 02/16] arm64: Do not enable uaccess for flush_icache_range
  2021-05-18 15:33   ` Mark Rutland
@ 2021-05-19 16:25     ` Fuad Tabba
  2021-05-20 10:47       ` Mark Rutland
  0 siblings, 1 reply; 29+ messages in thread
From: Fuad Tabba @ 2021-05-19 16:25 UTC (permalink / raw)
  To: Mark Rutland
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, Ard Biesheuvel,
	James Morse, Alexandru Elisei, Suzuki K Poulose, Robin Murphy

Hi Mark,

On Tue, May 18, 2021 at 4:33 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> Hi Fuad,
>
> This is great! I had a play with the series locally, and I have a few
> suggestions below for how to make this a bit clearer.
>
> On Mon, May 17, 2021 at 08:51:10AM +0100, Fuad Tabba wrote:
> > __flush_icache_range works on the kernel linear map, and doesn't
> > need uaccess. The existing code is a side-effect of its current
> > implementation with __flush_cache_user_range fallthrough.
> >
> > Instead of fallthrough to share the code, use a common macro for
> > the two where the caller can specify whether user-space access is
> > needed.
> >
> > No functional change intended.
> > Possible performance impact due to the reduced number of
> > instructions.
>
> This looks correct, but I'm not too keen on all the duplication we have
> to do w.r.t. `needs_uaccess`, and I think it would be much clearer to
> put the TTBR maintenance directly in `__flush_cache_user_range`
> immediately, rather than doing that later in the series.
>
> > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: Will Deacon <will@kernel.org>
> > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/assembler.h | 13 ++++--
> >  arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
> >  2 files changed, 54 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > index 8418c1bd8f04..6ff7a3a3b238 100644
> > --- a/arch/arm64/include/asm/assembler.h
> > +++ b/arch/arm64/include/asm/assembler.h
> > @@ -426,16 +426,21 @@ alternative_endif
> >   * Macro to perform an instruction cache maintenance for the interval
> >   * [start, end)
> >   *
> > - *   start, end:     virtual addresses describing the region
> > - *   label:          A label to branch to on user fault.
> > - *   Corrupts:       tmp1, tmp2
> > + *   start, end:     virtual addresses describing the region
> > + *   needs_uaccess:  might access user space memory
> > + *   label:          label to branch to on user fault (if needs_uaccess)
> > + *   Corrupts:       tmp1, tmp2
> >   */
>
> I'm not too keen on the separate `needs_uaccess` and `label` arguments.
> We should be able to collapse those into a single argument by checking
> with .ifnc, e.g.
>
>         .macro op arg, fixup
>         .ifnc \fixup,
>         do_thing_with \fixup
>         .endif
>         .endm
>
> ... which I think would make things clearer overall.
>
> > -     .macro invalidate_icache_by_line start, end, tmp1, tmp2, label
> > +     .macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
> >       icache_line_size \tmp1, \tmp2
> >       sub     \tmp2, \tmp1, #1
> >       bic     \tmp2, \start, \tmp2
> >  9997:
> > +     .if     \needs_uaccess
> >  USER(\label, ic      ivau, \tmp2)                    // invalidate I line PoU
> > +     .else
> > +     ic      ivau, \tmp2
> > +     .endif
> >       add     \tmp2, \tmp2, \tmp1
> >       cmp     \tmp2, \end
> >       b.lo    9997b
>
> I'm also not keen on duplicating the instruction here. I reckon what we
> should do is add a conditional extable macro:
>
>         .macro _cond_extable insn, fixup
>         .ifnc \fixup,
>         _asm_extable \insn, \fixup
>         .endif
>         .endm
>
> ... which'd allow us to do:
>
>         .macro invalidate_icache_by_line start, end, tmp1, tmp2, fixup
>         icache_line_size \tmp1, \tmp2
>         sub     \tmp2, \tmp1, #1
>         bic     \tmp2, \start, \tmp2
> .Licache_op\@:
>         ic      ivau, \tmp2                     // invalidate I line PoU
>         add     \tmp2, \tmp2, \tmp1
>         cmp     \tmp2, \end
>         b.lo    .Licache_op\@
>         dsb     ish
>         isb
>
>         _cond_extable .Licache_op\@, \fixup
>         .endm
>
> ... which I think is clearer.
>
> We could do likewise in dcache_by_line_op, and with some refactoring we
> could remove the logic that we have to currently duplicate.
>
> I pushed a couple of preparatory patches for that to:
>
>   https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/cleanups/cache
>   git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/cleanups/cache
>
> ... in case you felt like taking those as-is.

Thanks for this, and for the other comments and suggestions. I'll take
your patches, as well as all the fixes you suggested in the next
round.

Cheers,
/fuad

> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 2d881f34dd9d..092f73acdf9a 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -15,30 +15,20 @@
> >  #include <asm/asm-uaccess.h>
> >
> >  /*
> > - *   flush_icache_range(start,end)
> > + *   __flush_cache_range(start,end) [needs_uaccess]
> >   *
> >   *   Ensure that the I and D caches are coherent within specified region.
> >   *   This is typically used when code has been written to a memory region,
> >   *   and will be executed.
> >   *
> > - *   - start   - virtual start address of region
> > - *   - end     - virtual end address of region
> > + *   - start         - virtual start address of region
> > + *   - end           - virtual end address of region
> > + *   - needs_uaccess - (macro parameter) might access user space memory
> >   */
> > -SYM_FUNC_START(__flush_icache_range)
> > -     /* FALLTHROUGH */
> > -
> > -/*
> > - *   __flush_cache_user_range(start,end)
> > - *
> > - *   Ensure that the I and D caches are coherent within specified region.
> > - *   This is typically used when code has been written to a memory region,
> > - *   and will be executed.
> > - *
> > - *   - start   - virtual start address of region
> > - *   - end     - virtual end address of region
> > - */
> > -SYM_FUNC_START(__flush_cache_user_range)
> > +.macro       __flush_cache_range, needs_uaccess
> > +     .if     \needs_uaccess
> >       uaccess_ttbr0_enable x2, x3, x4
> > +     .endif
> >  alternative_if ARM64_HAS_CACHE_IDC
> >       dsb     ishst
> >       b       7f
> > @@ -47,7 +37,11 @@ alternative_else_nop_endif
> >       sub     x3, x2, #1
> >       bic     x4, x0, x3
> >  1:
> > +     .if     \needs_uaccess
> >  user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > +     .else
> > +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > +     .endif
> >       add     x4, x4, x2
> >       cmp     x4, x1
> >       b.lo    1b
> > @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
> >       isb
> >       b       8f
> >  alternative_else_nop_endif
> > -     invalidate_icache_by_line x0, x1, x2, x3, 9f
> > +     invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
> >  8:   mov     x0, #0
> >  1:
> > +     .if     \needs_uaccess
> >       uaccess_ttbr0_disable x1, x2
> > +     .endif
> >       ret
> > +
> > +     .if     \needs_uaccess
> >  9:
> >       mov     x0, #-EFAULT
> >       b       1b
> > +     .endif
> > +.endm
>
> As above, I think we should reduce this to the core logic, moving the
> ttbr manipulation and fixup handler inline in __flush_cache_user_range.
>
> For clarity, I'd also like to leave the RETs out of the macro, since
> that's required for the fixup handling anyway, and it generally makes
> the control flow clearer at the function definition.
>
> > +/*
> > + *   flush_icache_range(start,end)
> > + *
> > + *   Ensure that the I and D caches are coherent within specified region.
> > + *   This is typically used when code has been written to a memory region,
> > + *   and will be executed.
> > + *
> > + *   - start   - virtual start address of region
> > + *   - end     - virtual end address of region
> > + */
> > +SYM_FUNC_START(__flush_icache_range)
> > +     __flush_cache_range needs_uaccess=0
> >  SYM_FUNC_END(__flush_icache_range)
>
> ...so with the suggestions above, this could be:
>
> SYM_FUNC_START(__flush_icache_range)
>         __flush_cache_range
>         ret
> SYM_FUNC_END(__flush_icache_range)
>
> > +/*
> > + *   __flush_cache_user_range(start,end)
> > + *
> > + *   Ensure that the I and D caches are coherent within specified region.
> > + *   This is typically used when code has been written to a memory region,
> > + *   and will be executed.
> > + *
> > + *   - start   - virtual start address of region
> > + *   - end     - virtual end address of region
> > + */
> > +SYM_FUNC_START(__flush_cache_user_range)
> > +     __flush_cache_range needs_uaccess=1
> >  SYM_FUNC_END(__flush_cache_user_range)
>
> ... this could be:
>
> SYM_FUNC_START(__flush_cache_user_range)
>         uaccess_ttbr0_enable x2, x3, x4
>         __flush_cache_range 2f
> 1:
>         uaccess_ttbr0_disable x1, x2
>         ret
> 2:
>         mov     x0, #-EFAULT
>         b       1b
> SYM_FUNC_END(__flush_cache_user_range)
>
> >  /*
> > @@ -86,7 +112,7 @@ alternative_else_nop_endif
> >
> >       uaccess_ttbr0_enable x2, x3, x4
> >
> > -     invalidate_icache_by_line x0, x1, x2, x3, 2f
> > +     invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
>
> ... and this wouldn't need to change.
>
> Thanks,
> Mark.
>
> >       mov     x0, xzr
> >  1:
> >       uaccess_ttbr0_disable x1, x2
> > --
> > 2.31.1.751.gd2f1c929bd-goog
> >


* Re: [PATCH v2 03/16] arm64: Do not enable uaccess for invalidate_icache_range
  2021-05-18 15:36   ` Mark Rutland
@ 2021-05-19 16:26     ` Fuad Tabba
  0 siblings, 0 replies; 29+ messages in thread
From: Fuad Tabba @ 2021-05-19 16:26 UTC (permalink / raw)
  To: Mark Rutland
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, Ard Biesheuvel,
	James Morse, Alexandru Elisei, Suzuki K Poulose, Robin Murphy

On Tue, May 18, 2021 at 4:36 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Mon, May 17, 2021 at 08:51:11AM +0100, Fuad Tabba wrote:
> > invalidate_icache_range() works on the kernel linear map, and
> > doesn't need uaccess. Remove the code that toggles
> > uaccess_ttbr0_enable, as well as the code that emits an entry
> > into the exception table (via the macro
> > invalidate_icache_by_line).
> >
> > Change the return type of invalidate_icache_range() from int (which
> > used to indicate a fault) to void, since it doesn't need uaccess
> > and won't fault. Note that the return value was never checked by any
> > of the callers.
> >
> > No functional change intended.
> > Possible performance impact due to the reduced number of
> > instructions.
> >
> > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: Will Deacon <will@kernel.org>
> > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/cacheflush.h |  2 +-
> >  arch/arm64/mm/cache.S               | 11 +----------
> >  2 files changed, 2 insertions(+), 11 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > index 52e5c1623224..a586afa84172 100644
> > --- a/arch/arm64/include/asm/cacheflush.h
> > +++ b/arch/arm64/include/asm/cacheflush.h
> > @@ -57,7 +57,7 @@
> >   *           - size   - region size
> >   */
> >  extern void __flush_icache_range(unsigned long start, unsigned long end);
> > -extern int  invalidate_icache_range(unsigned long start, unsigned long end);
> > +extern void invalidate_icache_range(unsigned long start, unsigned long end);
> >  extern void __flush_dcache_area(void *addr, size_t len);
> >  extern void __inval_dcache_area(void *addr, size_t len);
> >  extern void __clean_dcache_area_poc(void *addr, size_t len);
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 092f73acdf9a..6babaaf34f17 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -105,21 +105,12 @@ SYM_FUNC_END(__flush_cache_user_range)
> >   */
> >  SYM_FUNC_START(invalidate_icache_range)
> >  alternative_if ARM64_HAS_CACHE_DIC
> > -     mov     x0, xzr
> >       isb
> >       ret
> >  alternative_else_nop_endif
> >
> > -     uaccess_ttbr0_enable x2, x3, x4
> > -
> > -     invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
> > -     mov     x0, xzr
> > -1:
> > -     uaccess_ttbr0_disable x1, x2
> > +     invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
>
> This all looks good to me, but I'd prefer that we didn't have to pass a
> fake label. I think if you use the approach I suggested on the prior
> patch, this can be:
>
>         invalidate_icache_by_line x0, x1, x2, x3
>
> ... which I think makes it clearer that there's no fixup.

Got it!

Thanks,
/fuad

> Thanks,
> Mark.
>
> >       ret
> > -2:
> > -     mov     x0, #-EFAULT
> > -     b       1b
> >  SYM_FUNC_END(invalidate_icache_range)
> >
> >  /*
> > --
> > 2.31.1.751.gd2f1c929bd-goog
> >


* Re: [PATCH v2 05/16] arm64: Remove uaccess toggle from __flush_cache_range macro
  2021-05-18 16:00   ` Mark Rutland
@ 2021-05-19 16:27     ` Fuad Tabba
  0 siblings, 0 replies; 29+ messages in thread
From: Fuad Tabba @ 2021-05-19 16:27 UTC (permalink / raw)
  To: Mark Rutland
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, Ard Biesheuvel,
	James Morse, Alexandru Elisei, Suzuki K Poulose, Robin Murphy

On Tue, May 18, 2021 at 5:00 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Mon, May 17, 2021 at 08:51:13AM +0100, Fuad Tabba wrote:
> > The uaccess toggle isn't part of the cache maintenance operation.
> > Move it directly to where it's needed.
> >
> > No functional change intended.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/mm/cache.S | 8 ++------
> >  1 file changed, 2 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 6babaaf34f17..d74b20cd6449 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -26,9 +26,6 @@
> >   *   - needs_uaccess - (macro parameter) might access user space memory
> >   */
> >  .macro       __flush_cache_range, needs_uaccess
> > -     .if     \needs_uaccess
> > -     uaccess_ttbr0_enable x2, x3, x4
> > -     .endif
> >  alternative_if ARM64_HAS_CACHE_IDC
> >       dsb     ishst
> >       b       7f
> > @@ -55,9 +52,6 @@ alternative_else_nop_endif
> >       invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
> >  8:   mov     x0, #0
> >  1:
> > -     .if     \needs_uaccess
> > -     uaccess_ttbr0_disable x1, x2
> > -     .endif
> >       ret
> >
> >       .if     \needs_uaccess
> > @@ -92,7 +86,9 @@ SYM_FUNC_END(__flush_icache_range)
> >   *   - end     - virtual end address of region
> >   */
> >  SYM_FUNC_START(__flush_cache_user_range)
> > +     uaccess_ttbr0_enable x2, x3, x4
> >       __flush_cache_range needs_uaccess=1
> > +     uaccess_ttbr0_disable x1, x2
> >  SYM_FUNC_END(__flush_cache_user_range)
>
> The RET is still in the __flush_cache_range macro, so I don't think
> we'll ever execute the uaccess_ttbr0_disable step here.

Yes. Like you suggested earlier, it's moving out of the macro.

Thanks,
/fuad

> Mark.
>
> >
> >  /*
> > --
> > 2.31.1.751.gd2f1c929bd-goog
> >


* Re: [PATCH v2 04/16] arm64: Downgrade flush_icache_range to invalidate
  2021-05-18 16:06       ` Mark Rutland
@ 2021-05-19 16:29         ` Fuad Tabba
  0 siblings, 0 replies; 29+ messages in thread
From: Fuad Tabba @ 2021-05-19 16:29 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Ard Biesheuvel, Linux ARM, Will Deacon, Catalin Marinas,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Robin Murphy

On Tue, May 18, 2021 at 5:06 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Tue, May 18, 2021 at 06:02:32PM +0200, Ard Biesheuvel wrote:
> > On Tue, 18 May 2021 at 17:53, Mark Rutland <mark.rutland@arm.com> wrote:
> > >
> > > On Mon, May 17, 2021 at 08:51:12AM +0100, Fuad Tabba wrote:
> > > > Since __flush_dcache_area is called right before,
> > > > invalidate_icache_range is sufficient in this case.
> > > >
> > > > Rewrite the comment to better explain the rationale behind the
> > > > cache maintenance operations used here.
> > > >
> > > > No functional change intended.
> > > > Possible performance impact due to invalidating only the icache
> > > > rather than invalidating and cleaning both caches.
> > > >
> > > > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > > > Reported-by: Will Deacon <will@kernel.org>
> > > > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > > ---
> > > >  arch/arm64/kernel/machine_kexec.c | 10 +++++++---
> > > >  1 file changed, 7 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > > > index 90a335c74442..ecd8915e02e1 100644
> > > > --- a/arch/arm64/kernel/machine_kexec.c
> > > > +++ b/arch/arm64/kernel/machine_kexec.c
> > > > @@ -68,10 +68,14 @@ int machine_kexec_post_load(struct kimage *kimage)
> > > >       kimage->arch.kern_reloc = __pa(reloc_code);
> > > >       kexec_image_info(kimage);
> > > >
> > > > -     /* Flush the reloc_code in preparation for its execution. */
> > > > +     /*
> > > > +      * For execution with the MMU off and I-cache on, reloc_code needs to be
> > > > +      * cleaned to the PoC and invalidated from the I-cache.
> > > > +      */
> > >
> > > Minor nit, but the I-cache is *always* on (SCTLR.I affects the
> > > attributes used for fetches into the I-caches), so it would be slightly
> > > better to drop the "and I-cache on" words.
> >
> > This may be true, but it may not be obvious to someone reading the
> > 'MMU off' part of the comment. Bottom line is that we will be running
> > in a mode where we may hit in the I-cache, so it needs to be
> > invalidated. If we miss in the I-cache, we should fetch from the PoC,
> > hence the D-cache clean.
>
> No disagreement there. I literally meant dropping the words "and I-cache
> on", leaving the whole comment as:
>
>         /*
>          * For execution with the MMU off, reloc_code needs to be
>          * cleaned to the PoC and invalidated from the I-cache.
>          */

Will fix this too.

Thanks,
/fuad

> Thanks,
> Mark.


* Re: [PATCH v2 02/16] arm64: Do not enable uaccess for flush_icache_range
  2021-05-19 16:25     ` Fuad Tabba
@ 2021-05-20 10:47       ` Mark Rutland
  0 siblings, 0 replies; 29+ messages in thread
From: Mark Rutland @ 2021-05-20 10:47 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, Ard Biesheuvel,
	James Morse, Alexandru Elisei, Suzuki K Poulose, Robin Murphy

On Wed, May 19, 2021 at 05:25:37PM +0100, Fuad Tabba wrote:
> On Tue, May 18, 2021 at 4:33 PM Mark Rutland <mark.rutland@arm.com> wrote:
> > I pushed a couple of preparatory patches for that to:
> >
> >   https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/cleanups/cache
> >   git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git arm64/cleanups/cache
> >
> > ... in case you felt like taking those as-is.
> 
> Thanks for this, and for the other comments and suggestions. I'll take
> your patches, as well as all the fixes you suggested in the next
> round.

Great! Thanks again for working on this; it's really nice to see all
this getting cleaned up.

Thanks,
Mark.


Thread overview: 29+ messages
2021-05-17  7:51 [PATCH v2 00/16] Tidy up cache.S Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 01/16] arm64: Apply errata to swsusp_arch_suspend_exit Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 02/16] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
2021-05-18 15:33   ` Mark Rutland
2021-05-19 16:25     ` Fuad Tabba
2021-05-20 10:47       ` Mark Rutland
2021-05-17  7:51 ` [PATCH v2 03/16] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
2021-05-18 15:36   ` Mark Rutland
2021-05-19 16:26     ` Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 04/16] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
2021-05-18 15:53   ` Mark Rutland
2021-05-18 16:02     ` Ard Biesheuvel
2021-05-18 16:06       ` Mark Rutland
2021-05-19 16:29         ` Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 05/16] arm64: Remove uaccess toggle from __flush_cache_range macro Fuad Tabba
2021-05-18 16:00   ` Mark Rutland
2021-05-19 16:27     ` Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 06/16] arm64: Move documentation of dcache_by_line_op Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 07/16] arm64: Fix comments to refer to correct function __flush_icache_range Fuad Tabba
2021-05-18 16:03   ` Mark Rutland
2021-05-17  7:51 ` [PATCH v2 08/16] arm64: __inval_dcache_area to take end parameter instead of size Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 09/16] arm64: dcache_by_line_op " Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 10/16] arm64: __flush_dcache_area " Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 11/16] arm64: __clean_dcache_area_poc " Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 12/16] arm64: __clean_dcache_area_pop " Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 13/16] arm64: __clean_dcache_area_pou " Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 14/16] arm64: sync_icache_aliases " Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 15/16] arm64: Fix cache maintenance function comments Fuad Tabba
2021-05-17  7:51 ` [PATCH v2 16/16] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
