linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v1 00/13] Tidy up cache.S
@ 2021-05-11 14:42 Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
                   ` (12 more replies)
  0 siblings, 13 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

Hi,

As has been noted before [1], the code in cache.S isn't very tidy. Some of its
functions accept address ranges by start and size, whereas others with similar
names do so by start and end. This has resulted in at least one bug [2].
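
To make the hazard concrete (an illustrative sketch, not code from this
series): before these patches, two commonly paired calls use different
conventions, so it is easy to pass a size where an end address is
expected:

	__flush_dcache_area(addr, size);		/* start + size */
	flush_icache_range(start, start + size);	/* start + end  */

Forgetting the `start +` on the second call (or adding one to the
first) is exactly the kind of mix-up behind [2].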

Moreover, invalidate_icache_range and __flush_icache_range toggle uaccess,
which isn't necessary because they work on the kernel linear map [3].

This patch series attempts to fix these issues, as well as tidy up the code in
general to reduce ambiguity and make it consistent with Arm terminology. No
functional change intended in this series.

This series is based on v5.13-rc1. You can find the applied series here [4].

Cheers,
/fuad

[1] https://lore.kernel.org/linux-arch/20200511075115.GA16134@willie-the-truck/
[2] https://lore.kernel.org/linux-arch/20200510075510.987823-3-hch@lst.de/
[3] https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
[4] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/fixcache-5.13

Fuad Tabba (13):
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Move documentation of dcache_by_line_op
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: Fix cache maintenance function comments
  arm64: Rename arm64-internal cache maintenance functions

 arch/arm64/include/asm/arch_gicv3.h |   3 +-
 arch/arm64/include/asm/assembler.h  |  52 ++++-----
 arch/arm64/include/asm/cacheflush.h |  69 +++++++-----
 arch/arm64/include/asm/efi.h        |   2 +-
 arch/arm64/include/asm/kvm_mmu.h    |   7 +-
 arch/arm64/kernel/alternative.c     |   2 +-
 arch/arm64/kernel/efi-entry.S       |   9 +-
 arch/arm64/kernel/head.S            |  13 +--
 arch/arm64/kernel/hibernate.c       |  20 ++--
 arch/arm64/kernel/idreg-override.c  |   3 +-
 arch/arm64/kernel/image-vars.h      |   2 +-
 arch/arm64/kernel/insn.c            |   2 +-
 arch/arm64/kernel/kaslr.c           |  12 ++-
 arch/arm64/kernel/machine_kexec.c   |  25 +++--
 arch/arm64/kernel/probes/uprobes.c  |   2 +-
 arch/arm64/kernel/smp.c             |   8 +-
 arch/arm64/kernel/smp_spin_table.c  |   7 +-
 arch/arm64/kernel/sys_compat.c      |   2 +-
 arch/arm64/kvm/arm.c                |   2 +-
 arch/arm64/kvm/hyp/nvhe/cache.S     |   4 +-
 arch/arm64/kvm/hyp/nvhe/setup.c     |   3 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c       |   2 +-
 arch/arm64/kvm/hyp/pgtable.c        |  13 ++-
 arch/arm64/lib/uaccess_flushcache.c |   4 +-
 arch/arm64/mm/cache.S               | 157 ++++++++++++++++------------
 arch/arm64/mm/flush.c               |  29 ++---
 26 files changed, 261 insertions(+), 193 deletions(-)


base-commit: 6efb943b8616ec53a5e444193dccf1af9ad627b5
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 15:22   ` Mark Rutland
  2021-05-11 16:53   ` Robin Murphy
  2021-05-11 14:42 ` [PATCH v1 02/13] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
                   ` (11 subsequent siblings)
  12 siblings, 2 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

__flush_icache_range works on the kernel linear map and doesn't
need uaccess. The uaccess handling in the existing code is a side
effect of sharing its implementation with __flush_cache_user_range
via fallthrough.

Instead of sharing the code by falling through, define a common
macro that both functions instantiate, with a parameter that
specifies whether user-space access is needed.
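
For illustration, a rough C analogue of the approach (not part of the
patch; the helper names are made up, and in the real assembly the flag
is resolved at assembly time via ".if \needs_uaccess", so the
kernel-only variant carries no uaccess code at all):

	static long __flush_cache_range_c(unsigned long start,
					  unsigned long end,
					  bool needs_uaccess)
	{
		if (needs_uaccess)
			uaccess_enable();		/* user variant only */
		clean_dcache_to_pou(start, end);	/* dc cvau loop */
		invalidate_icache(start, end);		/* ic ivau loop */
		if (needs_uaccess)
			uaccess_disable();
		return 0;
	}

	/* __flush_icache_range(s, e)     ~ __flush_cache_range_c(s, e, false) */
	/* __flush_cache_user_range(s, e) ~ __flush_cache_range_c(s, e, true)  */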

No functional change intended.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/assembler.h | 13 ++++--
 arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
 2 files changed, 54 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 8418c1bd8f04..6ff7a3a3b238 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -426,16 +426,21 @@ alternative_endif
  * Macro to perform an instruction cache maintenance for the interval
  * [start, end)
  *
- * 	start, end:	virtual addresses describing the region
- *	label:		A label to branch to on user fault.
- * 	Corrupts:	tmp1, tmp2
+ *	start, end:	virtual addresses describing the region
+ *	needs_uaccess:	might access user space memory
+ *	label:		label to branch to on user fault (if needs_uaccess)
+ *	Corrupts:	tmp1, tmp2
  */
-	.macro invalidate_icache_by_line start, end, tmp1, tmp2, label
+	.macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
 	icache_line_size \tmp1, \tmp2
 	sub	\tmp2, \tmp1, #1
 	bic	\tmp2, \start, \tmp2
 9997:
+	.if	\needs_uaccess
 USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
+	.else
+	ic	ivau, \tmp2
+	.endif
 	add	\tmp2, \tmp2, \tmp1
 	cmp	\tmp2, \end
 	b.lo	9997b
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 2d881f34dd9d..092f73acdf9a 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -15,30 +15,20 @@
 #include <asm/asm-uaccess.h>
 
 /*
- *	flush_icache_range(start,end)
+ *	__flush_cache_range(start,end) [needs_uaccess]
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
  *	and will be executed.
  *
- *	- start   - virtual start address of region
- *	- end     - virtual end address of region
+ *	- start   	- virtual start address of region
+ *	- end     	- virtual end address of region
+ *	- needs_uaccess - (macro parameter) might access user space memory
  */
-SYM_FUNC_START(__flush_icache_range)
-	/* FALLTHROUGH */
-
-/*
- *	__flush_cache_user_range(start,end)
- *
- *	Ensure that the I and D caches are coherent within specified region.
- *	This is typically used when code has been written to a memory region,
- *	and will be executed.
- *
- *	- start   - virtual start address of region
- *	- end     - virtual end address of region
- */
-SYM_FUNC_START(__flush_cache_user_range)
+.macro	__flush_cache_range, needs_uaccess
+	.if 	\needs_uaccess
 	uaccess_ttbr0_enable x2, x3, x4
+	.endif
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	b	7f
@@ -47,7 +37,11 @@ alternative_else_nop_endif
 	sub	x3, x2, #1
 	bic	x4, x0, x3
 1:
+	.if 	\needs_uaccess
 user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
+	.else
+alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
+	.endif
 	add	x4, x4, x2
 	cmp	x4, x1
 	b.lo	1b
@@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
 	isb
 	b	8f
 alternative_else_nop_endif
-	invalidate_icache_by_line x0, x1, x2, x3, 9f
+	invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
 8:	mov	x0, #0
 1:
+	.if	\needs_uaccess
 	uaccess_ttbr0_disable x1, x2
+	.endif
 	ret
+
+	.if 	\needs_uaccess
 9:
 	mov	x0, #-EFAULT
 	b	1b
+	.endif
+.endm
+
+/*
+ *	flush_icache_range(start,end)
+ *
+ *	Ensure that the I and D caches are coherent within specified region.
+ *	This is typically used when code has been written to a memory region,
+ *	and will be executed.
+ *
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
+ */
+SYM_FUNC_START(__flush_icache_range)
+	__flush_cache_range needs_uaccess=0
 SYM_FUNC_END(__flush_icache_range)
+
+/*
+ *	__flush_cache_user_range(start,end)
+ *
+ *	Ensure that the I and D caches are coherent within specified region.
+ *	This is typically used when code has been written to a memory region,
+ *	and will be executed.
+ *
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
+ */
+SYM_FUNC_START(__flush_cache_user_range)
+	__flush_cache_range needs_uaccess=1
 SYM_FUNC_END(__flush_cache_user_range)
 
 /*
@@ -86,7 +112,7 @@ alternative_else_nop_endif
 
 	uaccess_ttbr0_enable x2, x3, x4
 
-	invalidate_icache_by_line x0, x1, x2, x3, 2f
+	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
 	mov	x0, xzr
 1:
 	uaccess_ttbr0_disable x1, x2
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 02/13] arm64: Do not enable uaccess for invalidate_icache_range
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 15:34   ` Mark Rutland
  2021-05-11 14:42 ` [PATCH v1 03/13] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

invalidate_icache_range() works on the kernel linear map and
doesn't need uaccess. Remove the uaccess_ttbr0_enable/
uaccess_ttbr0_disable toggling, as well as the code that emits an
entry into the exception table (via the invalidate_icache_by_line
macro).

No functional change intended.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/mm/cache.S               | 11 +----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 52e5c1623224..a586afa84172 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -57,7 +57,7 @@
  *		- size   - region size
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
-extern int  invalidate_icache_range(unsigned long start, unsigned long end);
+extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
 extern void __inval_dcache_area(void *addr, size_t len);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 092f73acdf9a..6babaaf34f17 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -105,21 +105,12 @@ SYM_FUNC_END(__flush_cache_user_range)
  */
 SYM_FUNC_START(invalidate_icache_range)
 alternative_if ARM64_HAS_CACHE_DIC
-	mov	x0, xzr
 	isb
 	ret
 alternative_else_nop_endif
 
-	uaccess_ttbr0_enable x2, x3, x4
-
-	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
-	mov	x0, xzr
-1:
-	uaccess_ttbr0_disable x1, x2
+	invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
 	ret
-2:
-	mov	x0, #-EFAULT
-	b	1b
 SYM_FUNC_END(invalidate_icache_range)
 
 /*
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 03/13] arm64: Downgrade flush_icache_range to invalidate
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 02/13] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:53   ` Ard Biesheuvel
  2021-05-11 14:42 ` [PATCH v1 04/13] arm64: Move documentation of dcache_by_line_op Fuad Tabba
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

In machine_kexec_post_load(), the __flush_dcache_area() call
immediately before has already cleaned and invalidated the
relocation code to the PoC, so invalidating the I-cache with
invalidate_icache_range() is sufficient; the D-cache clean that
flush_icache_range() would also perform is redundant here.
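
For illustration (not part of the patch; the comments reflect what
each routine does on arm64):

	/* before */
	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
	flush_icache_range(start, end);		/* cleans D to PoU (redundant)
						 * and invalidates I */
	/* after */
	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
	invalidate_icache_range(start, end);	/* invalidates I only */

Cleaning to the PoC already pushes the new instructions past the PoU,
so the D-cache side needs no further maintenance.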

No functional change intended.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kernel/machine_kexec.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 90a335c74442..001ffbfc645b 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -70,8 +70,9 @@ int machine_kexec_post_load(struct kimage *kimage)
 
 	/* Flush the reloc_code in preparation for its execution. */
 	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
-	flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
-			   arm64_relocate_new_kernel_size);
+	invalidate_icache_range((uintptr_t)reloc_code,
+				(uintptr_t)reloc_code +
+					arm64_relocate_new_kernel_size);
 
 	return 0;
 }
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 04/13] arm64: Move documentation of dcache_by_line_op
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (2 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 03/13] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 05/13] arm64: __inval_dcache_area to take end parameter instead of size Fuad Tabba
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

The comment describing the dcache_by_line_op macro is placed above
a different macro (__dcache_op_workaround_clean_cache) rather than
the one it describes, which is confusing. Move it so that it
immediately precedes dcache_by_line_op.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/assembler.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 6ff7a3a3b238..2bcfc5fdfafd 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -375,6 +375,14 @@ alternative_cb_end
 	bfi	\tcr, \tmp0, \pos, #3
 	.endm
 
+	.macro __dcache_op_workaround_clean_cache, op, kaddr
+alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
+	dc	\op, \kaddr
+alternative_else
+	dc	civac, \kaddr
+alternative_endif
+	.endm
+
 /*
  * Macro to perform a data cache maintenance for the interval
  * [kaddr, kaddr + size)
@@ -385,14 +393,6 @@ alternative_cb_end
  * 	size:		size of the region
  * 	Corrupts:	kaddr, size, tmp1, tmp2
  */
-	.macro __dcache_op_workaround_clean_cache, op, kaddr
-alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
-	dc	\op, \kaddr
-alternative_else
-	dc	civac, \kaddr
-alternative_endif
-	.endm
-
 	.macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
 	dcache_line_size \tmp1, \tmp2
 	add	\size, \kaddr, \size
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 05/13] arm64: __inval_dcache_area to take end parameter instead of size
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (3 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 04/13] arm64: Move documentation of dcache_by_line_op Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 06/13] arm64: dcache_by_line_op " Fuad Tabba
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

Because the code is shared with __dma_inv_area, this changes the
parameters for that function as well. However, __dma_inv_area is
local to cache.S, so no other callers are affected.
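
The call-site conversion is mechanical; for example, mirroring the
arch_invalidate_pmem() hunk below:

	void arch_invalidate_pmem(void *addr, size_t size)
	{
		/* end = start + size, computed once by the caller */
		__inval_dcache_area((unsigned long)addr,
				    (unsigned long)addr + size);
	}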

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/head.S            |  5 +----
 arch/arm64/mm/cache.S               | 16 +++++++++-------
 arch/arm64/mm/flush.c               |  2 +-
 4 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index a586afa84172..157234706817 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -59,7 +59,7 @@
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
-extern void __inval_dcache_area(void *addr, size_t len);
+extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 96873dfa67fd..8df0ac8d9123 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -117,7 +117,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	dmb	sy				// needed before dc ivac with
 						// MMU off
 
-	mov	x1, #0x20			// 4 x 8 bytes
+	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	__inval_dcache_area		// tail call
 SYM_CODE_END(preserve_boot_args)
 
@@ -268,7 +268,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 */
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	/*
@@ -382,12 +381,10 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 
 	adrp	x0, idmap_pg_dir
 	adrp	x1, idmap_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	ret	x28
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 6babaaf34f17..64507944b461 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -146,25 +146,24 @@ alternative_else_nop_endif
 SYM_FUNC_END(__clean_dcache_area_pou)
 
 /*
- *	__inval_dcache_area(kaddr, size)
+ *	__inval_dcache_area(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are invalidated. Any partial lines at the ends of the interval are
  *	also cleaned to PoC to prevent data loss.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - kernel start address of region
+ *	- end     - kernel end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_inv_area)
 SYM_FUNC_START_PI(__inval_dcache_area)
 	/* FALLTHROUGH */
 
 /*
- *	__dma_inv_area(start, size)
+ *	__dma_inv_area(start, end)
  *	- start   - virtual start address of region
- *	- size    - size in question
+ *	- end     - virtual end address of region
  */
-	add	x1, x1, x0
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
 	tst	x1, x3				// end cache line aligned?
@@ -245,8 +244,10 @@ SYM_FUNC_END_PI(__dma_flush_area)
  *	- dir	- DMA direction
  */
 SYM_FUNC_START_PI(__dma_map_area)
+	add	x1, x0, x1
 	cmp	w2, #DMA_FROM_DEVICE
 	b.eq	__dma_inv_area
+	sub	x1, x1, x0
 	b	__dma_clean_area
 SYM_FUNC_END_PI(__dma_map_area)
 
@@ -257,6 +258,7 @@ SYM_FUNC_END_PI(__dma_map_area)
  *	- dir	- DMA direction
  */
 SYM_FUNC_START_PI(__dma_unmap_area)
+	add	x1, x0, x1
 	cmp	w2, #DMA_TO_DEVICE
 	b.ne	__dma_inv_area
 	ret
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index ac485163a4a7..4e3505c2bea6 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -88,7 +88,7 @@ EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
 void arch_invalidate_pmem(void *addr, size_t size)
 {
-	__inval_dcache_area(addr, size);
+	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 06/13] arm64: dcache_by_line_op to take end parameter instead of size
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (4 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 05/13] arm64: __inval_dcache_area to take end parameter instead of size Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 07/13] arm64: __flush_dcache_area " Fuad Tabba
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.
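
For reference, a rough C rendering of what dcache_by_line_op does with
the new [start, end) interface (dcache_line_size(), dc_op() and dsb()
stand in for the real CTR_EL0 read, dc instruction and barrier):

	static void dcache_by_line(unsigned long start, unsigned long end)
	{
		unsigned long line = dcache_line_size();

		start &= ~(line - 1);	/* align down to a cache line */
		do {
			dc_op(start);	/* dc cvau/cvac/cvap/civac/... */
			start += line;
		} while (start < end);
		dsb();			/* complete the maintenance */
	}

Previously the macro computed end = kaddr + size itself; now each
caller passes end directly, which is why the "add x1, x0, x1" moves
out to the call sites in the hunks below.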

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/assembler.h | 27 +++++++++++++--------------
 arch/arm64/kvm/hyp/nvhe/cache.S    |  1 +
 arch/arm64/mm/cache.S              |  5 +++++
 3 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 2bcfc5fdfafd..3f75a600e6c0 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -385,39 +385,38 @@ alternative_endif
 
 /*
  * Macro to perform a data cache maintenance for the interval
- * [kaddr, kaddr + size)
+ * [start, end)
  *
  * 	op:		operation passed to dc instruction
 * 	domain:		domain used in dsb instruction
- * 	kaddr:		starting virtual address of the region
- * 	size:		size of the region
- * 	Corrupts:	kaddr, size, tmp1, tmp2
+ * 	start:		starting virtual address of the region
+ * 	end:		end virtual address of the region
+ * 	Corrupts:	start, end, tmp1, tmp2
  */
-	.macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
+	.macro dcache_by_line_op op, domain, start, end, tmp1, tmp2
 	dcache_line_size \tmp1, \tmp2
-	add	\size, \kaddr, \size
 	sub	\tmp2, \tmp1, #1
-	bic	\kaddr, \kaddr, \tmp2
+	bic	\start, \start, \tmp2
 9998:
 	.ifc	\op, cvau
-	__dcache_op_workaround_clean_cache \op, \kaddr
+	__dcache_op_workaround_clean_cache \op, \start
 	.else
 	.ifc	\op, cvac
-	__dcache_op_workaround_clean_cache \op, \kaddr
+	__dcache_op_workaround_clean_cache \op, \start
 	.else
 	.ifc	\op, cvap
-	sys	3, c7, c12, 1, \kaddr	// dc cvap
+	sys	3, c7, c12, 1, \start	// dc cvap
 	.else
 	.ifc	\op, cvadp
-	sys	3, c7, c13, 1, \kaddr	// dc cvadp
+	sys	3, c7, c13, 1, \start	// dc cvadp
 	.else
-	dc	\op, \kaddr
+	dc	\op, \start
 	.endif
 	.endif
 	.endif
 	.endif
-	add	\kaddr, \kaddr, \tmp1
-	cmp	\kaddr, \size
+	add	\start, \start, \tmp1
+	cmp	\start, \end
 	b.lo	9998b
 	dsb	\domain
 	.endm
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 36cef6915428..3bcfa3cac46f 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -8,6 +8,7 @@
 #include <asm/alternative.h>
 
 SYM_FUNC_START_PI(__flush_dcache_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 64507944b461..c801ebaf418f 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -123,6 +123,7 @@ SYM_FUNC_END(invalidate_icache_range)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__flush_dcache_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
@@ -141,6 +142,7 @@ alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
+	add	x1, x0, x1
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
 SYM_FUNC_END(__clean_dcache_area_pou)
@@ -202,6 +204,7 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
  *	- start   - virtual start address of region
  *	- size    - size in question
  */
+	add	x1, x0, x1
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_poc)
@@ -220,6 +223,7 @@ SYM_FUNC_START_PI(__clean_dcache_area_pop)
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
+	add	x1, x0, x1
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_pop)
@@ -233,6 +237,7 @@ SYM_FUNC_END_PI(__clean_dcache_area_pop)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__dma_flush_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__dma_flush_area)
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 07/13] arm64: __flush_dcache_area to take end parameter instead of size
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (5 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 06/13] arm64: dcache_by_line_op " Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 08/13] arm64: __clean_dcache_area_poc " Fuad Tabba
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.
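
Note that the (address, length) convenience interfaces survive at the
macro level; the conversion to [start, end) happens once, for example
(as in this patch):

	#define kvm_flush_dcache_to_poc(a,l)	\
		__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))

so kvm_flush_dcache_to_poc()/gic_flush_dcache_to_poc() users are
unchanged.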

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/arch_gicv3.h |  3 ++-
 arch/arm64/include/asm/cacheflush.h |  8 ++++----
 arch/arm64/include/asm/efi.h        |  2 +-
 arch/arm64/include/asm/kvm_mmu.h    |  3 ++-
 arch/arm64/kernel/hibernate.c       | 18 +++++++++++-------
 arch/arm64/kernel/idreg-override.c  |  3 ++-
 arch/arm64/kernel/kaslr.c           | 12 +++++++++---
 arch/arm64/kernel/machine_kexec.c   | 20 +++++++++++++-------
 arch/arm64/kernel/smp.c             |  8 ++++++--
 arch/arm64/kernel/smp_spin_table.c  |  7 ++++---
 arch/arm64/kvm/hyp/nvhe/cache.S     |  1 -
 arch/arm64/kvm/hyp/nvhe/setup.c     |  3 ++-
 arch/arm64/kvm/hyp/pgtable.c        | 13 ++++++++++---
 arch/arm64/mm/cache.S               |  9 ++++-----
 14 files changed, 70 insertions(+), 40 deletions(-)

diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index 934b9be582d2..ed1cc9d8e6df 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -124,7 +124,8 @@ static inline u32 gic_read_rpr(void)
 #define gic_read_lpir(c)		readq_relaxed(c)
 #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
 
-#define gic_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+#define gic_flush_dcache_to_poc(a,l)	\
+	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
 
 #define gits_read_baser(c)		readq_relaxed(c)
 #define gits_write_baser(v, c)		writeq_relaxed(v, c)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 157234706817..695f88864784 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -50,15 +50,15 @@
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	__flush_dcache_area(kaddr, size)
+ *	__flush_dcache_area(start, end)
  *
  *		Ensure that the data held in page is written back.
- *		- kaddr  - page address
- *		- size   - region size
+ *		- start  - virtual start address
+ *		- end    - virtual end address
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
-extern void __flush_dcache_area(void *addr, size_t len);
+extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 3578aba9c608..0ae2397076fd 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
 
 static inline void efi_capsule_flush_cache_range(void *addr, int size)
 {
-	__flush_dcache_area(addr, size);
+	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
 }
 
 #endif /* _ASM_EFI_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 25ed956f9af1..33293d5855af 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -180,7 +180,8 @@ static inline void *__kvm_vector_slot2addr(void *base,
 
 struct kvm;
 
-#define kvm_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+#define kvm_flush_dcache_to_poc(a,l)	\
+	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index b1cef371df2b..b40ddce71507 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -240,8 +240,6 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	return 0;
 }
 
-#define dcache_clean_range(start, end)	__flush_dcache_area(start, (end - start))
-
 #ifdef CONFIG_ARM64_MTE
 
 static DEFINE_XARRAY(mte_pages);
@@ -383,13 +381,18 @@ int swsusp_arch_suspend(void)
 		ret = swsusp_save();
 	} else {
 		/* Clean kernel core startup/idle code to PoC*/
-		dcache_clean_range(__mmuoff_data_start, __mmuoff_data_end);
-		dcache_clean_range(__idmap_text_start, __idmap_text_end);
+		__flush_dcache_area((unsigned long)__mmuoff_data_start,
+				    (unsigned long)__mmuoff_data_end);
+		__flush_dcache_area((unsigned long)__idmap_text_start,
+				    (unsigned long)__idmap_text_end);
 
 		/* Clean kvm setup code to PoC? */
 		if (el2_reset_needed()) {
-			dcache_clean_range(__hyp_idmap_text_start, __hyp_idmap_text_end);
-			dcache_clean_range(__hyp_text_start, __hyp_text_end);
+			__flush_dcache_area(
+				(unsigned long)__hyp_idmap_text_start,
+				(unsigned long)__hyp_idmap_text_end);
+			__flush_dcache_area((unsigned long)__hyp_text_start,
+					    (unsigned long)__hyp_text_end);
 		}
 
 		swsusp_mte_restore_tags();
@@ -474,7 +477,8 @@ int swsusp_arch_resume(void)
 	 * The hibernate exit text contains a set of el2 vectors, that will
 	 * be executed at el2 with the mmu off in order to reload hyp-stub.
 	 */
-	__flush_dcache_area(hibernate_exit, exit_size);
+	__flush_dcache_area((unsigned long)hibernate_exit,
+			    (unsigned long)hibernate_exit + exit_size);
 
 	/*
 	 * KASLR will cause the el2 vectors to be in a different location in
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index e628c8ce1ffe..3dd515baf526 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -237,7 +237,8 @@ asmlinkage void __init init_feature_override(void)
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		if (regs[i]->override)
-			__flush_dcache_area(regs[i]->override,
+			__flush_dcache_area((unsigned long)regs[i]->override,
+					    (unsigned long)regs[i]->override +
 					    sizeof(*regs[i]->override));
 	}
 }
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 341342b207f6..49cccd03cb37 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -72,7 +72,9 @@ u64 __init kaslr_early_init(void)
 	 * we end up running with module randomization disabled.
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
-	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
+	__flush_dcache_area((unsigned long)&module_alloc_base,
+			    (unsigned long)&module_alloc_base +
+				    sizeof(module_alloc_base));
 
 	/*
 	 * Try to map the FDT early. If this fails, we simply bail,
@@ -170,8 +172,12 @@ u64 __init kaslr_early_init(void)
 	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
 	module_alloc_base &= PAGE_MASK;
 
-	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
-	__flush_dcache_area(&memstart_offset_seed, sizeof(memstart_offset_seed));
+	__flush_dcache_area((unsigned long)&module_alloc_base,
+			    (unsigned long)&module_alloc_base +
+				    sizeof(module_alloc_base));
+	__flush_dcache_area((unsigned long)&memstart_offset_seed,
+			    (unsigned long)&memstart_offset_seed +
+				    sizeof(memstart_offset_seed));
 
 	return offset;
 }
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 001ffbfc645b..4cada9000acf 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -69,7 +69,9 @@ int machine_kexec_post_load(struct kimage *kimage)
 	kexec_image_info(kimage);
 
 	/* Flush the reloc_code in preparation for its execution. */
-	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
+	__flush_dcache_area((unsigned long)reloc_code,
+			    (unsigned long)reloc_code +
+				    arm64_relocate_new_kernel_size);
 	invalidate_icache_range((uintptr_t)reloc_code,
 				(uintptr_t)reloc_code +
 					arm64_relocate_new_kernel_size);
@@ -103,16 +105,18 @@ static void kexec_list_flush(struct kimage *kimage)
 
 	for (entry = &kimage->head; ; entry++) {
 		unsigned int flag;
-		void *addr;
+		unsigned long addr;
 
 		/* flush the list entries. */
-		__flush_dcache_area(entry, sizeof(kimage_entry_t));
+		__flush_dcache_area((unsigned long)entry,
+				    (unsigned long)entry +
+					    sizeof(kimage_entry_t));
 
 		flag = *entry & IND_FLAGS;
 		if (flag == IND_DONE)
 			break;
 
-		addr = phys_to_virt(*entry & PAGE_MASK);
+		addr = (unsigned long)phys_to_virt(*entry & PAGE_MASK);
 
 		switch (flag) {
 		case IND_INDIRECTION:
@@ -121,7 +125,7 @@ static void kexec_list_flush(struct kimage *kimage)
 			break;
 		case IND_SOURCE:
 			/* flush the source pages. */
-			__flush_dcache_area(addr, PAGE_SIZE);
+			__flush_dcache_area(addr, addr + PAGE_SIZE);
 			break;
 		case IND_DESTINATION:
 			break;
@@ -148,8 +152,10 @@ static void kexec_segment_flush(const struct kimage *kimage)
 			kimage->segment[i].memsz,
 			kimage->segment[i].memsz /  PAGE_SIZE);
 
-		__flush_dcache_area(phys_to_virt(kimage->segment[i].mem),
-			kimage->segment[i].memsz);
+		__flush_dcache_area(
+			(unsigned long)phys_to_virt(kimage->segment[i].mem),
+			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
+				kimage->segment[i].memsz);
 	}
 }
 
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index dcd7041b2b07..5fcdee331087 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -122,7 +122,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
 	update_cpu_boot_status(CPU_MMU_OFF);
-	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
+	__flush_dcache_area((unsigned long)&secondary_data,
+			    (unsigned long)&secondary_data +
+				    sizeof(secondary_data));
 
 	/* Now bring the CPU into our world */
 	ret = boot_secondary(cpu, idle);
@@ -143,7 +145,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	pr_crit("CPU%u: failed to come online\n", cpu);
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
-	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
+	__flush_dcache_area((unsigned long)&secondary_data,
+			    (unsigned long)&secondary_data +
+				    sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
 	if (status == CPU_MMU_OFF)
 		status = READ_ONCE(__early_cpu_boot_status);
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index c45a83512805..58d804582a35 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
 	unsigned long size = sizeof(secondary_holding_pen_release);
 
 	secondary_holding_pen_release = val;
-	__flush_dcache_area(start, size);
+	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
 }
 
 
@@ -90,8 +90,9 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * the boot protocol.
 	 */
 	writeq_relaxed(pa_holding_pen, release_addr);
-	__flush_dcache_area((__force void *)release_addr,
-			    sizeof(*release_addr));
+	__flush_dcache_area((__force unsigned long)release_addr,
+			    (__force unsigned long)release_addr +
+				    sizeof(*release_addr));
 
 	/*
 	 * Send an event to wake up the secondary CPU.
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 3bcfa3cac46f..36cef6915428 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -8,7 +8,6 @@
 #include <asm/alternative.h>
 
 SYM_FUNC_START_PI(__flush_dcache_area)
-	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 7488f53b0aa2..5dffe928f256 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -134,7 +134,8 @@ static void update_nvhe_init_params(void)
 	for (i = 0; i < hyp_nr_cpus; i++) {
 		params = per_cpu_ptr(&kvm_init_params, i);
 		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
-		__flush_dcache_area(params, sizeof(*params));
+		__flush_dcache_area((unsigned long)params,
+				    (unsigned long)params + sizeof(*params));
 	}
 }
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index c37c1dc4feaf..10d2f04013d4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -839,8 +839,11 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	stage2_put_pte(ptep, mmu, addr, level, mm_ops);
 
 	if (need_flush) {
-		__flush_dcache_area(kvm_pte_follow(pte, mm_ops),
-				    kvm_granule_size(level));
+		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
+
+		__flush_dcache_area((unsigned long)pte_follow,
+				    (unsigned long)pte_follow +
+					    kvm_granule_size(level));
 	}
 
 	if (childp)
@@ -988,11 +991,15 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	struct kvm_pgtable *pgt = arg;
 	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
 	kvm_pte_t pte = *ptep;
+	kvm_pte_t *pte_follow;
 
 	if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte))
 		return 0;
 
-	__flush_dcache_area(kvm_pte_follow(pte, mm_ops), kvm_granule_size(level));
+	pte_follow = kvm_pte_follow(pte, mm_ops);
+	__flush_dcache_area((unsigned long)pte_follow,
+			    (unsigned long)pte_follow +
+				    kvm_granule_size(level));
 	return 0;
 }
 
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index c801ebaf418f..72a80d19e2d1 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -114,16 +114,15 @@ alternative_else_nop_endif
 SYM_FUNC_END(invalidate_icache_range)
 
 /*
- *	__flush_dcache_area(kaddr, size)
+ *	__flush_dcache_area(start, end)
  *
- *	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ *	Ensure that any D-cache lines for the interval [start, end)
  *	are cleaned and invalidated to the PoC.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_PI(__flush_dcache_area)
-	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 08/13] arm64: __clean_dcache_area_poc to take end parameter instead of size
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (6 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 07/13] arm64: __flush_dcache_area " Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 09/13] arm64: __clean_dcache_area_pop " Fuad Tabba
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

Because the code is shared with __dma_clean_area, this changes the
parameters for that function as well. However, __dma_clean_area is
local to cache.S, so no other callers are affected.
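
With both __dma_inv_area and __dma_clean_area taking [start, end), the
DMA entry points can convert once and branch to either routine.
Roughly, in C (the real code is the assembly below):

	void __dma_map_area(unsigned long start, size_t size, int dir)
	{
		unsigned long end = start + size;

		if (dir == DMA_FROM_DEVICE)
			__dma_inv_area(start, end);	/* device writes memory */
		else
			__dma_clean_area(start, end);	/* device reads memory */
	}

This is also why the interim "sub x1, x1, x0" added in the
__inval_dcache_area patch can now be dropped.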

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/efi-entry.S       |  5 +++--
 arch/arm64/mm/cache.S               | 16 +++++++---------
 3 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 695f88864784..3255878d6f30 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -60,7 +60,7 @@ extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_poc(void *addr, size_t len);
+extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 0073b24b5d25..72e6a580290a 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -28,6 +28,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 * stale icache entries from before relocation.
 	 */
 	ldr	w1, =kernel_size
+	add	x1, x0, x1
 	bl	__clean_dcache_area_poc
 	ic	ialluis
 
@@ -36,7 +37,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 * so that we can safely disable the MMU and caches.
 	 */
 	adr	x0, 0f
-	ldr	w1, 3f
+	adr	x1, 3f
 	bl	__clean_dcache_area_poc
 0:
 	/* Turn off Dcache and MMU */
@@ -65,4 +66,4 @@ SYM_CODE_START(efi_enter_kernel)
 	mov	x3, xzr
 	br	x19
 SYM_CODE_END(efi_enter_kernel)
-3:	.long	. - 0b
+3:
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 72a80d19e2d1..7ddf6ff65b15 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -186,24 +186,23 @@ SYM_FUNC_END_PI(__inval_dcache_area)
 SYM_FUNC_END(__dma_inv_area)
 
 /*
- *	__clean_dcache_area_poc(kaddr, size)
+ *	__clean_dcache_area_poc(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoC.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_clean_area)
 SYM_FUNC_START_PI(__clean_dcache_area_poc)
 	/* FALLTHROUGH */
 
 /*
- *	__dma_clean_area(start, size)
+ *	__dma_clean_area(start, end)
  *	- start   - virtual start address of region
- *	- size    - size in question
+ *	- end     - virtual end address of region
  */
-	add	x1, x0, x1
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_poc)
@@ -219,10 +218,10 @@ SYM_FUNC_END(__dma_clean_area)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__clean_dcache_area_pop)
+	add	x1, x0, x1
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
-	add	x1, x0, x1
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_pop)
@@ -251,7 +250,6 @@ SYM_FUNC_START_PI(__dma_map_area)
 	add	x1, x0, x1
 	cmp	w2, #DMA_FROM_DEVICE
 	b.eq	__dma_inv_area
-	sub	x1, x1, x0
 	b	__dma_clean_area
 SYM_FUNC_END_PI(__dma_map_area)
 
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 09/13] arm64: __clean_dcache_area_pop to take end parameter instead of size
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (7 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 08/13] arm64: __clean_dcache_area_poc " Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 10/13] arm64: __clean_dcache_area_pou " Fuad Tabba
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 2 +-
 arch/arm64/lib/uaccess_flushcache.c | 4 ++--
 arch/arm64/mm/cache.S               | 9 ++++-----
 arch/arm64/mm/flush.c               | 2 +-
 4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 3255878d6f30..fa5641868d65 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -61,7 +61,7 @@ extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pop(void *addr, size_t len);
+extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(void *kaddr, unsigned long len);
diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
index c83bb5a4aad2..62ea989effe8 100644
--- a/arch/arm64/lib/uaccess_flushcache.c
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
 	 * barrier to order the cache maintenance against the memcpy.
 	 */
 	memcpy(dst, src, cnt);
-	__clean_dcache_area_pop(dst, cnt);
+	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
 }
 EXPORT_SYMBOL_GPL(memcpy_flushcache);
 
@@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
 	rc = raw_copy_from_user(to, from, n);
 
 	/* See above */
-	__clean_dcache_area_pop(to, n - rc);
+	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
 	return rc;
 }
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 7ddf6ff65b15..f35f28845691 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -209,16 +209,15 @@ SYM_FUNC_END_PI(__clean_dcache_area_poc)
 SYM_FUNC_END(__dma_clean_area)
 
 /*
- *	__clean_dcache_area_pop(kaddr, size)
+ *	__clean_dcache_area_pop(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoP.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_PI(__clean_dcache_area_pop)
-	add	x1, x0, x1
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 4e3505c2bea6..5aba7fe42d4b 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -82,7 +82,7 @@ void arch_wb_cache_pmem(void *addr, size_t size)
 {
 	/* Ensure order against any prior non-cacheable writes */
 	dmb(osh);
-	__clean_dcache_area_pop(addr, size);
+	__clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 10/13] arm64: __clean_dcache_area_pou to take end parameter instead of size
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (8 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 09/13] arm64: __clean_dcache_area_pop " Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 11/13] arm64: sync_icache_aliases " Fuad Tabba
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 2 +-
 arch/arm64/mm/cache.S               | 9 ++++-----
 arch/arm64/mm/flush.c               | 2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index fa5641868d65..f86723047315 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -62,7 +62,7 @@ extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pou(void *addr, size_t len);
+extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(void *kaddr, unsigned long len);
 
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index f35f28845691..d8434e57fab3 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -128,20 +128,19 @@ SYM_FUNC_START_PI(__flush_dcache_area)
 SYM_FUNC_END_PI(__flush_dcache_area)
 
 /*
- *	__clean_dcache_area_pou(kaddr, size)
+ *	__clean_dcache_area_pou(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoU.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START(__clean_dcache_area_pou)
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
-	add	x1, x0, x1
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
 SYM_FUNC_END(__clean_dcache_area_pou)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 5aba7fe42d4b..a69d745fb1dc 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -19,7 +19,7 @@ void sync_icache_aliases(void *kaddr, unsigned long len)
 	unsigned long addr = (unsigned long)kaddr;
 
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(kaddr, len);
+		__clean_dcache_area_pou(kaddr, kaddr + len);
 		__flush_icache_all();
 	} else {
 		/*
-- 
2.31.1.607.g51e8a6a459-goog


* [PATCH v1 11/13] arm64: sync_icache_aliases to take end parameter instead of size
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (9 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 10/13] arm64: __clean_dcache_area_pou " Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 12/13] arm64: Fix cache maintenance function comments Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
sync_icache_aliases to specify the range in terms of start and end,
as opposed to start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/probes/uprobes.c  |  2 +-
 arch/arm64/mm/flush.c               | 21 +++++++++++----------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index f86723047315..70b389a8dea5 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -64,7 +64,7 @@ extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
-extern void sync_icache_aliases(void *kaddr, unsigned long len);
+extern void sync_icache_aliases(unsigned long start, unsigned long end);
 
 static inline void flush_icache_range(unsigned long start, unsigned long end)
 {
diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
index 2c247634552b..9be668f3f034 100644
--- a/arch/arm64/kernel/probes/uprobes.c
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -21,7 +21,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
 	memcpy(dst, src, len);
 
 	/* flush caches (dcache/icache) */
-	sync_icache_aliases(dst, len);
+	sync_icache_aliases((unsigned long)dst, (unsigned long)dst + len);
 
 	kunmap_atomic(xol_page_kaddr);
 }
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index a69d745fb1dc..143f625e7727 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -14,28 +14,26 @@
 #include <asm/cache.h>
 #include <asm/tlbflush.h>
 
-void sync_icache_aliases(void *kaddr, unsigned long len)
+void sync_icache_aliases(unsigned long start, unsigned long end)
 {
-	unsigned long addr = (unsigned long)kaddr;
-
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(kaddr, kaddr + len);
+		__clean_dcache_area_pou(start, end);
 		__flush_icache_all();
 	} else {
 		/*
 		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
 		 * for user mappings.
 		 */
-		__flush_icache_range(addr, addr + len);
+		__flush_icache_range(start, end);
 	}
 }
 
 static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
-				unsigned long uaddr, void *kaddr,
-				unsigned long len)
+				unsigned long uaddr, unsigned long start,
+				unsigned long end)
 {
 	if (vma->vm_flags & VM_EXEC)
-		sync_icache_aliases(kaddr, len);
+		sync_icache_aliases(start, end);
 }
 
 /*
@@ -48,7 +46,8 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long len)
 {
 	memcpy(dst, src, len);
-	flush_ptrace_access(vma, page, uaddr, dst, len);
+	flush_ptrace_access(vma, page, uaddr, (unsigned long)dst,
+			    (unsigned long)dst + len);
 }
 
 void __sync_icache_dcache(pte_t pte)
@@ -56,7 +55,9 @@ void __sync_icache_dcache(pte_t pte)
 	struct page *page = pte_page(pte);
 
 	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		sync_icache_aliases(page_address(page), page_size(page));
+		sync_icache_aliases((unsigned long)page_address(page),
+				    (unsigned long)page_address(page) +
+					    page_size(page));
 }
 EXPORT_SYMBOL_GPL(__sync_icache_dcache);
 
-- 
2.31.1.607.g51e8a6a459-goog



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v1 12/13] arm64: Fix cache maintenance function comments
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (10 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 11/13] arm64: sync_icache_aliases " Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 14:42 ` [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
  12 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

Fix and expand the comments for the cache maintenance functions in
cacheflush.h. Add comments to functions that weren't described
before, and explain what the functions do using Arm Architecture
Reference Manual terminology.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 43 +++++++++++++++++++----------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 70b389a8dea5..4b91d3530013 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -30,31 +30,44 @@
  *	the implementation assumes non-aliasing VIPT D-cache and (aliasing)
  *	VIPT I-cache.
  *
- *	flush_icache_range(start, end)
- *
- *		Ensure coherency between the I-cache and the D-cache in the
- *		region described by start, end.
+ *	All functions below apply to the region described by [start, end)
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	invalidate_icache_range(start, end)
+ *	__flush_icache_range(start, end)
  *
- *		Invalidate the I-cache in the region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Ensure coherency between the I-cache and the D-cache region to
+ *		the Point of Unification.
  *
  *	__flush_cache_user_range(start, end)
  *
- *		Ensure coherency between the I-cache and the D-cache in the
- *		region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Ensure coherency between the I-cache and the D-cache region to
+ *		the Point of Unification.
+ *		Use only if the region might access user memory.
+ *
+ *	invalidate_icache_range(start, end)
+ *
+ *		Invalidate I-cache region to the Point of Unification.
  *
  *	__flush_dcache_area(start, end)
  *
- *		Ensure that the data held in page is written back.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Clean and invalidate D-cache region to the Point of Coherence.
+ *
+ *	__inval_dcache_area(start, end)
+ *
+ *		Invalidate D-cache region to the Point of Coherence.
+ *
+ *	__clean_dcache_area_poc(start, end)
+ *
+ *		Clean D-cache region to the Point of Coherence.
+ *
+ *	__clean_dcache_area_pop(start, end)
+ *
+ *		Clean D-cache region to the Point of Persistence.
+ *
+ *	__clean_dcache_area_pou(start, end)
+ *
+ *		Clean D-cache region to the Point of Unification.
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
-- 
2.31.1.607.g51e8a6a459-goog



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions
  2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
                   ` (11 preceding siblings ...)
  2021-05-11 14:42 ` [PATCH v1 12/13] arm64: Fix cache maintenance function comments Fuad Tabba
@ 2021-05-11 14:42 ` Fuad Tabba
  2021-05-11 15:09   ` Ard Biesheuvel
  12 siblings, 1 reply; 32+ messages in thread
From: Fuad Tabba @ 2021-05-11 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, tabba

Although naming across the codebase isn't that consistent, it
tends to follow certain patterns. Moreover, the term "flush"
isn't defined in the Arm Architecture Reference Manual, and might
be interpreted to mean clean, invalidate, or both for a cache.

Rename arm64-internal functions to make the naming internally
consistent, as well as making it consistent with the Arm ARM, by
clarifying whether the operation is a clean, invalidate, or both.
Also specify the point to which the operation applies, i.e., the
point of unification (PoU), coherence (PoC), or persistence
(PoP).

This commit applies the following sed transformation to all files
under arch/arm64:

"s/\b__flush_cache_range\b/__clean_inval_cache_pou_macro/g;"\
"s/\b__flush_icache_range\b/__clean_inval_cache_pou/g;"\
"s/\binvalidate_icache_range\b/__inval_icache_pou/g;"\
"s/\b__flush_dcache_area\b/__clean_inval_dcache_poc/g;"\
"s/\b__inval_dcache_area\b/__inval_dcache_poc/g;"\
"s/__clean_dcache_area_poc\b/__clean_dcache_poc/g;"\
"s/\b__clean_dcache_area_pop\b/__clean_dcache_pop/g;"\
"s/\b__clean_dcache_area_pou\b/__clean_dcache_pou/g;"\
"s/\b__flush_cache_user_range\b/__clean_inval_cache_user_pou/g;"\
"s/\b__flush_icache_all\b/__clean_inval_all_icache_pou/g;"

Note that the __clean_dcache_area_poc expression is deliberately
missing a leading word boundary check, so that the prefixed efistub
symbols in image-vars.h are matched and renamed as well.
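
As a sketch of the effect on a typical call site (illustration only,
the full conversion is in the diff below):

	/* before the rename: "flush" is ambiguous */
	__flush_dcache_area(start, end);

	/* after the rename: clean+invalidate to the PoC is explicit */
	__clean_inval_dcache_poc(start, end);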

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/arch_gicv3.h |  2 +-
 arch/arm64/include/asm/cacheflush.h | 36 +++++++++----------
 arch/arm64/include/asm/efi.h        |  2 +-
 arch/arm64/include/asm/kvm_mmu.h    |  6 ++--
 arch/arm64/kernel/alternative.c     |  2 +-
 arch/arm64/kernel/efi-entry.S       |  4 +--
 arch/arm64/kernel/head.S            |  8 ++---
 arch/arm64/kernel/hibernate.c       | 12 +++----
 arch/arm64/kernel/idreg-override.c  |  2 +-
 arch/arm64/kernel/image-vars.h      |  2 +-
 arch/arm64/kernel/insn.c            |  2 +-
 arch/arm64/kernel/kaslr.c           |  6 ++--
 arch/arm64/kernel/machine_kexec.c   | 10 +++---
 arch/arm64/kernel/smp.c             |  4 +--
 arch/arm64/kernel/smp_spin_table.c  |  4 +--
 arch/arm64/kernel/sys_compat.c      |  2 +-
 arch/arm64/kvm/arm.c                |  2 +-
 arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +--
 arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
 arch/arm64/kvm/hyp/pgtable.c        |  4 +--
 arch/arm64/lib/uaccess_flushcache.c |  4 +--
 arch/arm64/mm/cache.S               | 56 ++++++++++++++---------------
 arch/arm64/mm/flush.c               | 12 +++----
 24 files changed, 95 insertions(+), 95 deletions(-)

diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index ed1cc9d8e6df..4b7ac9098e8f 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
 #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
 
 #define gic_flush_dcache_to_poc(a,l)	\
-	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
+	__clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
 
 #define gits_read_baser(c)		readq_relaxed(c)
 #define gits_write_baser(v, c)		writeq_relaxed(v, c)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 4b91d3530013..526eee4522eb 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -34,54 +34,54 @@
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	__flush_icache_range(start, end)
+ *	__clean_inval_cache_pou(start, end)
  *
  *		Ensure coherency between the I-cache and the D-cache region to
  *		the Point of Unification.
  *
- *	__flush_cache_user_range(start, end)
+ *	__clean_inval_cache_user_pou(start, end)
  *
  *		Ensure coherency between the I-cache and the D-cache region to
  *		the Point of Unification.
  *		Use only if the region might access user memory.
  *
- *	invalidate_icache_range(start, end)
+ *	__inval_icache_pou(start, end)
  *
  *		Invalidate I-cache region to the Point of Unification.
  *
- *	__flush_dcache_area(start, end)
+ *	__clean_inval_dcache_poc(start, end)
  *
  *		Clean and invalidate D-cache region to the Point of Coherence.
  *
- *	__inval_dcache_area(start, end)
+ *	__inval_dcache_poc(start, end)
  *
  *		Invalidate D-cache region to the Point of Coherence.
  *
- *	__clean_dcache_area_poc(start, end)
+ *	__clean_dcache_poc(start, end)
  *
  *		Clean D-cache region to the Point of Coherence.
  *
- *	__clean_dcache_area_pop(start, end)
+ *	__clean_dcache_pop(start, end)
  *
  *		Clean D-cache region to the Point of Persistence.
  *
- *	__clean_dcache_area_pou(start, end)
+ *	__clean_dcache_pou(start, end)
  *
  *		Clean D-cache region to the Point of Unification.
  */
-extern void __flush_icache_range(unsigned long start, unsigned long end);
-extern void invalidate_icache_range(unsigned long start, unsigned long end);
-extern void __flush_dcache_area(unsigned long start, unsigned long end);
-extern void __inval_dcache_area(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
-extern long __flush_cache_user_range(unsigned long start, unsigned long end);
+extern void __clean_inval_cache_pou(unsigned long start, unsigned long end);
+extern void __inval_icache_pou(unsigned long start, unsigned long end);
+extern void __clean_inval_dcache_poc(unsigned long start, unsigned long end);
+extern void __inval_dcache_poc(unsigned long start, unsigned long end);
+extern void __clean_dcache_poc(unsigned long start, unsigned long end);
+extern void __clean_dcache_pop(unsigned long start, unsigned long end);
+extern void __clean_dcache_pou(unsigned long start, unsigned long end);
+extern long __clean_inval_cache_user_pou(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(unsigned long start, unsigned long end);
 
 static inline void flush_icache_range(unsigned long start, unsigned long end)
 {
-	__flush_icache_range(start, end);
+	__clean_inval_cache_pou(start, end);
 
 	/*
 	 * IPI all online CPUs so that they undergo a context synchronization
@@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
 
-static __always_inline void __flush_icache_all(void)
+static __always_inline void __clean_inval_all_icache_pou(void)
 {
 	if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
 		return;
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 0ae2397076fd..d1e2a4bf8def 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
 
 static inline void efi_capsule_flush_cache_range(void *addr, int size)
 {
-	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
+	__clean_inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
 }
 
 #endif /* _ASM_EFI_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 33293d5855af..29d2aa6f3940 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)	\
-	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
+	__clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
@@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
 {
 	if (icache_is_aliasing()) {
 		/* any kind of VIPT cache */
-		__flush_icache_all();
+		__clean_inval_all_icache_pou();
 	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
 		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
 		void *va = page_address(pfn_to_page(pfn));
 
-		invalidate_icache_range((unsigned long)va,
+		__inval_icache_pou((unsigned long)va,
 					(unsigned long)va + size);
 	}
 }
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index c906d20c7b52..ea2d52fa9a0c 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
 	 */
 	if (!is_module) {
 		dsb(ish);
-		__flush_icache_all();
+		__clean_inval_all_icache_pou();
 		isb();
 
 		/* Ignore ARM64_CB bit from feature mask */
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 72e6a580290a..230506f460ec 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 */
 	ldr	w1, =kernel_size
 	add	x1, x0, x1
-	bl	__clean_dcache_area_poc
+	bl	__clean_dcache_poc
 	ic	ialluis
 
 	/*
@@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 */
 	adr	x0, 0f
 	adr	x1, 3f
-	bl	__clean_dcache_area_poc
+	bl	__clean_dcache_poc
 0:
 	/* Turn off Dcache and MMU */
 	mrs	x0, CurrentEL
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 8df0ac8d9123..ea0447c5010a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 						// MMU off
 
 	add	x1, x0, #0x20			// 4 x 8 bytes
-	b	__inval_dcache_area		// tail call
+	b	__inval_dcache_poc		// tail call
 SYM_CODE_END(preserve_boot_args)
 
 /*
@@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 */
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	bl	__inval_dcache_area
+	bl	__inval_dcache_poc
 
 	/*
 	 * Clear the init page tables.
@@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 
 	adrp	x0, idmap_pg_dir
 	adrp	x1, idmap_pg_end
-	bl	__inval_dcache_area
+	bl	__inval_dcache_poc
 
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	bl	__inval_dcache_area
+	bl	__inval_dcache_poc
 
 	ret	x28
 SYM_FUNC_END(__create_page_tables)
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index b40ddce71507..ec871b24fd5b 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 		return -ENOMEM;
 
 	memcpy(page, src_start, length);
-	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
+	__clean_inval_cache_pou((unsigned long)page, (unsigned long)page + length);
 	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
 	if (rc)
 		return rc;
@@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
 		ret = swsusp_save();
 	} else {
 		/* Clean kernel core startup/idle code to PoC*/
-		__flush_dcache_area((unsigned long)__mmuoff_data_start,
+		__clean_inval_dcache_poc((unsigned long)__mmuoff_data_start,
 				    (unsigned long)__mmuoff_data_end);
-		__flush_dcache_area((unsigned long)__idmap_text_start,
+		__clean_inval_dcache_poc((unsigned long)__idmap_text_start,
 				    (unsigned long)__idmap_text_end);
 
 		/* Clean kvm setup code to PoC? */
 		if (el2_reset_needed()) {
-			__flush_dcache_area(
+			__clean_inval_dcache_poc(
 				(unsigned long)__hyp_idmap_text_start,
 				(unsigned long)__hyp_idmap_text_end);
-			__flush_dcache_area((unsigned long)__hyp_text_start,
+			__clean_inval_dcache_poc((unsigned long)__hyp_text_start,
 					    (unsigned long)__hyp_text_end);
 		}
 
@@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
 	 * The hibernate exit text contains a set of el2 vectors, that will
 	 * be executed at el2 with the mmu off in order to reload hyp-stub.
 	 */
-	__flush_dcache_area((unsigned long)hibernate_exit,
+	__clean_inval_dcache_poc((unsigned long)hibernate_exit,
 			    (unsigned long)hibernate_exit + exit_size);
 
 	/*
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 3dd515baf526..6b4b5727f2db 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		if (regs[i]->override)
-			__flush_dcache_area((unsigned long)regs[i]->override,
+			__clean_inval_dcache_poc((unsigned long)regs[i]->override,
 					    (unsigned long)regs[i]->override +
 					    sizeof(*regs[i]->override));
 	}
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index bcf3c2755370..14beda6a573d 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -35,7 +35,7 @@ __efistub_strnlen		= __pi_strnlen;
 __efistub_strcmp		= __pi_strcmp;
 __efistub_strncmp		= __pi_strncmp;
 __efistub_strrchr		= __pi_strrchr;
-__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
+__efistub___clean_dcache_poc = __pi___clean_dcache_poc;
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 __efistub___memcpy		= __pi_memcpy;
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 6c0de2f60ea9..11c7be09e305 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
 
 	ret = aarch64_insn_write(tp, insn);
 	if (ret == 0)
-		__flush_icache_range((uintptr_t)tp,
+		__clean_inval_cache_pou((uintptr_t)tp,
 				     (uintptr_t)tp + AARCH64_INSN_SIZE);
 
 	return ret;
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 49cccd03cb37..038a4cc7de93 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
 	 * we end up running with module randomization disabled.
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
-	__flush_dcache_area((unsigned long)&module_alloc_base,
+	__clean_inval_dcache_poc((unsigned long)&module_alloc_base,
 			    (unsigned long)&module_alloc_base +
 				    sizeof(module_alloc_base));
 
@@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
 	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
 	module_alloc_base &= PAGE_MASK;
 
-	__flush_dcache_area((unsigned long)&module_alloc_base,
+	__clean_inval_dcache_poc((unsigned long)&module_alloc_base,
 			    (unsigned long)&module_alloc_base +
 				    sizeof(module_alloc_base));
-	__flush_dcache_area((unsigned long)&memstart_offset_seed,
+	__clean_inval_dcache_poc((unsigned long)&memstart_offset_seed,
 			    (unsigned long)&memstart_offset_seed +
 				    sizeof(memstart_offset_seed));
 
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 4cada9000acf..0e20a789b03e 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -69,10 +69,10 @@ int machine_kexec_post_load(struct kimage *kimage)
 	kexec_image_info(kimage);
 
 	/* Flush the reloc_code in preparation for its execution. */
-	__flush_dcache_area((unsigned long)reloc_code,
+	__clean_inval_dcache_poc((unsigned long)reloc_code,
 			    (unsigned long)reloc_code +
 				    arm64_relocate_new_kernel_size);
-	invalidate_icache_range((uintptr_t)reloc_code,
+	__inval_icache_pou((uintptr_t)reloc_code,
 				(uintptr_t)reloc_code +
 					arm64_relocate_new_kernel_size);
 
@@ -108,7 +108,7 @@ static void kexec_list_flush(struct kimage *kimage)
 		unsigned long addr;
 
 		/* flush the list entries. */
-		__flush_dcache_area((unsigned long)entry,
+		__clean_inval_dcache_poc((unsigned long)entry,
 				    (unsigned long)entry +
 					    sizeof(kimage_entry_t));
 
@@ -125,7 +125,7 @@ static void kexec_list_flush(struct kimage *kimage)
 			break;
 		case IND_SOURCE:
 			/* flush the source pages. */
-			__flush_dcache_area(addr, addr + PAGE_SIZE);
+			__clean_inval_dcache_poc(addr, addr + PAGE_SIZE);
 			break;
 		case IND_DESTINATION:
 			break;
@@ -152,7 +152,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
 			kimage->segment[i].memsz,
 			kimage->segment[i].memsz /  PAGE_SIZE);
 
-		__flush_dcache_area(
+		__clean_inval_dcache_poc(
 			(unsigned long)phys_to_virt(kimage->segment[i].mem),
 			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
 				kimage->segment[i].memsz);
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5fcdee331087..2044210ed15a 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
 	update_cpu_boot_status(CPU_MMU_OFF);
-	__flush_dcache_area((unsigned long)&secondary_data,
+	__clean_inval_dcache_poc((unsigned long)&secondary_data,
 			    (unsigned long)&secondary_data +
 				    sizeof(secondary_data));
 
@@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	pr_crit("CPU%u: failed to come online\n", cpu);
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
-	__flush_dcache_area((unsigned long)&secondary_data,
+	__clean_inval_dcache_poc((unsigned long)&secondary_data,
 			    (unsigned long)&secondary_data +
 				    sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 58d804582a35..a946ccb9791e 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
 	unsigned long size = sizeof(secondary_holding_pen_release);
 
 	secondary_holding_pen_release = val;
-	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
+	__clean_inval_dcache_poc((unsigned long)start, (unsigned long)start + size);
 }
 
 
@@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * the boot protocol.
 	 */
 	writeq_relaxed(pa_holding_pen, release_addr);
-	__flush_dcache_area((__force unsigned long)release_addr,
+	__clean_inval_dcache_poc((__force unsigned long)release_addr,
 			    (__force unsigned long)release_addr +
 				    sizeof(*release_addr));
 
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index 265fe3eb1069..fdd415f8d841 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
 			dsb(ish);
 		}
 
-		ret = __flush_cache_user_range(start, start + chunk);
+		ret = __clean_inval_cache_user_pou(start, start + chunk);
 		if (ret)
 			return ret;
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1cb39c0803a4..edeca89405ff 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 		if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
 			stage2_unmap_vm(vcpu->kvm);
 		else
-			__flush_icache_all();
+			__clean_inval_all_icache_pou();
 	}
 
 	vcpu_reset_hcr(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 36cef6915428..a906dd596e66 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -7,7 +7,7 @@
 #include <asm/assembler.h>
 #include <asm/alternative.h>
 
-SYM_FUNC_START_PI(__flush_dcache_area)
+SYM_FUNC_START_PI(__clean_inval_dcache_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__flush_dcache_area)
+SYM_FUNC_END_PI(__clean_inval_dcache_poc)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 5dffe928f256..a16719f5068d 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
 	for (i = 0; i < hyp_nr_cpus; i++) {
 		params = per_cpu_ptr(&kvm_init_params, i);
 		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
-		__flush_dcache_area((unsigned long)params,
+		__clean_inval_dcache_poc((unsigned long)params,
 				    (unsigned long)params + sizeof(*params));
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 83dc3b271bc5..184c9c7c13bd 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	 * you should be running with VHE enabled.
 	 */
 	if (icache_is_vpipt())
-		__flush_icache_all();
+		__clean_inval_all_icache_pou();
 
 	__tlb_switch_to_host(&cxt);
 }
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 10d2f04013d4..fb2613f458de 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (need_flush) {
 		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
 
-		__flush_dcache_area((unsigned long)pte_follow,
+		__clean_inval_dcache_poc((unsigned long)pte_follow,
 				    (unsigned long)pte_follow +
 					    kvm_granule_size(level));
 	}
@@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		return 0;
 
 	pte_follow = kvm_pte_follow(pte, mm_ops);
-	__flush_dcache_area((unsigned long)pte_follow,
+	__clean_inval_dcache_poc((unsigned long)pte_follow,
 			    (unsigned long)pte_follow +
 				    kvm_granule_size(level));
 	return 0;
diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
index 62ea989effe8..b1a6d9823864 100644
--- a/arch/arm64/lib/uaccess_flushcache.c
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
 	 * barrier to order the cache maintenance against the memcpy.
 	 */
 	memcpy(dst, src, cnt);
-	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
+	__clean_dcache_pop((unsigned long)dst, (unsigned long)dst + cnt);
 }
 EXPORT_SYMBOL_GPL(memcpy_flushcache);
 
@@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
 	rc = raw_copy_from_user(to, from, n);
 
 	/* See above */
-	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
+	__clean_dcache_pop((unsigned long)to, (unsigned long)to + n - rc);
 	return rc;
 }
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index d8434e57fab3..2df7212de799 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -15,7 +15,7 @@
 #include <asm/asm-uaccess.h>
 
 /*
- *	__flush_cache_range(start,end) [needs_uaccess]
+ *	__clean_inval_cache_pou_macro(start,end) [needs_uaccess]
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
@@ -25,7 +25,7 @@
  *	- end     	- virtual end address of region
  *	- needs_uaccess - (macro parameter) might access user space memory
  */
-.macro	__flush_cache_range, needs_uaccess
+.macro	__clean_inval_cache_pou_macro, needs_uaccess
 	.if 	\needs_uaccess
 	uaccess_ttbr0_enable x2, x3, x4
 	.endif
@@ -77,12 +77,12 @@ alternative_else_nop_endif
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__flush_icache_range)
-	__flush_cache_range needs_uaccess=0
-SYM_FUNC_END(__flush_icache_range)
+SYM_FUNC_START(__clean_inval_cache_pou)
+	__clean_inval_cache_pou_macro needs_uaccess=0
+SYM_FUNC_END(__clean_inval_cache_pou)
 
 /*
- *	__flush_cache_user_range(start,end)
+ *	__clean_inval_cache_user_pou(start,end)
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
@@ -91,19 +91,19 @@ SYM_FUNC_END(__flush_icache_range)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__flush_cache_user_range)
-	__flush_cache_range needs_uaccess=1
-SYM_FUNC_END(__flush_cache_user_range)
+SYM_FUNC_START(__clean_inval_cache_user_pou)
+	__clean_inval_cache_pou_macro needs_uaccess=1
+SYM_FUNC_END(__clean_inval_cache_user_pou)
 
 /*
- *	invalidate_icache_range(start,end)
+ *	__inval_icache_pou(start,end)
  *
  *	Ensure that the I cache is invalid within specified region.
  *
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(invalidate_icache_range)
+SYM_FUNC_START(__inval_icache_pou)
 alternative_if ARM64_HAS_CACHE_DIC
 	isb
 	ret
@@ -111,10 +111,10 @@ alternative_else_nop_endif
 
 	invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
 	ret
-SYM_FUNC_END(invalidate_icache_range)
+SYM_FUNC_END(__inval_icache_pou)
 
 /*
- *	__flush_dcache_area(start, end)
+ *	__clean_inval_dcache_poc(start, end)
  *
  *	Ensure that any D-cache lines for the interval [start, end)
  *	are cleaned and invalidated to the PoC.
@@ -122,13 +122,13 @@ SYM_FUNC_END(invalidate_icache_range)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(__flush_dcache_area)
+SYM_FUNC_START_PI(__clean_inval_dcache_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__flush_dcache_area)
+SYM_FUNC_END_PI(__clean_inval_dcache_poc)
 
 /*
- *	__clean_dcache_area_pou(start, end)
+ *	__clean_dcache_pou(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoU.
@@ -136,17 +136,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__clean_dcache_area_pou)
+SYM_FUNC_START(__clean_dcache_pou)
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
-SYM_FUNC_END(__clean_dcache_area_pou)
+SYM_FUNC_END(__clean_dcache_pou)
 
 /*
- *	__inval_dcache_area(start, end)
+ *	__inval_dcache_poc(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are invalidated. Any partial lines at the ends of the interval are
@@ -156,7 +156,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
  *	- end     - kernel end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_inv_area)
-SYM_FUNC_START_PI(__inval_dcache_area)
+SYM_FUNC_START_PI(__inval_dcache_poc)
 	/* FALLTHROUGH */
 
 /*
@@ -181,11 +181,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
 	b.lo	2b
 	dsb	sy
 	ret
-SYM_FUNC_END_PI(__inval_dcache_area)
+SYM_FUNC_END_PI(__inval_dcache_poc)
 SYM_FUNC_END(__dma_inv_area)
 
 /*
- *	__clean_dcache_area_poc(start, end)
+ *	__clean_dcache_poc(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoC.
@@ -194,7 +194,7 @@ SYM_FUNC_END(__dma_inv_area)
  *	- end     - virtual end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_clean_area)
-SYM_FUNC_START_PI(__clean_dcache_area_poc)
+SYM_FUNC_START_PI(__clean_dcache_poc)
 	/* FALLTHROUGH */
 
 /*
@@ -204,11 +204,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
  */
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__clean_dcache_area_poc)
+SYM_FUNC_END_PI(__clean_dcache_poc)
 SYM_FUNC_END(__dma_clean_area)
 
 /*
- *	__clean_dcache_area_pop(start, end)
+ *	__clean_dcache_pop(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoP.
@@ -216,13 +216,13 @@ SYM_FUNC_END(__dma_clean_area)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(__clean_dcache_area_pop)
+SYM_FUNC_START_PI(__clean_dcache_pop)
 	alternative_if_not ARM64_HAS_DCPOP
-	b	__clean_dcache_area_poc
+	b	__clean_dcache_poc
 	alternative_else_nop_endif
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__clean_dcache_area_pop)
+SYM_FUNC_END_PI(__clean_dcache_pop)
 
 /*
  *	__dma_flush_area(start, size)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 143f625e7727..005b92148252 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -17,14 +17,14 @@
 void sync_icache_aliases(unsigned long start, unsigned long end)
 {
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(start, end);
-		__flush_icache_all();
+		__clean_dcache_pou(start, end);
+		__clean_inval_all_icache_pou();
 	} else {
 		/*
 		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
 		 * for user mappings.
 		 */
-		__flush_icache_range(start, end);
+		__clean_inval_cache_pou(start, end);
 	}
 }
 
@@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
 /*
  * Additional functions defined in assembly.
  */
-EXPORT_SYMBOL(__flush_icache_range);
+EXPORT_SYMBOL(__clean_inval_cache_pou);
 
 #ifdef CONFIG_ARCH_HAS_PMEM_API
 void arch_wb_cache_pmem(void *addr, size_t size)
 {
 	/* Ensure order against any prior non-cacheable writes */
 	dmb(osh);
-	__clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
+	__clean_dcache_pop((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
 void arch_invalidate_pmem(void *addr, size_t size)
 {
-	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
+	__inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
-- 
2.31.1.607.g51e8a6a459-goog



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v1 03/13] arm64: Downgrade flush_icache_range to invalidate
  2021-05-11 14:42 ` [PATCH v1 03/13] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
@ 2021-05-11 14:53   ` Ard Biesheuvel
  2021-05-12  9:45     ` Fuad Tabba
  0 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2021-05-11 14:53 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: Linux ARM, Will Deacon, Catalin Marinas, Mark Rutland,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose

On Tue, 11 May 2021 at 16:43, Fuad Tabba <tabba@google.com> wrote:
>
> Since __flush_dcache_area is called right before,
> invalidate_icache_range is sufficient in this case.
>
> No functional change intended.
>
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/kernel/machine_kexec.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> index 90a335c74442..001ffbfc645b 100644
> --- a/arch/arm64/kernel/machine_kexec.c
> +++ b/arch/arm64/kernel/machine_kexec.c
> @@ -70,8 +70,9 @@ int machine_kexec_post_load(struct kimage *kimage)
>
>         /* Flush the reloc_code in preparation for its execution. */
>         __flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
> -       flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
> -                          arm64_relocate_new_kernel_size);
> +       invalidate_icache_range((uintptr_t)reloc_code,
> +                               (uintptr_t)reloc_code +
> +                                       arm64_relocate_new_kernel_size);
>

So this is a clean to the PoC followed by an I-cache invalidate to the
PoU, right? Perhaps we could improve the comment while at it (avoid
'flush', and mention that the code needs to be cleaned to the PoC and
invalidated from the I-cache for execution with the MMU off and
I-cache on)
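
Something along these lines, perhaps (wording illustrative only):

	/*
	 * Clean reloc_code to the PoC and invalidate it from the
	 * I-cache so that it can be executed with the MMU off and
	 * the I-cache on.
	 */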


>         return 0;
>  }
> --
> 2.31.1.607.g51e8a6a459-goog
>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions
  2021-05-11 14:42 ` [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
@ 2021-05-11 15:09   ` Ard Biesheuvel
  2021-05-11 15:49     ` Mark Rutland
  2021-05-12  9:56     ` Fuad Tabba
  0 siblings, 2 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2021-05-11 15:09 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: Linux ARM, Will Deacon, Catalin Marinas, Mark Rutland,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose

On Tue, 11 May 2021 at 16:43, Fuad Tabba <tabba@google.com> wrote:
>
> Although naming across the codebase isn't that consistent, it
> tends to follow certain patterns. Moreover, the term "flush"
> isn't defined in the Arm Architecture Reference Manual, and might
> be interpreted to mean clean, invalidate, or both for a cache.
>
> Rename arm64-internal functions to make the naming internally
> consistent, as well as making it consistent with the Arm ARM, by
> clarifying whether the operation is a clean, invalidate, or both.
> Also specify the point to which the operation applies, i.e., the
> point of unification (PoU), coherence (PoC), or persistence
> (PoP).
>
> This commit applies the following sed transformation to all files
> under arch/arm64:
>
> "s/\b__flush_cache_range\b/__clean_inval_cache_pou_macro/g;"\
> "s/\b__flush_icache_range\b/__clean_inval_cache_pou/g;"\
> "s/\binvalidate_icache_range\b/__inval_icache_pou/g;"\
> "s/\b__flush_dcache_area\b/__clean_inval_dcache_poc/g;"\
> "s/\b__inval_dcache_area\b/__inval_dcache_poc/g;"\
> "s/__clean_dcache_area_poc\b/__clean_dcache_poc/g;"\
> "s/\b__clean_dcache_area_pop\b/__clean_dcache_pop/g;"\
> "s/\b__clean_dcache_area_pou\b/__clean_dcache_pou/g;"\
> "s/\b__flush_cache_user_range\b/__clean_inval_cache_user_pou/g;"\
> "s/\b__flush_icache_all\b/__clean_inval_all_icache_pou/g;"
>
> Note that __clean_dcache_area_poc is deliberately missing a word
> boundary check to match the efistub symbols in image-vars.h.
>
> No functional change intended.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>

I am a big fan of this change: code is so much easier to read if the
names of subroutines match their intent. I would suggest, though, that
we get rid of all the leading underscores while at it: we often use
them when refactoring existing routines into separate pieces (which is
where at least some of these came from), but here, they seem to have
little value.
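
For instance (sketch only), a declaration such as:

	extern void __clean_inval_dcache_poc(unsigned long start, unsigned long end);

would simply become:

	extern void clean_inval_dcache_poc(unsigned long start, unsigned long end);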


> ---
>  arch/arm64/include/asm/arch_gicv3.h |  2 +-
>  arch/arm64/include/asm/cacheflush.h | 36 +++++++++----------
>  arch/arm64/include/asm/efi.h        |  2 +-
>  arch/arm64/include/asm/kvm_mmu.h    |  6 ++--
>  arch/arm64/kernel/alternative.c     |  2 +-
>  arch/arm64/kernel/efi-entry.S       |  4 +--
>  arch/arm64/kernel/head.S            |  8 ++---
>  arch/arm64/kernel/hibernate.c       | 12 +++----
>  arch/arm64/kernel/idreg-override.c  |  2 +-
>  arch/arm64/kernel/image-vars.h      |  2 +-
>  arch/arm64/kernel/insn.c            |  2 +-
>  arch/arm64/kernel/kaslr.c           |  6 ++--
>  arch/arm64/kernel/machine_kexec.c   | 10 +++---
>  arch/arm64/kernel/smp.c             |  4 +--
>  arch/arm64/kernel/smp_spin_table.c  |  4 +--
>  arch/arm64/kernel/sys_compat.c      |  2 +-
>  arch/arm64/kvm/arm.c                |  2 +-
>  arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +--
>  arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
>  arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
>  arch/arm64/kvm/hyp/pgtable.c        |  4 +--
>  arch/arm64/lib/uaccess_flushcache.c |  4 +--
>  arch/arm64/mm/cache.S               | 56 ++++++++++++++---------------
>  arch/arm64/mm/flush.c               | 12 +++----
>  24 files changed, 95 insertions(+), 95 deletions(-)
>
> diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
> index ed1cc9d8e6df..4b7ac9098e8f 100644
> --- a/arch/arm64/include/asm/arch_gicv3.h
> +++ b/arch/arm64/include/asm/arch_gicv3.h
> @@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
>  #define gic_write_lpir(v, c)           writeq_relaxed(v, c)
>
>  #define gic_flush_dcache_to_poc(a,l)   \
> -       __flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> +       __clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
>
>  #define gits_read_baser(c)             readq_relaxed(c)
>  #define gits_write_baser(v, c)         writeq_relaxed(v, c)
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 4b91d3530013..526eee4522eb 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -34,54 +34,54 @@
>   *             - start  - virtual start address
>   *             - end    - virtual end address
>   *
> - *     __flush_icache_range(start, end)
> + *     __clean_inval_cache_pou(start, end)
>   *
>   *             Ensure coherency between the I-cache and the D-cache region to
>   *             the Point of Unification.
>   *
> - *     __flush_cache_user_range(start, end)
> + *     __clean_inval_cache_user_pou(start, end)
>   *
>   *             Ensure coherency between the I-cache and the D-cache region to
>   *             the Point of Unification.
>   *             Use only if the region might access user memory.
>   *
> - *     invalidate_icache_range(start, end)
> + *     __inval_icache_pou(start, end)
>   *
>   *             Invalidate I-cache region to the Point of Unification.
>   *
> - *     __flush_dcache_area(start, end)
> + *     __clean_inval_dcache_poc(start, end)
>   *
>   *             Clean and invalidate D-cache region to the Point of Coherence.
>   *
> - *     __inval_dcache_area(start, end)
> + *     __inval_dcache_poc(start, end)
>   *
>   *             Invalidate D-cache region to the Point of Coherence.
>   *
> - *     __clean_dcache_area_poc(start, end)
> + *     __clean_dcache_poc(start, end)
>   *
>   *             Clean D-cache region to the Point of Coherence.
>   *
> - *     __clean_dcache_area_pop(start, end)
> + *     __clean_dcache_pop(start, end)
>   *
>   *             Clean D-cache region to the Point of Persistence.
>   *
> - *     __clean_dcache_area_pou(start, end)
> + *     __clean_dcache_pou(start, end)
>   *
>   *             Clean D-cache region to the Point of Unification.
>   */
> -extern void __flush_icache_range(unsigned long start, unsigned long end);
> -extern void invalidate_icache_range(unsigned long start, unsigned long end);
> -extern void __flush_dcache_area(unsigned long start, unsigned long end);
> -extern void __inval_dcache_area(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
> -extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> +extern void __clean_inval_cache_pou(unsigned long start, unsigned long end);
> +extern void __inval_icache_pou(unsigned long start, unsigned long end);
> +extern void __clean_inval_dcache_poc(unsigned long start, unsigned long end);
> +extern void __inval_dcache_poc(unsigned long start, unsigned long end);
> +extern void __clean_dcache_poc(unsigned long start, unsigned long end);
> +extern void __clean_dcache_pop(unsigned long start, unsigned long end);
> +extern void __clean_dcache_pou(unsigned long start, unsigned long end);
> +extern long __clean_inval_cache_user_pou(unsigned long start, unsigned long end);
>  extern void sync_icache_aliases(unsigned long start, unsigned long end);
>
>  static inline void flush_icache_range(unsigned long start, unsigned long end)
>  {
> -       __flush_icache_range(start, end);
> +       __clean_inval_cache_pou(start, end);
>
>         /*
>          * IPI all online CPUs so that they undergo a context synchronization
> @@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
>  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
>  extern void flush_dcache_page(struct page *);
>
> -static __always_inline void __flush_icache_all(void)
> +static __always_inline void __clean_inval_all_icache_pou(void)
>  {
>         if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
>                 return;
> diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
> index 0ae2397076fd..d1e2a4bf8def 100644
> --- a/arch/arm64/include/asm/efi.h
> +++ b/arch/arm64/include/asm/efi.h
> @@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
>
>  static inline void efi_capsule_flush_cache_range(void *addr, int size)
>  {
> -       __flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> +       __clean_inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
>  }
>
>  #endif /* _ASM_EFI_H */
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 33293d5855af..29d2aa6f3940 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
>  struct kvm;
>
>  #define kvm_flush_dcache_to_poc(a,l)   \
> -       __flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> +       __clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
>
>  static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
>  {
> @@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
>  {
>         if (icache_is_aliasing()) {
>                 /* any kind of VIPT cache */
> -               __flush_icache_all();
> +               __clean_inval_all_icache_pou();
>         } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
>                 /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
>                 void *va = page_address(pfn_to_page(pfn));
>
> -               invalidate_icache_range((unsigned long)va,
> +               __inval_icache_pou((unsigned long)va,
>                                         (unsigned long)va + size);
>         }
>  }
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index c906d20c7b52..ea2d52fa9a0c 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
>          */
>         if (!is_module) {
>                 dsb(ish);
> -               __flush_icache_all();
> +               __clean_inval_all_icache_pou();
>                 isb();
>
>                 /* Ignore ARM64_CB bit from feature mask */
> diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
> index 72e6a580290a..230506f460ec 100644
> --- a/arch/arm64/kernel/efi-entry.S
> +++ b/arch/arm64/kernel/efi-entry.S
> @@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
>          */
>         ldr     w1, =kernel_size
>         add     x1, x0, x1
> -       bl      __clean_dcache_area_poc
> +       bl      __clean_dcache_poc
>         ic      ialluis
>
>         /*
> @@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
>          */
>         adr     x0, 0f
>         adr     x1, 3f
> -       bl      __clean_dcache_area_poc
> +       bl      __clean_dcache_poc
>  0:
>         /* Turn off Dcache and MMU */
>         mrs     x0, CurrentEL
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 8df0ac8d9123..ea0447c5010a 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
>                                                 // MMU off
>
>         add     x1, x0, #0x20                   // 4 x 8 bytes
> -       b       __inval_dcache_area             // tail call
> +       b       __inval_dcache_poc              // tail call
>  SYM_CODE_END(preserve_boot_args)
>
>  /*
> @@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
>          */
>         adrp    x0, init_pg_dir
>         adrp    x1, init_pg_end
> -       bl      __inval_dcache_area
> +       bl      __inval_dcache_poc
>
>         /*
>          * Clear the init page tables.
> @@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
>
>         adrp    x0, idmap_pg_dir
>         adrp    x1, idmap_pg_end
> -       bl      __inval_dcache_area
> +       bl      __inval_dcache_poc
>
>         adrp    x0, init_pg_dir
>         adrp    x1, init_pg_end
> -       bl      __inval_dcache_area
> +       bl      __inval_dcache_poc
>
>         ret     x28
>  SYM_FUNC_END(__create_page_tables)
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index b40ddce71507..ec871b24fd5b 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
>                 return -ENOMEM;
>
>         memcpy(page, src_start, length);
> -       __flush_icache_range((unsigned long)page, (unsigned long)page + length);
> +       __clean_inval_cache_pou((unsigned long)page, (unsigned long)page + length);
>         rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
>         if (rc)
>                 return rc;
> @@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
>                 ret = swsusp_save();
>         } else {
>                 /* Clean kernel core startup/idle code to PoC*/
> -               __flush_dcache_area((unsigned long)__mmuoff_data_start,
> +               __clean_inval_dcache_poc((unsigned long)__mmuoff_data_start,
>                                     (unsigned long)__mmuoff_data_end);
> -               __flush_dcache_area((unsigned long)__idmap_text_start,
> +               __clean_inval_dcache_poc((unsigned long)__idmap_text_start,
>                                     (unsigned long)__idmap_text_end);
>
>                 /* Clean kvm setup code to PoC? */
>                 if (el2_reset_needed()) {
> -                       __flush_dcache_area(
> +                       __clean_inval_dcache_poc(
>                                 (unsigned long)__hyp_idmap_text_start,
>                                 (unsigned long)__hyp_idmap_text_end);
> -                       __flush_dcache_area((unsigned long)__hyp_text_start,
> +                       __clean_inval_dcache_poc((unsigned long)__hyp_text_start,
>                                             (unsigned long)__hyp_text_end);
>                 }
>
> @@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
>          * The hibernate exit text contains a set of el2 vectors, that will
>          * be executed at el2 with the mmu off in order to reload hyp-stub.
>          */
> -       __flush_dcache_area((unsigned long)hibernate_exit,
> +       __clean_inval_dcache_poc((unsigned long)hibernate_exit,
>                             (unsigned long)hibernate_exit + exit_size);
>
>         /*
> diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
> index 3dd515baf526..6b4b5727f2db 100644
> --- a/arch/arm64/kernel/idreg-override.c
> +++ b/arch/arm64/kernel/idreg-override.c
> @@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
>
>         for (i = 0; i < ARRAY_SIZE(regs); i++) {
>                 if (regs[i]->override)
> -                       __flush_dcache_area((unsigned long)regs[i]->override,
> +                       __clean_inval_dcache_poc((unsigned long)regs[i]->override,
>                                             (unsigned long)regs[i]->override +
>                                             sizeof(*regs[i]->override));
>         }
> diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> index bcf3c2755370..14beda6a573d 100644
> --- a/arch/arm64/kernel/image-vars.h
> +++ b/arch/arm64/kernel/image-vars.h
> @@ -35,7 +35,7 @@ __efistub_strnlen             = __pi_strnlen;
>  __efistub_strcmp               = __pi_strcmp;
>  __efistub_strncmp              = __pi_strncmp;
>  __efistub_strrchr              = __pi_strrchr;
> -__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
> +__efistub___clean_dcache_poc = __pi___clean_dcache_poc;
>
>  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>  __efistub___memcpy             = __pi_memcpy;
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 6c0de2f60ea9..11c7be09e305 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
>
>         ret = aarch64_insn_write(tp, insn);
>         if (ret == 0)
> -               __flush_icache_range((uintptr_t)tp,
> +               __clean_inval_cache_pou((uintptr_t)tp,
>                                      (uintptr_t)tp + AARCH64_INSN_SIZE);
>
>         return ret;
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 49cccd03cb37..038a4cc7de93 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
>          * we end up running with module randomization disabled.
>          */
>         module_alloc_base = (u64)_etext - MODULES_VSIZE;
> -       __flush_dcache_area((unsigned long)&module_alloc_base,
> +       __clean_inval_dcache_poc((unsigned long)&module_alloc_base,
>                             (unsigned long)&module_alloc_base +
>                                     sizeof(module_alloc_base));
>
> @@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
>         module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
>         module_alloc_base &= PAGE_MASK;
>
> -       __flush_dcache_area((unsigned long)&module_alloc_base,
> +       __clean_inval_dcache_poc((unsigned long)&module_alloc_base,
>                             (unsigned long)&module_alloc_base +
>                                     sizeof(module_alloc_base));
> -       __flush_dcache_area((unsigned long)&memstart_offset_seed,
> +       __clean_inval_dcache_poc((unsigned long)&memstart_offset_seed,
>                             (unsigned long)&memstart_offset_seed +
>                                     sizeof(memstart_offset_seed));
>
> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> index 4cada9000acf..0e20a789b03e 100644
> --- a/arch/arm64/kernel/machine_kexec.c
> +++ b/arch/arm64/kernel/machine_kexec.c
> @@ -69,10 +69,10 @@ int machine_kexec_post_load(struct kimage *kimage)
>         kexec_image_info(kimage);
>
>         /* Flush the reloc_code in preparation for its execution. */
> -       __flush_dcache_area((unsigned long)reloc_code,
> +       __clean_inval_dcache_poc((unsigned long)reloc_code,
>                             (unsigned long)reloc_code +
>                                     arm64_relocate_new_kernel_size);
> -       invalidate_icache_range((uintptr_t)reloc_code,
> +       __inval_icache_pou((uintptr_t)reloc_code,
>                                 (uintptr_t)reloc_code +
>                                         arm64_relocate_new_kernel_size);
>
> @@ -108,7 +108,7 @@ static void kexec_list_flush(struct kimage *kimage)
>                 unsigned long addr;
>
>                 /* flush the list entries. */
> -               __flush_dcache_area((unsigned long)entry,
> +               __clean_inval_dcache_poc((unsigned long)entry,
>                                     (unsigned long)entry +
>                                             sizeof(kimage_entry_t));
>
> @@ -125,7 +125,7 @@ static void kexec_list_flush(struct kimage *kimage)
>                         break;
>                 case IND_SOURCE:
>                         /* flush the source pages. */
> -                       __flush_dcache_area(addr, addr + PAGE_SIZE);
> +                       __clean_inval_dcache_poc(addr, addr + PAGE_SIZE);
>                         break;
>                 case IND_DESTINATION:
>                         break;
> @@ -152,7 +152,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
>                         kimage->segment[i].memsz,
>                         kimage->segment[i].memsz /  PAGE_SIZE);
>
> -               __flush_dcache_area(
> +               __clean_inval_dcache_poc(
>                         (unsigned long)phys_to_virt(kimage->segment[i].mem),
>                         (unsigned long)phys_to_virt(kimage->segment[i].mem) +
>                                 kimage->segment[i].memsz);
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 5fcdee331087..2044210ed15a 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
>         secondary_data.task = idle;
>         secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
>         update_cpu_boot_status(CPU_MMU_OFF);
> -       __flush_dcache_area((unsigned long)&secondary_data,
> +       __clean_inval_dcache_poc((unsigned long)&secondary_data,
>                             (unsigned long)&secondary_data +
>                                     sizeof(secondary_data));
>
> @@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
>         pr_crit("CPU%u: failed to come online\n", cpu);
>         secondary_data.task = NULL;
>         secondary_data.stack = NULL;
> -       __flush_dcache_area((unsigned long)&secondary_data,
> +       __clean_inval_dcache_poc((unsigned long)&secondary_data,
>                             (unsigned long)&secondary_data +
>                                     sizeof(secondary_data));
>         status = READ_ONCE(secondary_data.status);
> diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> index 58d804582a35..a946ccb9791e 100644
> --- a/arch/arm64/kernel/smp_spin_table.c
> +++ b/arch/arm64/kernel/smp_spin_table.c
> @@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
>         unsigned long size = sizeof(secondary_holding_pen_release);
>
>         secondary_holding_pen_release = val;
> -       __flush_dcache_area((unsigned long)start, (unsigned long)start + size);
> +       __clean_inval_dcache_poc((unsigned long)start, (unsigned long)start + size);
>  }
>
>
> @@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
>          * the boot protocol.
>          */
>         writeq_relaxed(pa_holding_pen, release_addr);
> -       __flush_dcache_area((__force unsigned long)release_addr,
> +       __clean_inval_dcache_poc((__force unsigned long)release_addr,
>                             (__force unsigned long)release_addr +
>                                     sizeof(*release_addr));
>
> diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
> index 265fe3eb1069..fdd415f8d841 100644
> --- a/arch/arm64/kernel/sys_compat.c
> +++ b/arch/arm64/kernel/sys_compat.c
> @@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
>                         dsb(ish);
>                 }
>
> -               ret = __flush_cache_user_range(start, start + chunk);
> +               ret = __clean_inval_cache_user_pou(start, start + chunk);
>                 if (ret)
>                         return ret;
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 1cb39c0803a4..edeca89405ff 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
>                 if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
>                         stage2_unmap_vm(vcpu->kvm);
>                 else
> -                       __flush_icache_all();
> +                       __clean_inval_all_icache_pou();
>         }
>
>         vcpu_reset_hcr(vcpu);
> diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
> index 36cef6915428..a906dd596e66 100644
> --- a/arch/arm64/kvm/hyp/nvhe/cache.S
> +++ b/arch/arm64/kvm/hyp/nvhe/cache.S
> @@ -7,7 +7,7 @@
>  #include <asm/assembler.h>
>  #include <asm/alternative.h>
>
> -SYM_FUNC_START_PI(__flush_dcache_area)
> +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
>         dcache_by_line_op civac, sy, x0, x1, x2, x3
>         ret
> -SYM_FUNC_END_PI(__flush_dcache_area)
> +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 5dffe928f256..a16719f5068d 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
>         for (i = 0; i < hyp_nr_cpus; i++) {
>                 params = per_cpu_ptr(&kvm_init_params, i);
>                 params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
> -               __flush_dcache_area((unsigned long)params,
> +               __clean_inval_dcache_poc((unsigned long)params,
>                                     (unsigned long)params + sizeof(*params));
>         }
>  }
> diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> index 83dc3b271bc5..184c9c7c13bd 100644
> --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> @@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
>          * you should be running with VHE enabled.
>          */
>         if (icache_is_vpipt())
> -               __flush_icache_all();
> +               __clean_inval_all_icache_pou();
>
>         __tlb_switch_to_host(&cxt);
>  }
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 10d2f04013d4..fb2613f458de 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>         if (need_flush) {
>                 kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
>
> -               __flush_dcache_area((unsigned long)pte_follow,
> +               __clean_inval_dcache_poc((unsigned long)pte_follow,
>                                     (unsigned long)pte_follow +
>                                             kvm_granule_size(level));
>         }
> @@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>                 return 0;
>
>         pte_follow = kvm_pte_follow(pte, mm_ops);
> -       __flush_dcache_area((unsigned long)pte_follow,
> +       __clean_inval_dcache_poc((unsigned long)pte_follow,
>                             (unsigned long)pte_follow +
>                                     kvm_granule_size(level));
>         return 0;
> diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
> index 62ea989effe8..b1a6d9823864 100644
> --- a/arch/arm64/lib/uaccess_flushcache.c
> +++ b/arch/arm64/lib/uaccess_flushcache.c
> @@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
>          * barrier to order the cache maintenance against the memcpy.
>          */
>         memcpy(dst, src, cnt);
> -       __clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
> +       __clean_dcache_pop((unsigned long)dst, (unsigned long)dst + cnt);
>  }
>  EXPORT_SYMBOL_GPL(memcpy_flushcache);
>
> @@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
>         rc = raw_copy_from_user(to, from, n);
>
>         /* See above */
> -       __clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
> +       __clean_dcache_pop((unsigned long)to, (unsigned long)to + n - rc);
>         return rc;
>  }
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index d8434e57fab3..2df7212de799 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -15,7 +15,7 @@
>  #include <asm/asm-uaccess.h>
>
>  /*
> - *     __flush_cache_range(start,end) [needs_uaccess]
> + *     __clean_inval_cache_pou_macro(start,end) [needs_uaccess]
>   *
>   *     Ensure that the I and D caches are coherent within specified region.
>   *     This is typically used when code has been written to a memory region,
> @@ -25,7 +25,7 @@
>   *     - end           - virtual end address of region
>   *     - needs_uaccess - (macro parameter) might access user space memory
>   */
> -.macro __flush_cache_range, needs_uaccess
> +.macro __clean_inval_cache_pou_macro, needs_uaccess
>         .if     \needs_uaccess
>         uaccess_ttbr0_enable x2, x3, x4
>         .endif
> @@ -77,12 +77,12 @@ alternative_else_nop_endif
>   *     - start   - virtual start address of region
>   *     - end     - virtual end address of region
>   */
> -SYM_FUNC_START(__flush_icache_range)
> -       __flush_cache_range needs_uaccess=0
> -SYM_FUNC_END(__flush_icache_range)
> +SYM_FUNC_START(__clean_inval_cache_pou)
> +       __clean_inval_cache_pou_macro needs_uaccess=0
> +SYM_FUNC_END(__clean_inval_cache_pou)
>
>  /*
> - *     __flush_cache_user_range(start,end)
> + *     __clean_inval_cache_user_pou(start,end)
>   *
>   *     Ensure that the I and D caches are coherent within specified region.
>   *     This is typically used when code has been written to a memory region,
> @@ -91,19 +91,19 @@ SYM_FUNC_END(__flush_icache_range)
>   *     - start   - virtual start address of region
>   *     - end     - virtual end address of region
>   */
> -SYM_FUNC_START(__flush_cache_user_range)
> -       __flush_cache_range needs_uaccess=1
> -SYM_FUNC_END(__flush_cache_user_range)
> +SYM_FUNC_START(__clean_inval_cache_user_pou)
> +       __clean_inval_cache_pou_macro needs_uaccess=1
> +SYM_FUNC_END(__clean_inval_cache_user_pou)
>
>  /*
> - *     invalidate_icache_range(start,end)
> + *     __inval_icache_pou(start,end)
>   *
>   *     Ensure that the I cache is invalid within specified region.
>   *
>   *     - start   - virtual start address of region
>   *     - end     - virtual end address of region
>   */
> -SYM_FUNC_START(invalidate_icache_range)
> +SYM_FUNC_START(__inval_icache_pou)
>  alternative_if ARM64_HAS_CACHE_DIC
>         isb
>         ret
> @@ -111,10 +111,10 @@ alternative_else_nop_endif
>
>         invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
>         ret
> -SYM_FUNC_END(invalidate_icache_range)
> +SYM_FUNC_END(__inval_icache_pou)
>
>  /*
> - *     __flush_dcache_area(start, end)
> + *     __clean_inval_dcache_poc(start, end)
>   *
>   *     Ensure that any D-cache lines for the interval [start, end)
>   *     are cleaned and invalidated to the PoC.
> @@ -122,13 +122,13 @@ SYM_FUNC_END(invalidate_icache_range)
>   *     - start   - virtual start address of region
>   *     - end     - virtual end address of region
>   */
> -SYM_FUNC_START_PI(__flush_dcache_area)
> +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
>         dcache_by_line_op civac, sy, x0, x1, x2, x3
>         ret
> -SYM_FUNC_END_PI(__flush_dcache_area)
> +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
>
>  /*
> - *     __clean_dcache_area_pou(start, end)
> + *     __clean_dcache_pou(start, end)
>   *
>   *     Ensure that any D-cache lines for the interval [start, end)
>   *     are cleaned to the PoU.
> @@ -136,17 +136,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
>   *     - start   - virtual start address of region
>   *     - end     - virtual end address of region
>   */
> -SYM_FUNC_START(__clean_dcache_area_pou)
> +SYM_FUNC_START(__clean_dcache_pou)
>  alternative_if ARM64_HAS_CACHE_IDC
>         dsb     ishst
>         ret
>  alternative_else_nop_endif
>         dcache_by_line_op cvau, ish, x0, x1, x2, x3
>         ret
> -SYM_FUNC_END(__clean_dcache_area_pou)
> +SYM_FUNC_END(__clean_dcache_pou)
>
>  /*
> - *     __inval_dcache_area(start, end)
> + *     __inval_dcache_poc(start, end)
>   *
>   *     Ensure that any D-cache lines for the interval [start, end)
>   *     are invalidated. Any partial lines at the ends of the interval are
> @@ -156,7 +156,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
>   *     - end     - kernel end address of region
>   */
>  SYM_FUNC_START_LOCAL(__dma_inv_area)
> -SYM_FUNC_START_PI(__inval_dcache_area)
> +SYM_FUNC_START_PI(__inval_dcache_poc)
>         /* FALLTHROUGH */
>
>  /*
> @@ -181,11 +181,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
>         b.lo    2b
>         dsb     sy
>         ret
> -SYM_FUNC_END_PI(__inval_dcache_area)
> +SYM_FUNC_END_PI(__inval_dcache_poc)
>  SYM_FUNC_END(__dma_inv_area)
>
>  /*
> - *     __clean_dcache_area_poc(start, end)
> + *     __clean_dcache_poc(start, end)
>   *
>   *     Ensure that any D-cache lines for the interval [start, end)
>   *     are cleaned to the PoC.
> @@ -194,7 +194,7 @@ SYM_FUNC_END(__dma_inv_area)
>   *     - end     - virtual end address of region
>   */
>  SYM_FUNC_START_LOCAL(__dma_clean_area)
> -SYM_FUNC_START_PI(__clean_dcache_area_poc)
> +SYM_FUNC_START_PI(__clean_dcache_poc)
>         /* FALLTHROUGH */
>
>  /*
> @@ -204,11 +204,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
>   */
>         dcache_by_line_op cvac, sy, x0, x1, x2, x3
>         ret
> -SYM_FUNC_END_PI(__clean_dcache_area_poc)
> +SYM_FUNC_END_PI(__clean_dcache_poc)
>  SYM_FUNC_END(__dma_clean_area)
>
>  /*
> - *     __clean_dcache_area_pop(start, end)
> + *     __clean_dcache_pop(start, end)
>   *
>   *     Ensure that any D-cache lines for the interval [start, end)
>   *     are cleaned to the PoP.
> @@ -216,13 +216,13 @@ SYM_FUNC_END(__dma_clean_area)
>   *     - start   - virtual start address of region
>   *     - end     - virtual end address of region
>   */
> -SYM_FUNC_START_PI(__clean_dcache_area_pop)
> +SYM_FUNC_START_PI(__clean_dcache_pop)
>         alternative_if_not ARM64_HAS_DCPOP
> -       b       __clean_dcache_area_poc
> +       b       __clean_dcache_poc
>         alternative_else_nop_endif
>         dcache_by_line_op cvap, sy, x0, x1, x2, x3
>         ret
> -SYM_FUNC_END_PI(__clean_dcache_area_pop)
> +SYM_FUNC_END_PI(__clean_dcache_pop)
>
>  /*
>   *     __dma_flush_area(start, size)
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 143f625e7727..005b92148252 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -17,14 +17,14 @@
>  void sync_icache_aliases(unsigned long start, unsigned long end)
>  {
>         if (icache_is_aliasing()) {
> -               __clean_dcache_area_pou(start, end);
> -               __flush_icache_all();
> +               __clean_dcache_pou(start, end);
> +               __clean_inval_all_icache_pou();
>         } else {
>                 /*
>                  * Don't issue kick_all_cpus_sync() after I-cache invalidation
>                  * for user mappings.
>                  */
> -               __flush_icache_range(start, end);
> +               __clean_inval_cache_pou(start, end);
>         }
>  }
>
> @@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
>  /*
>   * Additional functions defined in assembly.
>   */
> -EXPORT_SYMBOL(__flush_icache_range);
> +EXPORT_SYMBOL(__clean_inval_cache_pou);
>
>  #ifdef CONFIG_ARCH_HAS_PMEM_API
>  void arch_wb_cache_pmem(void *addr, size_t size)
>  {
>         /* Ensure order against any prior non-cacheable writes */
>         dmb(osh);
> -       __clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
> +       __clean_dcache_pop((unsigned long)addr, (unsigned long)addr + size);
>  }
>  EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
>
>  void arch_invalidate_pmem(void *addr, size_t size)
>  {
> -       __inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> +       __inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
>  }
>  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
>  #endif
> --
> 2.31.1.607.g51e8a6a459-goog
>


* Re: [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range
  2021-05-11 14:42 ` [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
@ 2021-05-11 15:22   ` Mark Rutland
  2021-05-12  8:52     ` Fuad Tabba
  2021-05-11 16:53   ` Robin Murphy
  1 sibling, 1 reply; 32+ messages in thread
From: Mark Rutland @ 2021-05-11 15:22 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose

Hi Fuad,

On Tue, May 11, 2021 at 03:42:40PM +0100, Fuad Tabba wrote:
> __flush_icache_range works on the kernel linear map, and doesn't
> need uaccess. The existing code is a side-effect of its current
> implementation with __flush_cache_user_range fallthrough.
> 
> Instead of fallthrough to share the code, use a common macro for
> the two where the caller can specify whether user-space access is
> needed.

FWIW, I agree that we should fix __flush_icache_range to not fiddle
with uaccess, and that we should split these.

> No functional change intended.

There is a performance change here, since the existing
`__flush_cache_user_range` takes IDC and DIC into account, whereas
`invalidate_icache_by_line` does not.

There's also an existing oversight where `__flush_cache_user_range`
takes ARM64_WORKAROUND_CLEAN_CACHE into account, but
`invalidate_icache_by_line` does not. I think that's a bug that we
should fix first, so that we can backport something to stable. Arguably
the same is true in `swsusp_arch_suspend_exit`, but for that we could add
a comment and always use `DC CIVAC`.
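
To make the IDC/DIC point concrete, here's a rough C sketch of the fast
paths involved (illustrative only; `clean_dcache_to_pou` and
`invalidate_icache_to_pou` are hypothetical stand-ins for the by-line
loops, and the feature checks need <asm/cpufeature.h> and
<asm/barrier.h>):

	/*
	 * With IDC, cleaning the D-cache to the PoU is not required
	 * for I/D coherence, so a dsb suffices; with DIC, I-cache
	 * invalidation to the PoU is not required, so an isb
	 * suffices. invalidate_icache_by_line has no such
	 * short-circuits, hence the performance difference.
	 */
	if (cpus_have_const_cap(ARM64_HAS_CACHE_IDC))
		dsb(ishst);
	else
		clean_dcache_to_pou(start, end);	/* hypothetical */

	if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
		isb();
	else
		invalidate_icache_to_pou(start, end);	/* hypothetical */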

Thanks,
Mark.

> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/assembler.h | 13 ++++--
>  arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
>  2 files changed, 54 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 8418c1bd8f04..6ff7a3a3b238 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -426,16 +426,21 @@ alternative_endif
>   * Macro to perform an instruction cache maintenance for the interval
>   * [start, end)
>   *
> - * 	start, end:	virtual addresses describing the region
> - *	label:		A label to branch to on user fault.
> - * 	Corrupts:	tmp1, tmp2
> + *	start, end:	virtual addresses describing the region
> + *	needs_uaccess:	might access user space memory
> + *	label:		label to branch to on user fault (if needs_uaccess)
> + *	Corrupts:	tmp1, tmp2
>   */
> -	.macro invalidate_icache_by_line start, end, tmp1, tmp2, label
> +	.macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
>  	icache_line_size \tmp1, \tmp2
>  	sub	\tmp2, \tmp1, #1
>  	bic	\tmp2, \start, \tmp2
>  9997:
> +	.if	\needs_uaccess
>  USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
> +	.else
> +	ic	ivau, \tmp2
> +	.endif
>  	add	\tmp2, \tmp2, \tmp1
>  	cmp	\tmp2, \end
>  	b.lo	9997b
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 2d881f34dd9d..092f73acdf9a 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -15,30 +15,20 @@
>  #include <asm/asm-uaccess.h>
>  
>  /*
> - *	flush_icache_range(start,end)
> + *	__flush_cache_range(start,end) [needs_uaccess]
>   *
>   *	Ensure that the I and D caches are coherent within specified region.
>   *	This is typically used when code has been written to a memory region,
>   *	and will be executed.
>   *
> - *	- start   - virtual start address of region
> - *	- end     - virtual end address of region
> + *	- start   	- virtual start address of region
> + *	- end     	- virtual end address of region
> + *	- needs_uaccess - (macro parameter) might access user space memory
>   */
> -SYM_FUNC_START(__flush_icache_range)
> -	/* FALLTHROUGH */
> -
> -/*
> - *	__flush_cache_user_range(start,end)
> - *
> - *	Ensure that the I and D caches are coherent within specified region.
> - *	This is typically used when code has been written to a memory region,
> - *	and will be executed.
> - *
> - *	- start   - virtual start address of region
> - *	- end     - virtual end address of region
> - */
> -SYM_FUNC_START(__flush_cache_user_range)
> +.macro	__flush_cache_range, needs_uaccess
> +	.if 	\needs_uaccess
>  	uaccess_ttbr0_enable x2, x3, x4
> +	.endif
>  alternative_if ARM64_HAS_CACHE_IDC
>  	dsb	ishst
>  	b	7f
> @@ -47,7 +37,11 @@ alternative_else_nop_endif
>  	sub	x3, x2, #1
>  	bic	x4, x0, x3
>  1:
> +	.if 	\needs_uaccess
>  user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> +	.else
> +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> +	.endif
>  	add	x4, x4, x2
>  	cmp	x4, x1
>  	b.lo	1b
> @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
>  	isb
>  	b	8f
>  alternative_else_nop_endif
> -	invalidate_icache_by_line x0, x1, x2, x3, 9f
> +	invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
>  8:	mov	x0, #0
>  1:
> +	.if	\needs_uaccess
>  	uaccess_ttbr0_disable x1, x2
> +	.endif
>  	ret
> +
> +	.if 	\needs_uaccess
>  9:
>  	mov	x0, #-EFAULT
>  	b	1b
> +	.endif
> +.endm
> +
> +/*
> + *	flush_icache_range(start,end)
> + *
> + *	Ensure that the I and D caches are coherent within specified region.
> + *	This is typically used when code has been written to a memory region,
> + *	and will be executed.
> + *
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
> + */
> +SYM_FUNC_START(__flush_icache_range)
> +	__flush_cache_range needs_uaccess=0
>  SYM_FUNC_END(__flush_icache_range)
> +
> +/*
> + *	__flush_cache_user_range(start,end)
> + *
> + *	Ensure that the I and D caches are coherent within specified region.
> + *	This is typically used when code has been written to a memory region,
> + *	and will be executed.
> + *
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
> + */
> +SYM_FUNC_START(__flush_cache_user_range)
> +	__flush_cache_range needs_uaccess=1
>  SYM_FUNC_END(__flush_cache_user_range)
>  
>  /*
> @@ -86,7 +112,7 @@ alternative_else_nop_endif
>  
>  	uaccess_ttbr0_enable x2, x3, x4
>  
> -	invalidate_icache_by_line x0, x1, x2, x3, 2f
> +	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
>  	mov	x0, xzr
>  1:
>  	uaccess_ttbr0_disable x1, x2
> -- 
> 2.31.1.607.g51e8a6a459-goog
> 


* Re: [PATCH v1 02/13] arm64: Do not enable uaccess for invalidate_icache_range
  2021-05-11 14:42 ` [PATCH v1 02/13] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
@ 2021-05-11 15:34   ` Mark Rutland
  2021-05-12  9:35     ` Fuad Tabba
  0 siblings, 1 reply; 32+ messages in thread
From: Mark Rutland @ 2021-05-11 15:34 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose

On Tue, May 11, 2021 at 03:42:41PM +0100, Fuad Tabba wrote:
> invalidate_icache_range() works on the kernel linear map, and
> doesn't need uaccess. Remove the code that toggles
> uaccess_ttbr0_enable, as well as the code that emits an entry
> into the exception table (via the macro
> invalidate_icache_by_line).

Probably also worth mentioning the return type change, but regardless:

Acked-by: Mark Rutland <mark.rutland@arm.com>

I do worry this means we've been silently ignoring cases where this
faults, and so there's the risk that this has been masking bugs
elsewhere. It'd be good to throw Syzkaller and the like at this ASAP.
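
For reference, a minimal sketch of the caller-visible effect of the
return type change (the caller and its error path here are
hypothetical, not taken from the tree):

	/* before: a fault could be reported via the exception table */
	if (invalidate_icache_range(start, end))
		return -EFAULT;

	/* after: operates on the kernel linear map and cannot fault */
	invalidate_icache_range(start, end);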

Thanks,
Mark.

> No functional change intended.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/cacheflush.h |  2 +-
>  arch/arm64/mm/cache.S               | 11 +----------
>  2 files changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 52e5c1623224..a586afa84172 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -57,7 +57,7 @@
>   *		- size   - region size
>   */
>  extern void __flush_icache_range(unsigned long start, unsigned long end);
> -extern int  invalidate_icache_range(unsigned long start, unsigned long end);
> +extern void invalidate_icache_range(unsigned long start, unsigned long end);
>  extern void __flush_dcache_area(void *addr, size_t len);
>  extern void __inval_dcache_area(void *addr, size_t len);
>  extern void __clean_dcache_area_poc(void *addr, size_t len);
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 092f73acdf9a..6babaaf34f17 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -105,21 +105,12 @@ SYM_FUNC_END(__flush_cache_user_range)
>   */
>  SYM_FUNC_START(invalidate_icache_range)
>  alternative_if ARM64_HAS_CACHE_DIC
> -	mov	x0, xzr
>  	isb
>  	ret
>  alternative_else_nop_endif
>  
> -	uaccess_ttbr0_enable x2, x3, x4
> -
> -	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
> -	mov	x0, xzr
> -1:
> -	uaccess_ttbr0_disable x1, x2
> +	invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
>  	ret
> -2:
> -	mov	x0, #-EFAULT
> -	b	1b
>  SYM_FUNC_END(invalidate_icache_range)
>  
>  /*
> -- 
> 2.31.1.607.g51e8a6a459-goog
> 


* Re: [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions
  2021-05-11 15:09   ` Ard Biesheuvel
@ 2021-05-11 15:49     ` Mark Rutland
  2021-05-12  9:51       ` Marc Zyngier
  2021-05-12 10:00       ` Fuad Tabba
  2021-05-12  9:56     ` Fuad Tabba
  1 sibling, 2 replies; 32+ messages in thread
From: Mark Rutland @ 2021-05-11 15:49 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Fuad Tabba, Linux ARM, Will Deacon, Catalin Marinas,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose

On Tue, May 11, 2021 at 05:09:18PM +0200, Ard Biesheuvel wrote:
> On Tue, 11 May 2021 at 16:43, Fuad Tabba <tabba@google.com> wrote:
> >
> > Although naming across the codebase isn't that consistent, it
> > tends to follow certain patterns. Moreover, the term "flush"
> > isn't defined in the Arm Architecture reference manual, and might
> > be interpreted to mean clean, invalidate, or both for a cache.
> >
> > Rename arm64-internal functions to make the naming internally
> > consistent, as well as making it consistent with the Arm ARM, by
> > clarifying whether the operation is a clean, invalidate, or both.
> > Also specify which point the operation applies to, i.e., the
> > point of unification (PoU), coherence (PoC), or persistence
> > (PoP).
> >
> > This commit applies the following sed transformation to all files
> > under arch/arm64:
> >
> > "s/\b__flush_cache_range\b/__clean_inval_cache_pou_macro/g;"\
> > "s/\b__flush_icache_range\b/__clean_inval_cache_pou/g;"\

For icaches, a "flush" is just an invalidate, so this doesn't need
"clean".

> > "s/\binvalidate_icache_range\b/__inval_icache_pou/g;"\
> > "s/\b__flush_dcache_area\b/__clean_inval_dcache_poc/g;"\
> > "s/\b__inval_dcache_area\b/__inval_dcache_poc/g;"\
> > "s/__clean_dcache_area_poc\b/__clean_dcache_poc/g;"\
> > "s/\b__clean_dcache_area_pop\b/__clean_dcache_pop/g;"\
> > "s/\b__clean_dcache_area_pou\b/__clean_dcache_pou/g;"\
> > "s/\b__flush_cache_user_range\b/__clean_inval_cache_user_pou/g;"\
> > "s/\b__flush_icache_all\b/__clean_inval_all_icache_pou/g;"

Likewise here.

> >
> > Note that __clean_dcache_area_poc is deliberately missing a word
> > boundary check to match the efistub symbols in image-vars.h.
> >
> > No functional change intended.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> 
> I am a big fan of this change: code is so much easier to read if the
> names of subroutines match their intent.

Likewise!

> I would suggest, though, that we get rid of all the leading
> underscores while at it: we often use them when refactoring existing
> routines into separate pieces (which is where at least some of these
> came from), but here, they seem to have little value.

That all makes sense to me; I'd also suggest we make the cache type the
prefix, e.g.

* icache_clean_pou
* dcache_clean_inval_poc
* caches_clean_inval_user_pou // D+I caches

... since then it's easier to read consistently, rather than having to
search for the cache type midway through the name.

Thanks,
Mark.

> 
> 
> > ---
> >  arch/arm64/include/asm/arch_gicv3.h |  2 +-
> >  arch/arm64/include/asm/cacheflush.h | 36 +++++++++----------
> >  arch/arm64/include/asm/efi.h        |  2 +-
> >  arch/arm64/include/asm/kvm_mmu.h    |  6 ++--
> >  arch/arm64/kernel/alternative.c     |  2 +-
> >  arch/arm64/kernel/efi-entry.S       |  4 +--
> >  arch/arm64/kernel/head.S            |  8 ++---
> >  arch/arm64/kernel/hibernate.c       | 12 +++----
> >  arch/arm64/kernel/idreg-override.c  |  2 +-
> >  arch/arm64/kernel/image-vars.h      |  2 +-
> >  arch/arm64/kernel/insn.c            |  2 +-
> >  arch/arm64/kernel/kaslr.c           |  6 ++--
> >  arch/arm64/kernel/machine_kexec.c   | 10 +++---
> >  arch/arm64/kernel/smp.c             |  4 +--
> >  arch/arm64/kernel/smp_spin_table.c  |  4 +--
> >  arch/arm64/kernel/sys_compat.c      |  2 +-
> >  arch/arm64/kvm/arm.c                |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +--
> >  arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
> >  arch/arm64/kvm/hyp/pgtable.c        |  4 +--
> >  arch/arm64/lib/uaccess_flushcache.c |  4 +--
> >  arch/arm64/mm/cache.S               | 56 ++++++++++++++---------------
> >  arch/arm64/mm/flush.c               | 12 +++----
> >  24 files changed, 95 insertions(+), 95 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
> > index ed1cc9d8e6df..4b7ac9098e8f 100644
> > --- a/arch/arm64/include/asm/arch_gicv3.h
> > +++ b/arch/arm64/include/asm/arch_gicv3.h
> > @@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
> >  #define gic_write_lpir(v, c)           writeq_relaxed(v, c)
> >
> >  #define gic_flush_dcache_to_poc(a,l)   \
> > -       __flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> > +       __clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
> >
> >  #define gits_read_baser(c)             readq_relaxed(c)
> >  #define gits_write_baser(v, c)         writeq_relaxed(v, c)
> > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > index 4b91d3530013..526eee4522eb 100644
> > --- a/arch/arm64/include/asm/cacheflush.h
> > +++ b/arch/arm64/include/asm/cacheflush.h
> > @@ -34,54 +34,54 @@
> >   *             - start  - virtual start address
> >   *             - end    - virtual end address
> >   *
> > - *     __flush_icache_range(start, end)
> > + *     __clean_inval_cache_pou(start, end)
> >   *
> >   *             Ensure coherency between the I-cache and the D-cache region to
> >   *             the Point of Unification.
> >   *
> > - *     __flush_cache_user_range(start, end)
> > + *     __clean_inval_cache_user_pou(start, end)
> >   *
> >   *             Ensure coherency between the I-cache and the D-cache region to
> >   *             the Point of Unification.
> >   *             Use only if the region might access user memory.
> >   *
> > - *     invalidate_icache_range(start, end)
> > + *     __inval_icache_pou(start, end)
> >   *
> >   *             Invalidate I-cache region to the Point of Unification.
> >   *
> > - *     __flush_dcache_area(start, end)
> > + *     __clean_inval_dcache_poc(start, end)
> >   *
> >   *             Clean and invalidate D-cache region to the Point of Coherence.
> >   *
> > - *     __inval_dcache_area(start, end)
> > + *     __inval_dcache_poc(start, end)
> >   *
> >   *             Invalidate D-cache region to the Point of Coherence.
> >   *
> > - *     __clean_dcache_area_poc(start, end)
> > + *     __clean_dcache_poc(start, end)
> >   *
> >   *             Clean D-cache region to the Point of Coherence.
> >   *
> > - *     __clean_dcache_area_pop(start, end)
> > + *     __clean_dcache_pop(start, end)
> >   *
> >   *             Clean D-cache region to the Point of Persistence.
> >   *
> > - *     __clean_dcache_area_pou(start, end)
> > + *     __clean_dcache_pou(start, end)
> >   *
> >   *             Clean D-cache region to the Point of Unification.
> >   */
> > -extern void __flush_icache_range(unsigned long start, unsigned long end);
> > -extern void invalidate_icache_range(unsigned long start, unsigned long end);
> > -extern void __flush_dcache_area(unsigned long start, unsigned long end);
> > -extern void __inval_dcache_area(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
> > -extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> > +extern void __clean_inval_cache_pou(unsigned long start, unsigned long end);
> > +extern void __inval_icache_pou(unsigned long start, unsigned long end);
> > +extern void __clean_inval_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __inval_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_pop(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_pou(unsigned long start, unsigned long end);
> > +extern long __clean_inval_cache_user_pou(unsigned long start, unsigned long end);
> >  extern void sync_icache_aliases(unsigned long start, unsigned long end);
> >
> >  static inline void flush_icache_range(unsigned long start, unsigned long end)
> >  {
> > -       __flush_icache_range(start, end);
> > +       __clean_inval_cache_pou(start, end);
> >
> >         /*
> >          * IPI all online CPUs so that they undergo a context synchronization
> > @@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
> >  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
> >  extern void flush_dcache_page(struct page *);
> >
> > -static __always_inline void __flush_icache_all(void)
> > +static __always_inline void __clean_inval_all_icache_pou(void)
> >  {
> >         if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
> >                 return;
> > diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
> > index 0ae2397076fd..d1e2a4bf8def 100644
> > --- a/arch/arm64/include/asm/efi.h
> > +++ b/arch/arm64/include/asm/efi.h
> > @@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
> >
> >  static inline void efi_capsule_flush_cache_range(void *addr, int size)
> >  {
> > -       __flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> > +       __clean_inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
> >  }
> >
> >  #endif /* _ASM_EFI_H */
> > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > index 33293d5855af..29d2aa6f3940 100644
> > --- a/arch/arm64/include/asm/kvm_mmu.h
> > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > @@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
> >  struct kvm;
> >
> >  #define kvm_flush_dcache_to_poc(a,l)   \
> > -       __flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> > +       __clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
> >
> >  static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
> >  {
> > @@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
> >  {
> >         if (icache_is_aliasing()) {
> >                 /* any kind of VIPT cache */
> > -               __flush_icache_all();
> > +               __clean_inval_all_icache_pou();
> >         } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> >                 /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
> >                 void *va = page_address(pfn_to_page(pfn));
> >
> > -               invalidate_icache_range((unsigned long)va,
> > +               __inval_icache_pou((unsigned long)va,
> >                                         (unsigned long)va + size);
> >         }
> >  }
> > diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> > index c906d20c7b52..ea2d52fa9a0c 100644
> > --- a/arch/arm64/kernel/alternative.c
> > +++ b/arch/arm64/kernel/alternative.c
> > @@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
> >          */
> >         if (!is_module) {
> >                 dsb(ish);
> > -               __flush_icache_all();
> > +               __clean_inval_all_icache_pou();
> >                 isb();
> >
> >                 /* Ignore ARM64_CB bit from feature mask */
> > diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
> > index 72e6a580290a..230506f460ec 100644
> > --- a/arch/arm64/kernel/efi-entry.S
> > +++ b/arch/arm64/kernel/efi-entry.S
> > @@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
> >          */
> >         ldr     w1, =kernel_size
> >         add     x1, x0, x1
> > -       bl      __clean_dcache_area_poc
> > +       bl      __clean_dcache_poc
> >         ic      ialluis
> >
> >         /*
> > @@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
> >          */
> >         adr     x0, 0f
> >         adr     x1, 3f
> > -       bl      __clean_dcache_area_poc
> > +       bl      __clean_dcache_poc
> >  0:
> >         /* Turn off Dcache and MMU */
> >         mrs     x0, CurrentEL
> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index 8df0ac8d9123..ea0447c5010a 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
> >                                                 // MMU off
> >
> >         add     x1, x0, #0x20                   // 4 x 8 bytes
> > -       b       __inval_dcache_area             // tail call
> > +       b       __inval_dcache_poc              // tail call
> >  SYM_CODE_END(preserve_boot_args)
> >
> >  /*
> > @@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> >          */
> >         adrp    x0, init_pg_dir
> >         adrp    x1, init_pg_end
> > -       bl      __inval_dcache_area
> > +       bl      __inval_dcache_poc
> >
> >         /*
> >          * Clear the init page tables.
> > @@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> >
> >         adrp    x0, idmap_pg_dir
> >         adrp    x1, idmap_pg_end
> > -       bl      __inval_dcache_area
> > +       bl      __inval_dcache_poc
> >
> >         adrp    x0, init_pg_dir
> >         adrp    x1, init_pg_end
> > -       bl      __inval_dcache_area
> > +       bl      __inval_dcache_poc
> >
> >         ret     x28
> >  SYM_FUNC_END(__create_page_tables)
> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > index b40ddce71507..ec871b24fd5b 100644
> > --- a/arch/arm64/kernel/hibernate.c
> > +++ b/arch/arm64/kernel/hibernate.c
> > @@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
> >                 return -ENOMEM;
> >
> >         memcpy(page, src_start, length);
> > -       __flush_icache_range((unsigned long)page, (unsigned long)page + length);
> > +       __clean_inval_cache_pou((unsigned long)page, (unsigned long)page + length);
> >         rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
> >         if (rc)
> >                 return rc;
> > @@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
> >                 ret = swsusp_save();
> >         } else {
> >                 /* Clean kernel core startup/idle code to PoC*/
> > -               __flush_dcache_area((unsigned long)__mmuoff_data_start,
> > +               __clean_inval_dcache_poc((unsigned long)__mmuoff_data_start,
> >                                     (unsigned long)__mmuoff_data_end);
> > -               __flush_dcache_area((unsigned long)__idmap_text_start,
> > +               __clean_inval_dcache_poc((unsigned long)__idmap_text_start,
> >                                     (unsigned long)__idmap_text_end);
> >
> >                 /* Clean kvm setup code to PoC? */
> >                 if (el2_reset_needed()) {
> > -                       __flush_dcache_area(
> > +                       __clean_inval_dcache_poc(
> >                                 (unsigned long)__hyp_idmap_text_start,
> >                                 (unsigned long)__hyp_idmap_text_end);
> > -                       __flush_dcache_area((unsigned long)__hyp_text_start,
> > +                       __clean_inval_dcache_poc((unsigned long)__hyp_text_start,
> >                                             (unsigned long)__hyp_text_end);
> >                 }
> >
> > @@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
> >          * The hibernate exit text contains a set of el2 vectors, that will
> >          * be executed at el2 with the mmu off in order to reload hyp-stub.
> >          */
> > -       __flush_dcache_area((unsigned long)hibernate_exit,
> > +       __clean_inval_dcache_poc((unsigned long)hibernate_exit,
> >                             (unsigned long)hibernate_exit + exit_size);
> >
> >         /*
> > diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
> > index 3dd515baf526..6b4b5727f2db 100644
> > --- a/arch/arm64/kernel/idreg-override.c
> > +++ b/arch/arm64/kernel/idreg-override.c
> > @@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
> >
> >         for (i = 0; i < ARRAY_SIZE(regs); i++) {
> >                 if (regs[i]->override)
> > -                       __flush_dcache_area((unsigned long)regs[i]->override,
> > +                       __clean_inval_dcache_poc((unsigned long)regs[i]->override,
> >                                             (unsigned long)regs[i]->override +
> >                                             sizeof(*regs[i]->override));
> >         }
> > diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> > index bcf3c2755370..14beda6a573d 100644
> > --- a/arch/arm64/kernel/image-vars.h
> > +++ b/arch/arm64/kernel/image-vars.h
> > @@ -35,7 +35,7 @@ __efistub_strnlen             = __pi_strnlen;
> >  __efistub_strcmp               = __pi_strcmp;
> >  __efistub_strncmp              = __pi_strncmp;
> >  __efistub_strrchr              = __pi_strrchr;
> > -__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
> > +__efistub___clean_dcache_poc = __pi___clean_dcache_poc;
> >
> >  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> >  __efistub___memcpy             = __pi_memcpy;
> > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > index 6c0de2f60ea9..11c7be09e305 100644
> > --- a/arch/arm64/kernel/insn.c
> > +++ b/arch/arm64/kernel/insn.c
> > @@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
> >
> >         ret = aarch64_insn_write(tp, insn);
> >         if (ret == 0)
> > -               __flush_icache_range((uintptr_t)tp,
> > +               __clean_inval_cache_pou((uintptr_t)tp,
> >                                      (uintptr_t)tp + AARCH64_INSN_SIZE);
> >
> >         return ret;
> > diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> > index 49cccd03cb37..038a4cc7de93 100644
> > --- a/arch/arm64/kernel/kaslr.c
> > +++ b/arch/arm64/kernel/kaslr.c
> > @@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
> >          * we end up running with module randomization disabled.
> >          */
> >         module_alloc_base = (u64)_etext - MODULES_VSIZE;
> > -       __flush_dcache_area((unsigned long)&module_alloc_base,
> > +       __clean_inval_dcache_poc((unsigned long)&module_alloc_base,
> >                             (unsigned long)&module_alloc_base +
> >                                     sizeof(module_alloc_base));
> >
> > @@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
> >         module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
> >         module_alloc_base &= PAGE_MASK;
> >
> > -       __flush_dcache_area((unsigned long)&module_alloc_base,
> > +       __clean_inval_dcache_poc((unsigned long)&module_alloc_base,
> >                             (unsigned long)&module_alloc_base +
> >                                     sizeof(module_alloc_base));
> > -       __flush_dcache_area((unsigned long)&memstart_offset_seed,
> > +       __clean_inval_dcache_poc((unsigned long)&memstart_offset_seed,
> >                             (unsigned long)&memstart_offset_seed +
> >                                     sizeof(memstart_offset_seed));
> >
> > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > index 4cada9000acf..0e20a789b03e 100644
> > --- a/arch/arm64/kernel/machine_kexec.c
> > +++ b/arch/arm64/kernel/machine_kexec.c
> > @@ -69,10 +69,10 @@ int machine_kexec_post_load(struct kimage *kimage)
> >         kexec_image_info(kimage);
> >
> >         /* Flush the reloc_code in preparation for its execution. */
> > -       __flush_dcache_area((unsigned long)reloc_code,
> > +       __clean_inval_dcache_poc((unsigned long)reloc_code,
> >                             (unsigned long)reloc_code +
> >                                     arm64_relocate_new_kernel_size);
> > -       invalidate_icache_range((uintptr_t)reloc_code,
> > +       __inval_icache_pou((uintptr_t)reloc_code,
> >                                 (uintptr_t)reloc_code +
> >                                         arm64_relocate_new_kernel_size);
> >
> > @@ -108,7 +108,7 @@ static void kexec_list_flush(struct kimage *kimage)
> >                 unsigned long addr;
> >
> >                 /* flush the list entries. */
> > -               __flush_dcache_area((unsigned long)entry,
> > +               __clean_inval_dcache_poc((unsigned long)entry,
> >                                     (unsigned long)entry +
> >                                             sizeof(kimage_entry_t));
> >
> > @@ -125,7 +125,7 @@ static void kexec_list_flush(struct kimage *kimage)
> >                         break;
> >                 case IND_SOURCE:
> >                         /* flush the source pages. */
> > -                       __flush_dcache_area(addr, addr + PAGE_SIZE);
> > +                       __clean_inval_dcache_poc(addr, addr + PAGE_SIZE);
> >                         break;
> >                 case IND_DESTINATION:
> >                         break;
> > @@ -152,7 +152,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
> >                         kimage->segment[i].memsz,
> >                         kimage->segment[i].memsz /  PAGE_SIZE);
> >
> > -               __flush_dcache_area(
> > +               __clean_inval_dcache_poc(
> >                         (unsigned long)phys_to_virt(kimage->segment[i].mem),
> >                         (unsigned long)phys_to_virt(kimage->segment[i].mem) +
> >                                 kimage->segment[i].memsz);
> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > index 5fcdee331087..2044210ed15a 100644
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> >         secondary_data.task = idle;
> >         secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
> >         update_cpu_boot_status(CPU_MMU_OFF);
> > -       __flush_dcache_area((unsigned long)&secondary_data,
> > +       __clean_inval_dcache_poc((unsigned long)&secondary_data,
> >                             (unsigned long)&secondary_data +
> >                                     sizeof(secondary_data));
> >
> > @@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> >         pr_crit("CPU%u: failed to come online\n", cpu);
> >         secondary_data.task = NULL;
> >         secondary_data.stack = NULL;
> > -       __flush_dcache_area((unsigned long)&secondary_data,
> > +       __clean_inval_dcache_poc((unsigned long)&secondary_data,
> >                             (unsigned long)&secondary_data +
> >                                     sizeof(secondary_data));
> >         status = READ_ONCE(secondary_data.status);
> > diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> > index 58d804582a35..a946ccb9791e 100644
> > --- a/arch/arm64/kernel/smp_spin_table.c
> > +++ b/arch/arm64/kernel/smp_spin_table.c
> > @@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
> >         unsigned long size = sizeof(secondary_holding_pen_release);
> >
> >         secondary_holding_pen_release = val;
> > -       __flush_dcache_area((unsigned long)start, (unsigned long)start + size);
> > +       __clean_inval_dcache_poc((unsigned long)start, (unsigned long)start + size);
> >  }
> >
> >
> > @@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
> >          * the boot protocol.
> >          */
> >         writeq_relaxed(pa_holding_pen, release_addr);
> > -       __flush_dcache_area((__force unsigned long)release_addr,
> > +       __clean_inval_dcache_poc((__force unsigned long)release_addr,
> >                             (__force unsigned long)release_addr +
> >                                     sizeof(*release_addr));
> >
> > diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
> > index 265fe3eb1069..fdd415f8d841 100644
> > --- a/arch/arm64/kernel/sys_compat.c
> > +++ b/arch/arm64/kernel/sys_compat.c
> > @@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
> >                         dsb(ish);
> >                 }
> >
> > -               ret = __flush_cache_user_range(start, start + chunk);
> > +               ret = __clean_inval_cache_user_pou(start, start + chunk);
> >                 if (ret)
> >                         return ret;
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 1cb39c0803a4..edeca89405ff 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> >                 if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
> >                         stage2_unmap_vm(vcpu->kvm);
> >                 else
> > -                       __flush_icache_all();
> > +                       __clean_inval_all_icache_pou();
> >         }
> >
> >         vcpu_reset_hcr(vcpu);
> > diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
> > index 36cef6915428..a906dd596e66 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/cache.S
> > +++ b/arch/arm64/kvm/hyp/nvhe/cache.S
> > @@ -7,7 +7,7 @@
> >  #include <asm/assembler.h>
> >  #include <asm/alternative.h>
> >
> > -SYM_FUNC_START_PI(__flush_dcache_area)
> > +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
> >         dcache_by_line_op civac, sy, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END_PI(__flush_dcache_area)
> > +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> > index 5dffe928f256..a16719f5068d 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> > @@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
> >         for (i = 0; i < hyp_nr_cpus; i++) {
> >                 params = per_cpu_ptr(&kvm_init_params, i);
> >                 params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
> > -               __flush_dcache_area((unsigned long)params,
> > +               __clean_inval_dcache_poc((unsigned long)params,
> >                                     (unsigned long)params + sizeof(*params));
> >         }
> >  }
> > diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> > index 83dc3b271bc5..184c9c7c13bd 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> > @@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
> >          * you should be running with VHE enabled.
> >          */
> >         if (icache_is_vpipt())
> > -               __flush_icache_all();
> > +               __clean_inval_all_icache_pou();
> >
> >         __tlb_switch_to_host(&cxt);
> >  }
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index 10d2f04013d4..fb2613f458de 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> >         if (need_flush) {
> >                 kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
> >
> > -               __flush_dcache_area((unsigned long)pte_follow,
> > +               __clean_inval_dcache_poc((unsigned long)pte_follow,
> >                                     (unsigned long)pte_follow +
> >                                             kvm_granule_size(level));
> >         }
> > @@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> >                 return 0;
> >
> >         pte_follow = kvm_pte_follow(pte, mm_ops);
> > -       __flush_dcache_area((unsigned long)pte_follow,
> > +       __clean_inval_dcache_poc((unsigned long)pte_follow,
> >                             (unsigned long)pte_follow +
> >                                     kvm_granule_size(level));
> >         return 0;
> > diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
> > index 62ea989effe8..b1a6d9823864 100644
> > --- a/arch/arm64/lib/uaccess_flushcache.c
> > +++ b/arch/arm64/lib/uaccess_flushcache.c
> > @@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
> >          * barrier to order the cache maintenance against the memcpy.
> >          */
> >         memcpy(dst, src, cnt);
> > -       __clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
> > +       __clean_dcache_pop((unsigned long)dst, (unsigned long)dst + cnt);
> >  }
> >  EXPORT_SYMBOL_GPL(memcpy_flushcache);
> >
> > @@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
> >         rc = raw_copy_from_user(to, from, n);
> >
> >         /* See above */
> > -       __clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
> > +       __clean_dcache_pop((unsigned long)to, (unsigned long)to + n - rc);
> >         return rc;
> >  }
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index d8434e57fab3..2df7212de799 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -15,7 +15,7 @@
> >  #include <asm/asm-uaccess.h>
> >
> >  /*
> > - *     __flush_cache_range(start,end) [needs_uaccess]
> > + *     __clean_inval_cache_pou_macro(start,end) [needs_uaccess]
> >   *
> >   *     Ensure that the I and D caches are coherent within specified region.
> >   *     This is typically used when code has been written to a memory region,
> > @@ -25,7 +25,7 @@
> >   *     - end           - virtual end address of region
> >   *     - needs_uaccess - (macro parameter) might access user space memory
> >   */
> > -.macro __flush_cache_range, needs_uaccess
> > +.macro __clean_inval_cache_pou_macro, needs_uaccess
> >         .if     \needs_uaccess
> >         uaccess_ttbr0_enable x2, x3, x4
> >         .endif
> > @@ -77,12 +77,12 @@ alternative_else_nop_endif
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START(__flush_icache_range)
> > -       __flush_cache_range needs_uaccess=0
> > -SYM_FUNC_END(__flush_icache_range)
> > +SYM_FUNC_START(__clean_inval_cache_pou)
> > +       __clean_inval_cache_pou_macro needs_uaccess=0
> > +SYM_FUNC_END(__clean_inval_cache_pou)
> >
> >  /*
> > - *     __flush_cache_user_range(start,end)
> > + *     __clean_inval_cache_user_pou(start,end)
> >   *
> >   *     Ensure that the I and D caches are coherent within specified region.
> >   *     This is typically used when code has been written to a memory region,
> > @@ -91,19 +91,19 @@ SYM_FUNC_END(__flush_icache_range)
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START(__flush_cache_user_range)
> > -       __flush_cache_range needs_uaccess=1
> > -SYM_FUNC_END(__flush_cache_user_range)
> > +SYM_FUNC_START(__clean_inval_cache_user_pou)
> > +       __clean_inval_cache_pou_macro needs_uaccess=1
> > +SYM_FUNC_END(__clean_inval_cache_user_pou)
> >
> >  /*
> > - *     invalidate_icache_range(start,end)
> > + *     __inval_icache_pou(start,end)
> >   *
> >   *     Ensure that the I cache is invalid within specified region.
> >   *
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START(invalidate_icache_range)
> > +SYM_FUNC_START(__inval_icache_pou)
> >  alternative_if ARM64_HAS_CACHE_DIC
> >         isb
> >         ret
> > @@ -111,10 +111,10 @@ alternative_else_nop_endif
> >
> >         invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
> >         ret
> > -SYM_FUNC_END(invalidate_icache_range)
> > +SYM_FUNC_END(__inval_icache_pou)
> >
> >  /*
> > - *     __flush_dcache_area(start, end)
> > + *     __clean_inval_dcache_poc(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are cleaned and invalidated to the PoC.
> > @@ -122,13 +122,13 @@ SYM_FUNC_END(invalidate_icache_range)
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START_PI(__flush_dcache_area)
> > +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
> >         dcache_by_line_op civac, sy, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END_PI(__flush_dcache_area)
> > +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> >
> >  /*
> > - *     __clean_dcache_area_pou(start, end)
> > + *     __clean_dcache_pou(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are cleaned to the PoU.
> > @@ -136,17 +136,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START(__clean_dcache_area_pou)
> > +SYM_FUNC_START(__clean_dcache_pou)
> >  alternative_if ARM64_HAS_CACHE_IDC
> >         dsb     ishst
> >         ret
> >  alternative_else_nop_endif
> >         dcache_by_line_op cvau, ish, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END(__clean_dcache_area_pou)
> > +SYM_FUNC_END(__clean_dcache_pou)
> >
> >  /*
> > - *     __inval_dcache_area(start, end)
> > + *     __inval_dcache_poc(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are invalidated. Any partial lines at the ends of the interval are
> > @@ -156,7 +156,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
> >   *     - end     - kernel end address of region
> >   */
> >  SYM_FUNC_START_LOCAL(__dma_inv_area)
> > -SYM_FUNC_START_PI(__inval_dcache_area)
> > +SYM_FUNC_START_PI(__inval_dcache_poc)
> >         /* FALLTHROUGH */
> >
> >  /*
> > @@ -181,11 +181,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
> >         b.lo    2b
> >         dsb     sy
> >         ret
> > -SYM_FUNC_END_PI(__inval_dcache_area)
> > +SYM_FUNC_END_PI(__inval_dcache_poc)
> >  SYM_FUNC_END(__dma_inv_area)
> >
> >  /*
> > - *     __clean_dcache_area_poc(start, end)
> > + *     __clean_dcache_poc(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are cleaned to the PoC.
> > @@ -194,7 +194,7 @@ SYM_FUNC_END(__dma_inv_area)
> >   *     - end     - virtual end address of region
> >   */
> >  SYM_FUNC_START_LOCAL(__dma_clean_area)
> > -SYM_FUNC_START_PI(__clean_dcache_area_poc)
> > +SYM_FUNC_START_PI(__clean_dcache_poc)
> >         /* FALLTHROUGH */
> >
> >  /*
> > @@ -204,11 +204,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
> >   */
> >         dcache_by_line_op cvac, sy, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END_PI(__clean_dcache_area_poc)
> > +SYM_FUNC_END_PI(__clean_dcache_poc)
> >  SYM_FUNC_END(__dma_clean_area)
> >
> >  /*
> > - *     __clean_dcache_area_pop(start, end)
> > + *     __clean_dcache_pop(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are cleaned to the PoP.
> > @@ -216,13 +216,13 @@ SYM_FUNC_END(__dma_clean_area)
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START_PI(__clean_dcache_area_pop)
> > +SYM_FUNC_START_PI(__clean_dcache_pop)
> >         alternative_if_not ARM64_HAS_DCPOP
> > -       b       __clean_dcache_area_poc
> > +       b       __clean_dcache_poc
> >         alternative_else_nop_endif
> >         dcache_by_line_op cvap, sy, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END_PI(__clean_dcache_area_pop)
> > +SYM_FUNC_END_PI(__clean_dcache_pop)
> >
> >  /*
> >   *     __dma_flush_area(start, size)
> > diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> > index 143f625e7727..005b92148252 100644
> > --- a/arch/arm64/mm/flush.c
> > +++ b/arch/arm64/mm/flush.c
> > @@ -17,14 +17,14 @@
> >  void sync_icache_aliases(unsigned long start, unsigned long end)
> >  {
> >         if (icache_is_aliasing()) {
> > -               __clean_dcache_area_pou(start, end);
> > -               __flush_icache_all();
> > +               __clean_dcache_pou(start, end);
> > +               __clean_inval_all_icache_pou();
> >         } else {
> >                 /*
> >                  * Don't issue kick_all_cpus_sync() after I-cache invalidation
> >                  * for user mappings.
> >                  */
> > -               __flush_icache_range(start, end);
> > +               __clean_inval_cache_pou(start, end);
> >         }
> >  }
> >
> > @@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
> >  /*
> >   * Additional functions defined in assembly.
> >   */
> > -EXPORT_SYMBOL(__flush_icache_range);
> > +EXPORT_SYMBOL(__clean_inval_cache_pou);
> >
> >  #ifdef CONFIG_ARCH_HAS_PMEM_API
> >  void arch_wb_cache_pmem(void *addr, size_t size)
> >  {
> >         /* Ensure order against any prior non-cacheable writes */
> >         dmb(osh);
> > -       __clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
> > +       __clean_dcache_pop((unsigned long)addr, (unsigned long)addr + size);
> >  }
> >  EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
> >
> >  void arch_invalidate_pmem(void *addr, size_t size)
> >  {
> > -       __inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> > +       __inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
> >  }
> >  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
> >  #endif
> > --
> > 2.31.1.607.g51e8a6a459-goog
> >


* Re: [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range
  2021-05-11 14:42 ` [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
  2021-05-11 15:22   ` Mark Rutland
@ 2021-05-11 16:53   ` Robin Murphy
  2021-05-12  8:57     ` Fuad Tabba
  1 sibling, 1 reply; 32+ messages in thread
From: Robin Murphy @ 2021-05-11 16:53 UTC (permalink / raw)
  To: Fuad Tabba, linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose

On 2021-05-11 15:42, Fuad Tabba wrote:
> __flush_icache_range works on the kernel linear map, and doesn't
> need uaccess. The existing code is a side-effect of its current
> implementation with __flush_cache_user_range fallthrough.
> 
> Instead of fallthrough to share the code, use a common macro for
> the two where the caller can specify whether user-space access is
> needed.
> 
> No functional change intended.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>   arch/arm64/include/asm/assembler.h | 13 ++++--
>   arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
>   2 files changed, 54 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 8418c1bd8f04..6ff7a3a3b238 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -426,16 +426,21 @@ alternative_endif
>    * Macro to perform an instruction cache maintenance for the interval
>    * [start, end)
>    *
> - * 	start, end:	virtual addresses describing the region
> - *	label:		A label to branch to on user fault.
> - * 	Corrupts:	tmp1, tmp2
> + *	start, end:	virtual addresses describing the region
> + *	needs_uaccess:	might access user space memory
> + *	label:		label to branch to on user fault (if needs_uaccess)
> + *	Corrupts:	tmp1, tmp2
>    */
> -	.macro invalidate_icache_by_line start, end, tmp1, tmp2, label
> +	.macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
>   	icache_line_size \tmp1, \tmp2
>   	sub	\tmp2, \tmp1, #1
>   	bic	\tmp2, \start, \tmp2
>   9997:
> +	.if	\needs_uaccess
>   USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
> +	.else
> +	ic	ivau, \tmp2
> +	.endif
>   	add	\tmp2, \tmp2, \tmp1
>   	cmp	\tmp2, \end
>   	b.lo	9997b
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 2d881f34dd9d..092f73acdf9a 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -15,30 +15,20 @@
>   #include <asm/asm-uaccess.h>
>   
>   /*
> - *	flush_icache_range(start,end)
> + *	__flush_cache_range(start,end) [needs_uaccess]
>    *
>    *	Ensure that the I and D caches are coherent within specified region.
>    *	This is typically used when code has been written to a memory region,
>    *	and will be executed.
>    *
> - *	- start   - virtual start address of region
> - *	- end     - virtual end address of region
> + *	- start   	- virtual start address of region
> + *	- end     	- virtual end address of region
> + *	- needs_uaccess - (macro parameter) might access user space memory
>    */
> -SYM_FUNC_START(__flush_icache_range)
> -	/* FALLTHROUGH */
> -
> -/*
> - *	__flush_cache_user_range(start,end)
> - *
> - *	Ensure that the I and D caches are coherent within specified region.
> - *	This is typically used when code has been written to a memory region,
> - *	and will be executed.
> - *
> - *	- start   - virtual start address of region
> - *	- end     - virtual end address of region
> - */
> -SYM_FUNC_START(__flush_cache_user_range)
> +.macro	__flush_cache_range, needs_uaccess
> +	.if 	\needs_uaccess
>   	uaccess_ttbr0_enable x2, x3, x4
> +	.endif

Nit: this feels like it belongs directly in __flush_cache_user_range() 
rather than being hidden in the macro, since it's not really an integral 
part of the cache maintenance operation itself.

Robin.

>   alternative_if ARM64_HAS_CACHE_IDC
>   	dsb	ishst
>   	b	7f
> @@ -47,7 +37,11 @@ alternative_else_nop_endif
>   	sub	x3, x2, #1
>   	bic	x4, x0, x3
>   1:
> +	.if 	\needs_uaccess
>   user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> +	.else
> +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> +	.endif
>   	add	x4, x4, x2
>   	cmp	x4, x1
>   	b.lo	1b
> @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
>   	isb
>   	b	8f
>   alternative_else_nop_endif
> -	invalidate_icache_by_line x0, x1, x2, x3, 9f
> +	invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
>   8:	mov	x0, #0
>   1:
> +	.if	\needs_uaccess
>   	uaccess_ttbr0_disable x1, x2
> +	.endif
>   	ret
> +
> +	.if 	\needs_uaccess
>   9:
>   	mov	x0, #-EFAULT
>   	b	1b
> +	.endif
> +.endm
> +
> +/*
> + *	flush_icache_range(start,end)
> + *
> + *	Ensure that the I and D caches are coherent within specified region.
> + *	This is typically used when code has been written to a memory region,
> + *	and will be executed.
> + *
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
> + */
> +SYM_FUNC_START(__flush_icache_range)
> +	__flush_cache_range needs_uaccess=0
>   SYM_FUNC_END(__flush_icache_range)
> +
> +/*
> + *	__flush_cache_user_range(start,end)
> + *
> + *	Ensure that the I and D caches are coherent within specified region.
> + *	This is typically used when code has been written to a memory region,
> + *	and will be executed.
> + *
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
> + */
> +SYM_FUNC_START(__flush_cache_user_range)
> +	__flush_cache_range needs_uaccess=1
>   SYM_FUNC_END(__flush_cache_user_range)
>   
>   /*
> @@ -86,7 +112,7 @@ alternative_else_nop_endif
>   
>   	uaccess_ttbr0_enable x2, x3, x4
>   
> -	invalidate_icache_by_line x0, x1, x2, x3, 2f
> +	invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
>   	mov	x0, xzr
>   1:
>   	uaccess_ttbr0_disable x1, x2
> 


* Re: [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range
  2021-05-11 15:22   ` Mark Rutland
@ 2021-05-12  8:52     ` Fuad Tabba
  2021-05-12  9:59       ` Mark Rutland
  0 siblings, 1 reply; 32+ messages in thread
From: Fuad Tabba @ 2021-05-12  8:52 UTC (permalink / raw)
  To: Mark Rutland
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, ardb, James Morse,
	Alexandru Elisei, Suzuki K Poulose

Hi Mark,

> > No functional change intended.
>
> There is a performance change here, since the existing
> `__flush_cache_user_range` takes IDC and DIC into account, whereas
> `invalidate_icache_by_line` does not.

You're right. There is a performance change in this patch and a couple
of the others, which I will note in v2. However, I don't think that
this patch changes the behavior when it comes to IDC and DIC, does it?

> There's also an existing oversight where `__flush_cache_user_range`
> takes ARM64_WORKAROUND_CLEAN_CACHE into account, but
> `invalidate_icache_by_line` does not. I think that's a bug that we
> should fix first, so that we can backport something to stable.

I'd be happy to address that in v2, but let me make sure I understand
the issue properly.

Errata 819472 and friends (ARM64_WORKAROUND_CLEAN_CACHE) are related
to cache maintenance operations on data caches happening concurrently
with other accesses to the same address. The two places
invalidate_icache_by_line is used in conjunction with data caches are
__flush_icache_range and __flush_cache_user_range (which share the
same code before and after my patch series). In both cases,
invalidate_icache_by_line is called after the workaround is applied.
The third and only other user of invalidate_icache_by_line is
invalidate_icache_range, which only performs cache maintenance on the
icache.

The concern is that invalidate_icache_range might be performing a
cache maintenance operation on an address concurrently with another
processor performing a dc operation on the same address. Therefore,
invalidate_icache_range should perform DC CIVAC on the line before
invalidate_icache_by_line if ARM64_WORKAROUND_CLEAN_CACHE applies. Is
that right?

https://documentation-service.arm.com/static/5fa29fddb209f547eebd361d
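
If so, I imagine the fix would look something like this inside the
invalidation loop (rough sketch only, assuming I've read the erratum
right; x2 stands for the line-aligned address):

	/* clean+invalidate the D-cache line first on affected parts */
alternative_insn "nop", "dc civac, x2", ARM64_WORKAROUND_CLEAN_CACHE
	ic	ivau, x2	// then invalidate the I-cache line to the PoU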

> Arguably
> similar is true in `swsusp_arch_suspend_exit`, but for that we could add
> a comment and always use `DC CIVAC`.

I can do that in v2 as well.
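
I take it you mean something like this for swsusp_arch_suspend_exit
(sketch only, register name illustrative):

	/*
	 * On CPUs with ARM64_WORKAROUND_CLEAN_CACHE, dc cvau may not
	 * behave as a clean, so always clean+invalidate to the PoC.
	 */
	dc	civac, x4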

Thanks,
/fuad

> Thanks,
> Mark.
>
> > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: Will Deacon <will@kernel.org>
> > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/assembler.h | 13 ++++--
> >  arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
> >  2 files changed, 54 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > index 8418c1bd8f04..6ff7a3a3b238 100644
> > --- a/arch/arm64/include/asm/assembler.h
> > +++ b/arch/arm64/include/asm/assembler.h
> > @@ -426,16 +426,21 @@ alternative_endif
> >   * Macro to perform an instruction cache maintenance for the interval
> >   * [start, end)
> >   *
> > - *   start, end:     virtual addresses describing the region
> > - *   label:          A label to branch to on user fault.
> > - *   Corrupts:       tmp1, tmp2
> > + *   start, end:     virtual addresses describing the region
> > + *   needs_uaccess:  might access user space memory
> > + *   label:          label to branch to on user fault (if needs_uaccess)
> > + *   Corrupts:       tmp1, tmp2
> >   */
> > -     .macro invalidate_icache_by_line start, end, tmp1, tmp2, label
> > +     .macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
> >       icache_line_size \tmp1, \tmp2
> >       sub     \tmp2, \tmp1, #1
> >       bic     \tmp2, \start, \tmp2
> >  9997:
> > +     .if     \needs_uaccess
> >  USER(\label, ic      ivau, \tmp2)                    // invalidate I line PoU
> > +     .else
> > +     ic      ivau, \tmp2
> > +     .endif
> >       add     \tmp2, \tmp2, \tmp1
> >       cmp     \tmp2, \end
> >       b.lo    9997b
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 2d881f34dd9d..092f73acdf9a 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -15,30 +15,20 @@
> >  #include <asm/asm-uaccess.h>
> >
> >  /*
> > - *   flush_icache_range(start,end)
> > + *   __flush_cache_range(start,end) [needs_uaccess]
> >   *
> >   *   Ensure that the I and D caches are coherent within specified region.
> >   *   This is typically used when code has been written to a memory region,
> >   *   and will be executed.
> >   *
> > - *   - start   - virtual start address of region
> > - *   - end     - virtual end address of region
> > + *   - start         - virtual start address of region
> > + *   - end           - virtual end address of region
> > + *   - needs_uaccess - (macro parameter) might access user space memory
> >   */
> > -SYM_FUNC_START(__flush_icache_range)
> > -     /* FALLTHROUGH */
> > -
> > -/*
> > - *   __flush_cache_user_range(start,end)
> > - *
> > - *   Ensure that the I and D caches are coherent within specified region.
> > - *   This is typically used when code has been written to a memory region,
> > - *   and will be executed.
> > - *
> > - *   - start   - virtual start address of region
> > - *   - end     - virtual end address of region
> > - */
> > -SYM_FUNC_START(__flush_cache_user_range)
> > +.macro       __flush_cache_range, needs_uaccess
> > +     .if     \needs_uaccess
> >       uaccess_ttbr0_enable x2, x3, x4
> > +     .endif
> >  alternative_if ARM64_HAS_CACHE_IDC
> >       dsb     ishst
> >       b       7f
> > @@ -47,7 +37,11 @@ alternative_else_nop_endif
> >       sub     x3, x2, #1
> >       bic     x4, x0, x3
> >  1:
> > +     .if     \needs_uaccess
> >  user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > +     .else
> > +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > +     .endif
> >       add     x4, x4, x2
> >       cmp     x4, x1
> >       b.lo    1b
> > @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
> >       isb
> >       b       8f
> >  alternative_else_nop_endif
> > -     invalidate_icache_by_line x0, x1, x2, x3, 9f
> > +     invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
> >  8:   mov     x0, #0
> >  1:
> > +     .if     \needs_uaccess
> >       uaccess_ttbr0_disable x1, x2
> > +     .endif
> >       ret
> > +
> > +     .if     \needs_uaccess
> >  9:
> >       mov     x0, #-EFAULT
> >       b       1b
> > +     .endif
> > +.endm
> > +
> > +/*
> > + *   flush_icache_range(start,end)
> > + *
> > + *   Ensure that the I and D caches are coherent within specified region.
> > + *   This is typically used when code has been written to a memory region,
> > + *   and will be executed.
> > + *
> > + *   - start   - virtual start address of region
> > + *   - end     - virtual end address of region
> > + */
> > +SYM_FUNC_START(__flush_icache_range)
> > +     __flush_cache_range needs_uaccess=0
> >  SYM_FUNC_END(__flush_icache_range)
> > +
> > +/*
> > + *   __flush_cache_user_range(start,end)
> > + *
> > + *   Ensure that the I and D caches are coherent within specified region.
> > + *   This is typically used when code has been written to a memory region,
> > + *   and will be executed.
> > + *
> > + *   - start   - virtual start address of region
> > + *   - end     - virtual end address of region
> > + */
> > +SYM_FUNC_START(__flush_cache_user_range)
> > +     __flush_cache_range needs_uaccess=1
> >  SYM_FUNC_END(__flush_cache_user_range)
> >
> >  /*
> > @@ -86,7 +112,7 @@ alternative_else_nop_endif
> >
> >       uaccess_ttbr0_enable x2, x3, x4
> >
> > -     invalidate_icache_by_line x0, x1, x2, x3, 2f
> > +     invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
> >       mov     x0, xzr
> >  1:
> >       uaccess_ttbr0_disable x1, x2
> > --
> > 2.31.1.607.g51e8a6a459-goog
> >


* Re: [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range
  2021-05-11 16:53   ` Robin Murphy
@ 2021-05-12  8:57     ` Fuad Tabba
  0 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-12  8:57 UTC (permalink / raw)
  To: Robin Murphy
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Mark Rutland, Marc Zyngier, ardb,
	James Morse, Alexandru Elisei, Suzuki K Poulose

Hi Robin,

> > -SYM_FUNC_START(__flush_cache_user_range)
> > +.macro       __flush_cache_range, needs_uaccess
> > +     .if     \needs_uaccess
> >       uaccess_ttbr0_enable x2, x3, x4
> > +     .endif
>
> Nit: this feels like it belongs directly in __flush_cache_user_range()
> rather than being hidden in the macro, since it's not really an integral
> part of the cache maintenance operation itself.

I will fix this in v2.
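
Probably along these lines (rough sketch; this assumes the macro keeps
needs_uaccess only for the exception-table fixups, with the disable and
the fault label moving out of the macro in the same way):

SYM_FUNC_START(__flush_cache_user_range)
	uaccess_ttbr0_enable x2, x3, x4
	__flush_cache_range needs_uaccess=1
SYM_FUNC_END(__flush_cache_user_range)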

Thanks,
/fuad

> Robin.
>
> >   alternative_if ARM64_HAS_CACHE_IDC
> >       dsb     ishst
> >       b       7f
> > @@ -47,7 +37,11 @@ alternative_else_nop_endif
> >       sub     x3, x2, #1
> >       bic     x4, x0, x3
> >   1:
> > +     .if     \needs_uaccess
> >   user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > +     .else
> > +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > +     .endif
> >       add     x4, x4, x2
> >       cmp     x4, x1
> >       b.lo    1b
> > @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
> >       isb
> >       b       8f
> >   alternative_else_nop_endif
> > -     invalidate_icache_by_line x0, x1, x2, x3, 9f
> > +     invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
> >   8:  mov     x0, #0
> >   1:
> > +     .if     \needs_uaccess
> >       uaccess_ttbr0_disable x1, x2
> > +     .endif
> >       ret
> > +
> > +     .if     \needs_uaccess
> >   9:
> >       mov     x0, #-EFAULT
> >       b       1b
> > +     .endif
> > +.endm
> > +
> > +/*
> > + *   flush_icache_range(start,end)
> > + *
> > + *   Ensure that the I and D caches are coherent within specified region.
> > + *   This is typically used when code has been written to a memory region,
> > + *   and will be executed.
> > + *
> > + *   - start   - virtual start address of region
> > + *   - end     - virtual end address of region
> > + */
> > +SYM_FUNC_START(__flush_icache_range)
> > +     __flush_cache_range needs_uaccess=0
> >   SYM_FUNC_END(__flush_icache_range)
> > +
> > +/*
> > + *   __flush_cache_user_range(start,end)
> > + *
> > + *   Ensure that the I and D caches are coherent within specified region.
> > + *   This is typically used when code has been written to a memory region,
> > + *   and will be executed.
> > + *
> > + *   - start   - virtual start address of region
> > + *   - end     - virtual end address of region
> > + */
> > +SYM_FUNC_START(__flush_cache_user_range)
> > +     __flush_cache_range needs_uaccess=1
> >   SYM_FUNC_END(__flush_cache_user_range)
> >
> >   /*
> > @@ -86,7 +112,7 @@ alternative_else_nop_endif
> >
> >       uaccess_ttbr0_enable x2, x3, x4
> >
> > -     invalidate_icache_by_line x0, x1, x2, x3, 2f
> > +     invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
> >       mov     x0, xzr
> >   1:
> >       uaccess_ttbr0_disable x1, x2
> >


* Re: [PATCH v1 02/13] arm64: Do not enable uaccess for invalidate_icache_range
  2021-05-11 15:34   ` Mark Rutland
@ 2021-05-12  9:35     ` Fuad Tabba
  0 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-12  9:35 UTC (permalink / raw)
  To: Mark Rutland
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, ardb, James Morse,
	Alexandru Elisei, Suzuki K Poulose

Hi Mark,

On Tue, May 11, 2021 at 4:34 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Tue, May 11, 2021 at 03:42:41PM +0100, Fuad Tabba wrote:
> > invalidate_icache_range() works on the kernel linear map, and
> > doesn't need uaccess. Remove the code that toggles
> > uaccess_ttbr0_enable, as well as the code that emits an entry
> > into the exception table (via the macro
> > invalidate_icache_by_line).
>
> Probably also worth mentioning the return type change, but regardless:

Will do in v2.

> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> I do worry this means we've been silently ignoring cases where this
> faults, and so there's the risk that this has been masking bugs
> elsewhere. It'd be good to throw Syzkaller and the like at this ASAP

Good point. I'll look into that.

Thanks,
/fuad



> Thanks,
> Mark.
>
> > No functional change intended.
> >
> > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: Will Deacon <will@kernel.org>
> > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/cacheflush.h |  2 +-
> >  arch/arm64/mm/cache.S               | 11 +----------
> >  2 files changed, 2 insertions(+), 11 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > index 52e5c1623224..a586afa84172 100644
> > --- a/arch/arm64/include/asm/cacheflush.h
> > +++ b/arch/arm64/include/asm/cacheflush.h
> > @@ -57,7 +57,7 @@
> >   *           - size   - region size
> >   */
> >  extern void __flush_icache_range(unsigned long start, unsigned long end);
> > -extern int  invalidate_icache_range(unsigned long start, unsigned long end);
> > +extern void invalidate_icache_range(unsigned long start, unsigned long end);
> >  extern void __flush_dcache_area(void *addr, size_t len);
> >  extern void __inval_dcache_area(void *addr, size_t len);
> >  extern void __clean_dcache_area_poc(void *addr, size_t len);
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 092f73acdf9a..6babaaf34f17 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -105,21 +105,12 @@ SYM_FUNC_END(__flush_cache_user_range)
> >   */
> >  SYM_FUNC_START(invalidate_icache_range)
> >  alternative_if ARM64_HAS_CACHE_DIC
> > -     mov     x0, xzr
> >       isb
> >       ret
> >  alternative_else_nop_endif
> >
> > -     uaccess_ttbr0_enable x2, x3, x4
> > -
> > -     invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
> > -     mov     x0, xzr
> > -1:
> > -     uaccess_ttbr0_disable x1, x2
> > +     invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
> >       ret
> > -2:
> > -     mov     x0, #-EFAULT
> > -     b       1b
> >  SYM_FUNC_END(invalidate_icache_range)
> >
> >  /*
> > --
> > 2.31.1.607.g51e8a6a459-goog
> >


* Re: [PATCH v1 03/13] arm64: Downgrade flush_icache_range to invalidate
  2021-05-11 14:53   ` Ard Biesheuvel
@ 2021-05-12  9:45     ` Fuad Tabba
  0 siblings, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-12  9:45 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux ARM, Will Deacon, Catalin Marinas, Mark Rutland,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose

Hi Ard,

> > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > index 90a335c74442..001ffbfc645b 100644
> > --- a/arch/arm64/kernel/machine_kexec.c
> > +++ b/arch/arm64/kernel/machine_kexec.c
> > @@ -70,8 +70,9 @@ int machine_kexec_post_load(struct kimage *kimage)
> >
> >         /* Flush the reloc_code in preparation for its execution. */
> >         __flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
> > -       flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
> > -                          arm64_relocate_new_kernel_size);
> > +       invalidate_icache_range((uintptr_t)reloc_code,
> > +                               (uintptr_t)reloc_code +
> > +                                       arm64_relocate_new_kernel_size);
> >
>
> So this is a clean to the PoC followed by an I-cache invalidate to the
> PoU, right? Perhaps we could improve the comment while at it (avoid
> 'flush', and mention that the code needs to be cleaned to the PoC and
> invalidated from the I-cache for execution with the MMU off and
> I-cache on)

Yes it is. The renaming I do later on in the series clarifies this,
but the comment should be fixed to match. I'll do that in v2.
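
Something like this, perhaps (proposed wording only):

	/*
	 * reloc_code is executed with the MMU off and the I-cache on,
	 * so clean it to the PoC and invalidate the corresponding
	 * I-cache lines before branching to it.
	 */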

Thanks,
/fuad

>
> >         return 0;
> >  }
> > --
> > 2.31.1.607.g51e8a6a459-goog
> >


* Re: [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions
  2021-05-11 15:49     ` Mark Rutland
@ 2021-05-12  9:51       ` Marc Zyngier
  2021-05-12 10:00         ` Mark Rutland
  2021-05-12 10:00       ` Fuad Tabba
  1 sibling, 1 reply; 32+ messages in thread
From: Marc Zyngier @ 2021-05-12  9:51 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Ard Biesheuvel, Fuad Tabba, Linux ARM, Will Deacon,
	Catalin Marinas, James Morse, Alexandru Elisei, Suzuki K Poulose

On 2021-05-11 16:49, Mark Rutland wrote:
> On Tue, May 11, 2021 at 05:09:18PM +0200, Ard Biesheuvel wrote:
>> On Tue, 11 May 2021 at 16:43, Fuad Tabba <tabba@google.com> wrote:
>> >
>> > Although naming across the codebase isn't that consistent, it
>> > tends to follow certain patterns. Moreover, the term "flush"
>> > isn't defined in the Arm Architecture reference manual, and might
>> > be interpreted to mean clean, invalidate, or both for a cache.
>> >
>> > Rename arm64-internal functions to make the naming internally
>> > consistent, as well as making it consistent with the Arm ARM, by
>> > clarifying whether the operation is a clean, invalidate, or both.
>> > Also specify which point the operation applies to, i.e., to the
>> > point of unification (PoU), coherence (PoC), or persistence
>> > (PoP).
>> >
>> > This commit applies the following sed transformation to all files
>> > under arch/arm64:
>> >
>> > "s/\b__flush_cache_range\b/__clean_inval_cache_pou_macro/g;"\
>> > "s/\b__flush_icache_range\b/__clean_inval_cache_pou/g;"\
> 
> For icaches, a "flush" is just an invalidate, so this doesn't need
> "clean".
> 
>> > "s/\binvalidate_icache_range\b/__inval_icache_pou/g;"\
>> > "s/\b__flush_dcache_area\b/__clean_inval_dcache_poc/g;"\
>> > "s/\b__inval_dcache_area\b/__inval_dcache_poc/g;"\
>> > "s/__clean_dcache_area_poc\b/__clean_dcache_poc/g;"\
>> > "s/\b__clean_dcache_area_pop\b/__clean_dcache_pop/g;"\
>> > "s/\b__clean_dcache_area_pou\b/__clean_dcache_pou/g;"\
>> > "s/\b__flush_cache_user_range\b/__clean_inval_cache_user_pou/g;"\
>> > "s/\b__flush_icache_all\b/__clean_inval_all_icache_pou/g;"
> 
> Likewise here.
> 
>> >
>> > Note that __clean_dcache_area_poc is deliberately missing a word
>> > boundary check to match the efistub symbols in image-vars.h.
>> >
>> > No functional change intended.
>> >
>> > Signed-off-by: Fuad Tabba <tabba@google.com>
>> 
>> I am a big fan of this change: code is so much easier to read if the
>> names of subroutines match their intent.
> 
> Likewise!
> 
>> I would suggest, though, that we get rid of all the leading
>> underscores while at it: we often use them when refactoring existing
>> routines into separate pieces (which is where at least some of these
>> came from), but here, they seem to have little value.
> 
> That all makes sense to me; I'd also suggest we make the cache type the
> prefix, e.g.
> 
> * icache_clean_pou

I guess you meant "icache_inval_pou", right, as per your comment above?

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions
  2021-05-11 15:09   ` Ard Biesheuvel
  2021-05-11 15:49     ` Mark Rutland
@ 2021-05-12  9:56     ` Fuad Tabba
  1 sibling, 0 replies; 32+ messages in thread
From: Fuad Tabba @ 2021-05-12  9:56 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux ARM, Will Deacon, Catalin Marinas, Mark Rutland,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose

Hi Ard,

> I am a big fan of this change: code is so much easier to read if the
> names of subroutines match their intent. I would suggest, though, that
> we get rid of all the leading underscores while at it: we often use
> them when refactoring existing routines into separate pieces (which is
> where at least some of these came from), but here, they seem to have
> little value.

Thank you. I'll remove the underscores in v2.
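
With the underscores gone, the set from this patch would tentatively
read:

	clean_inval_cache_pou(start, end)
	clean_inval_cache_user_pou(start, end)
	inval_icache_pou(start, end)
	clean_inval_dcache_poc(start, end)
	inval_dcache_poc(start, end)
	clean_dcache_poc(start, end)
	clean_dcache_pop(start, end)
	clean_dcache_pou(start, end)
	clean_inval_all_icache_pou()

subject to Mark's and Marc's comments on the other subthread about
making the cache type the prefix.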

Thanks,
/fuad

>
> > ---
> >  arch/arm64/include/asm/arch_gicv3.h |  2 +-
> >  arch/arm64/include/asm/cacheflush.h | 36 +++++++++----------
> >  arch/arm64/include/asm/efi.h        |  2 +-
> >  arch/arm64/include/asm/kvm_mmu.h    |  6 ++--
> >  arch/arm64/kernel/alternative.c     |  2 +-
> >  arch/arm64/kernel/efi-entry.S       |  4 +--
> >  arch/arm64/kernel/head.S            |  8 ++---
> >  arch/arm64/kernel/hibernate.c       | 12 +++----
> >  arch/arm64/kernel/idreg-override.c  |  2 +-
> >  arch/arm64/kernel/image-vars.h      |  2 +-
> >  arch/arm64/kernel/insn.c            |  2 +-
> >  arch/arm64/kernel/kaslr.c           |  6 ++--
> >  arch/arm64/kernel/machine_kexec.c   | 10 +++---
> >  arch/arm64/kernel/smp.c             |  4 +--
> >  arch/arm64/kernel/smp_spin_table.c  |  4 +--
> >  arch/arm64/kernel/sys_compat.c      |  2 +-
> >  arch/arm64/kvm/arm.c                |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +--
> >  arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
> >  arch/arm64/kvm/hyp/pgtable.c        |  4 +--
> >  arch/arm64/lib/uaccess_flushcache.c |  4 +--
> >  arch/arm64/mm/cache.S               | 56 ++++++++++++++---------------
> >  arch/arm64/mm/flush.c               | 12 +++----
> >  24 files changed, 95 insertions(+), 95 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
> > index ed1cc9d8e6df..4b7ac9098e8f 100644
> > --- a/arch/arm64/include/asm/arch_gicv3.h
> > +++ b/arch/arm64/include/asm/arch_gicv3.h
> > @@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
> >  #define gic_write_lpir(v, c)           writeq_relaxed(v, c)
> >
> >  #define gic_flush_dcache_to_poc(a,l)   \
> > -       __flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> > +       __clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
> >
> >  #define gits_read_baser(c)             readq_relaxed(c)
> >  #define gits_write_baser(v, c)         writeq_relaxed(v, c)
> > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > index 4b91d3530013..526eee4522eb 100644
> > --- a/arch/arm64/include/asm/cacheflush.h
> > +++ b/arch/arm64/include/asm/cacheflush.h
> > @@ -34,54 +34,54 @@
> >   *             - start  - virtual start address
> >   *             - end    - virtual end address
> >   *
> > - *     __flush_icache_range(start, end)
> > + *     __clean_inval_cache_pou(start, end)
> >   *
> >   *             Ensure coherency between the I-cache and the D-cache region to
> >   *             the Point of Unification.
> >   *
> > - *     __flush_cache_user_range(start, end)
> > + *     __clean_inval_cache_user_pou(start, end)
> >   *
> >   *             Ensure coherency between the I-cache and the D-cache region to
> >   *             the Point of Unification.
> >   *             Use only if the region might access user memory.
> >   *
> > - *     invalidate_icache_range(start, end)
> > + *     __inval_icache_pou(start, end)
> >   *
> >   *             Invalidate I-cache region to the Point of Unification.
> >   *
> > - *     __flush_dcache_area(start, end)
> > + *     __clean_inval_dcache_poc(start, end)
> >   *
> >   *             Clean and invalidate D-cache region to the Point of Coherence.
> >   *
> > - *     __inval_dcache_area(start, end)
> > + *     __inval_dcache_poc(start, end)
> >   *
> >   *             Invalidate D-cache region to the Point of Coherence.
> >   *
> > - *     __clean_dcache_area_poc(start, end)
> > + *     __clean_dcache_poc(start, end)
> >   *
> >   *             Clean D-cache region to the Point of Coherence.
> >   *
> > - *     __clean_dcache_area_pop(start, end)
> > + *     __clean_dcache_pop(start, end)
> >   *
> >   *             Clean D-cache region to the Point of Persistence.
> >   *
> > - *     __clean_dcache_area_pou(start, end)
> > + *     __clean_dcache_pou(start, end)
> >   *
> >   *             Clean D-cache region to the Point of Unification.
> >   */
> > -extern void __flush_icache_range(unsigned long start, unsigned long end);
> > -extern void invalidate_icache_range(unsigned long start, unsigned long end);
> > -extern void __flush_dcache_area(unsigned long start, unsigned long end);
> > -extern void __inval_dcache_area(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
> > -extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
> > -extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> > +extern void __clean_inval_cache_pou(unsigned long start, unsigned long end);
> > +extern void __inval_icache_pou(unsigned long start, unsigned long end);
> > +extern void __clean_inval_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __inval_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_poc(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_pop(unsigned long start, unsigned long end);
> > +extern void __clean_dcache_pou(unsigned long start, unsigned long end);
> > +extern long __clean_inval_cache_user_pou(unsigned long start, unsigned long end);
> >  extern void sync_icache_aliases(unsigned long start, unsigned long end);
> >
> >  static inline void flush_icache_range(unsigned long start, unsigned long end)
> >  {
> > -       __flush_icache_range(start, end);
> > +       __clean_inval_cache_pou(start, end);
> >
> >         /*
> >          * IPI all online CPUs so that they undergo a context synchronization
> > @@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
> >  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
> >  extern void flush_dcache_page(struct page *);
> >
> > -static __always_inline void __flush_icache_all(void)
> > +static __always_inline void __clean_inval_all_icache_pou(void)
> >  {
> >         if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
> >                 return;
> > diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
> > index 0ae2397076fd..d1e2a4bf8def 100644
> > --- a/arch/arm64/include/asm/efi.h
> > +++ b/arch/arm64/include/asm/efi.h
> > @@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
> >
> >  static inline void efi_capsule_flush_cache_range(void *addr, int size)
> >  {
> > -       __flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> > +       __clean_inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
> >  }
> >
> >  #endif /* _ASM_EFI_H */
> > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > index 33293d5855af..29d2aa6f3940 100644
> > --- a/arch/arm64/include/asm/kvm_mmu.h
> > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > @@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
> >  struct kvm;
> >
> >  #define kvm_flush_dcache_to_poc(a,l)   \
> > -       __flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> > +       __clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
> >
> >  static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
> >  {
> > @@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
> >  {
> >         if (icache_is_aliasing()) {
> >                 /* any kind of VIPT cache */
> > -               __flush_icache_all();
> > +               __clean_inval_all_icache_pou();
> >         } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> >                 /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
> >                 void *va = page_address(pfn_to_page(pfn));
> >
> > -               invalidate_icache_range((unsigned long)va,
> > +               __inval_icache_pou((unsigned long)va,
> >                                         (unsigned long)va + size);
> >         }
> >  }
> > diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> > index c906d20c7b52..ea2d52fa9a0c 100644
> > --- a/arch/arm64/kernel/alternative.c
> > +++ b/arch/arm64/kernel/alternative.c
> > @@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
> >          */
> >         if (!is_module) {
> >                 dsb(ish);
> > -               __flush_icache_all();
> > +               __clean_inval_all_icache_pou();
> >                 isb();
> >
> >                 /* Ignore ARM64_CB bit from feature mask */
> > diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
> > index 72e6a580290a..230506f460ec 100644
> > --- a/arch/arm64/kernel/efi-entry.S
> > +++ b/arch/arm64/kernel/efi-entry.S
> > @@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
> >          */
> >         ldr     w1, =kernel_size
> >         add     x1, x0, x1
> > -       bl      __clean_dcache_area_poc
> > +       bl      __clean_dcache_poc
> >         ic      ialluis
> >
> >         /*
> > @@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
> >          */
> >         adr     x0, 0f
> >         adr     x1, 3f
> > -       bl      __clean_dcache_area_poc
> > +       bl      __clean_dcache_poc
> >  0:
> >         /* Turn off Dcache and MMU */
> >         mrs     x0, CurrentEL
> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index 8df0ac8d9123..ea0447c5010a 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
> >                                                 // MMU off
> >
> >         add     x1, x0, #0x20                   // 4 x 8 bytes
> > -       b       __inval_dcache_area             // tail call
> > +       b       __inval_dcache_poc              // tail call
> >  SYM_CODE_END(preserve_boot_args)
> >
> >  /*
> > @@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> >          */
> >         adrp    x0, init_pg_dir
> >         adrp    x1, init_pg_end
> > -       bl      __inval_dcache_area
> > +       bl      __inval_dcache_poc
> >
> >         /*
> >          * Clear the init page tables.
> > @@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> >
> >         adrp    x0, idmap_pg_dir
> >         adrp    x1, idmap_pg_end
> > -       bl      __inval_dcache_area
> > +       bl      __inval_dcache_poc
> >
> >         adrp    x0, init_pg_dir
> >         adrp    x1, init_pg_end
> > -       bl      __inval_dcache_area
> > +       bl      __inval_dcache_poc
> >
> >         ret     x28
> >  SYM_FUNC_END(__create_page_tables)
> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > index b40ddce71507..ec871b24fd5b 100644
> > --- a/arch/arm64/kernel/hibernate.c
> > +++ b/arch/arm64/kernel/hibernate.c
> > @@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
> >                 return -ENOMEM;
> >
> >         memcpy(page, src_start, length);
> > -       __flush_icache_range((unsigned long)page, (unsigned long)page + length);
> > +       __clean_inval_cache_pou((unsigned long)page, (unsigned long)page + length);
> >         rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
> >         if (rc)
> >                 return rc;
> > @@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
> >                 ret = swsusp_save();
> >         } else {
> >                 /* Clean kernel core startup/idle code to PoC*/
> > -               __flush_dcache_area((unsigned long)__mmuoff_data_start,
> > +               __clean_inval_dcache_poc((unsigned long)__mmuoff_data_start,
> >                                     (unsigned long)__mmuoff_data_end);
> > -               __flush_dcache_area((unsigned long)__idmap_text_start,
> > +               __clean_inval_dcache_poc((unsigned long)__idmap_text_start,
> >                                     (unsigned long)__idmap_text_end);
> >
> >                 /* Clean kvm setup code to PoC? */
> >                 if (el2_reset_needed()) {
> > -                       __flush_dcache_area(
> > +                       __clean_inval_dcache_poc(
> >                                 (unsigned long)__hyp_idmap_text_start,
> >                                 (unsigned long)__hyp_idmap_text_end);
> > -                       __flush_dcache_area((unsigned long)__hyp_text_start,
> > +                       __clean_inval_dcache_poc((unsigned long)__hyp_text_start,
> >                                             (unsigned long)__hyp_text_end);
> >                 }
> >
> > @@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
> >          * The hibernate exit text contains a set of el2 vectors, that will
> >          * be executed at el2 with the mmu off in order to reload hyp-stub.
> >          */
> > -       __flush_dcache_area((unsigned long)hibernate_exit,
> > +       __clean_inval_dcache_poc((unsigned long)hibernate_exit,
> >                             (unsigned long)hibernate_exit + exit_size);
> >
> >         /*
> > diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
> > index 3dd515baf526..6b4b5727f2db 100644
> > --- a/arch/arm64/kernel/idreg-override.c
> > +++ b/arch/arm64/kernel/idreg-override.c
> > @@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
> >
> >         for (i = 0; i < ARRAY_SIZE(regs); i++) {
> >                 if (regs[i]->override)
> > -                       __flush_dcache_area((unsigned long)regs[i]->override,
> > +                       __clean_inval_dcache_poc((unsigned long)regs[i]->override,
> >                                             (unsigned long)regs[i]->override +
> >                                             sizeof(*regs[i]->override));
> >         }
> > diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> > index bcf3c2755370..14beda6a573d 100644
> > --- a/arch/arm64/kernel/image-vars.h
> > +++ b/arch/arm64/kernel/image-vars.h
> > @@ -35,7 +35,7 @@ __efistub_strnlen             = __pi_strnlen;
> >  __efistub_strcmp               = __pi_strcmp;
> >  __efistub_strncmp              = __pi_strncmp;
> >  __efistub_strrchr              = __pi_strrchr;
> > -__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
> > +__efistub___clean_dcache_poc = __pi___clean_dcache_poc;
> >
> >  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> >  __efistub___memcpy             = __pi_memcpy;
> > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > index 6c0de2f60ea9..11c7be09e305 100644
> > --- a/arch/arm64/kernel/insn.c
> > +++ b/arch/arm64/kernel/insn.c
> > @@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
> >
> >         ret = aarch64_insn_write(tp, insn);
> >         if (ret == 0)
> > -               __flush_icache_range((uintptr_t)tp,
> > +               __clean_inval_cache_pou((uintptr_t)tp,
> >                                      (uintptr_t)tp + AARCH64_INSN_SIZE);
> >
> >         return ret;
> > diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> > index 49cccd03cb37..038a4cc7de93 100644
> > --- a/arch/arm64/kernel/kaslr.c
> > +++ b/arch/arm64/kernel/kaslr.c
> > @@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
> >          * we end up running with module randomization disabled.
> >          */
> >         module_alloc_base = (u64)_etext - MODULES_VSIZE;
> > -       __flush_dcache_area((unsigned long)&module_alloc_base,
> > +       __clean_inval_dcache_poc((unsigned long)&module_alloc_base,
> >                             (unsigned long)&module_alloc_base +
> >                                     sizeof(module_alloc_base));
> >
> > @@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
> >         module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
> >         module_alloc_base &= PAGE_MASK;
> >
> > -       __flush_dcache_area((unsigned long)&module_alloc_base,
> > +       __clean_inval_dcache_poc((unsigned long)&module_alloc_base,
> >                             (unsigned long)&module_alloc_base +
> >                                     sizeof(module_alloc_base));
> > -       __flush_dcache_area((unsigned long)&memstart_offset_seed,
> > +       __clean_inval_dcache_poc((unsigned long)&memstart_offset_seed,
> >                             (unsigned long)&memstart_offset_seed +
> >                                     sizeof(memstart_offset_seed));
> >
> > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > index 4cada9000acf..0e20a789b03e 100644
> > --- a/arch/arm64/kernel/machine_kexec.c
> > +++ b/arch/arm64/kernel/machine_kexec.c
> > @@ -69,10 +69,10 @@ int machine_kexec_post_load(struct kimage *kimage)
> >         kexec_image_info(kimage);
> >
> >         /* Flush the reloc_code in preparation for its execution. */
> > -       __flush_dcache_area((unsigned long)reloc_code,
> > +       __clean_inval_dcache_poc((unsigned long)reloc_code,
> >                             (unsigned long)reloc_code +
> >                                     arm64_relocate_new_kernel_size);
> > -       invalidate_icache_range((uintptr_t)reloc_code,
> > +       __inval_icache_pou((uintptr_t)reloc_code,
> >                                 (uintptr_t)reloc_code +
> >                                         arm64_relocate_new_kernel_size);
> >
> > @@ -108,7 +108,7 @@ static void kexec_list_flush(struct kimage *kimage)
> >                 unsigned long addr;
> >
> >                 /* flush the list entries. */
> > -               __flush_dcache_area((unsigned long)entry,
> > +               __clean_inval_dcache_poc((unsigned long)entry,
> >                                     (unsigned long)entry +
> >                                             sizeof(kimage_entry_t));
> >
> > @@ -125,7 +125,7 @@ static void kexec_list_flush(struct kimage *kimage)
> >                         break;
> >                 case IND_SOURCE:
> >                         /* flush the source pages. */
> > -                       __flush_dcache_area(addr, addr + PAGE_SIZE);
> > +                       __clean_inval_dcache_poc(addr, addr + PAGE_SIZE);
> >                         break;
> >                 case IND_DESTINATION:
> >                         break;
> > @@ -152,7 +152,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
> >                         kimage->segment[i].memsz,
> >                         kimage->segment[i].memsz /  PAGE_SIZE);
> >
> > -               __flush_dcache_area(
> > +               __clean_inval_dcache_poc(
> >                         (unsigned long)phys_to_virt(kimage->segment[i].mem),
> >                         (unsigned long)phys_to_virt(kimage->segment[i].mem) +
> >                                 kimage->segment[i].memsz);
> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > index 5fcdee331087..2044210ed15a 100644
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> >         secondary_data.task = idle;
> >         secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
> >         update_cpu_boot_status(CPU_MMU_OFF);
> > -       __flush_dcache_area((unsigned long)&secondary_data,
> > +       __clean_inval_dcache_poc((unsigned long)&secondary_data,
> >                             (unsigned long)&secondary_data +
> >                                     sizeof(secondary_data));
> >
> > @@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> >         pr_crit("CPU%u: failed to come online\n", cpu);
> >         secondary_data.task = NULL;
> >         secondary_data.stack = NULL;
> > -       __flush_dcache_area((unsigned long)&secondary_data,
> > +       __clean_inval_dcache_poc((unsigned long)&secondary_data,
> >                             (unsigned long)&secondary_data +
> >                                     sizeof(secondary_data));
> >         status = READ_ONCE(secondary_data.status);
> > diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> > index 58d804582a35..a946ccb9791e 100644
> > --- a/arch/arm64/kernel/smp_spin_table.c
> > +++ b/arch/arm64/kernel/smp_spin_table.c
> > @@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
> >         unsigned long size = sizeof(secondary_holding_pen_release);
> >
> >         secondary_holding_pen_release = val;
> > -       __flush_dcache_area((unsigned long)start, (unsigned long)start + size);
> > +       __clean_inval_dcache_poc((unsigned long)start, (unsigned long)start + size);
> >  }
> >
> >
> > @@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
> >          * the boot protocol.
> >          */
> >         writeq_relaxed(pa_holding_pen, release_addr);
> > -       __flush_dcache_area((__force unsigned long)release_addr,
> > +       __clean_inval_dcache_poc((__force unsigned long)release_addr,
> >                             (__force unsigned long)release_addr +
> >                                     sizeof(*release_addr));
> >
> > diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
> > index 265fe3eb1069..fdd415f8d841 100644
> > --- a/arch/arm64/kernel/sys_compat.c
> > +++ b/arch/arm64/kernel/sys_compat.c
> > @@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
> >                         dsb(ish);
> >                 }
> >
> > -               ret = __flush_cache_user_range(start, start + chunk);
> > +               ret = __clean_inval_cache_user_pou(start, start + chunk);
> >                 if (ret)
> >                         return ret;
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 1cb39c0803a4..edeca89405ff 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> >                 if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
> >                         stage2_unmap_vm(vcpu->kvm);
> >                 else
> > -                       __flush_icache_all();
> > +                       __clean_inval_all_icache_pou();
> >         }
> >
> >         vcpu_reset_hcr(vcpu);
> > diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
> > index 36cef6915428..a906dd596e66 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/cache.S
> > +++ b/arch/arm64/kvm/hyp/nvhe/cache.S
> > @@ -7,7 +7,7 @@
> >  #include <asm/assembler.h>
> >  #include <asm/alternative.h>
> >
> > -SYM_FUNC_START_PI(__flush_dcache_area)
> > +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
> >         dcache_by_line_op civac, sy, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END_PI(__flush_dcache_area)
> > +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> > diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> > index 5dffe928f256..a16719f5068d 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> > @@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
> >         for (i = 0; i < hyp_nr_cpus; i++) {
> >                 params = per_cpu_ptr(&kvm_init_params, i);
> >                 params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
> > -               __flush_dcache_area((unsigned long)params,
> > +               __clean_inval_dcache_poc((unsigned long)params,
> >                                     (unsigned long)params + sizeof(*params));
> >         }
> >  }
> > diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> > index 83dc3b271bc5..184c9c7c13bd 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> > @@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
> >          * you should be running with VHE enabled.
> >          */
> >         if (icache_is_vpipt())
> > -               __flush_icache_all();
> > +               __clean_inval_all_icache_pou();
> >
> >         __tlb_switch_to_host(&cxt);
> >  }
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index 10d2f04013d4..fb2613f458de 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> >         if (need_flush) {
> >                 kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
> >
> > -               __flush_dcache_area((unsigned long)pte_follow,
> > +               __clean_inval_dcache_poc((unsigned long)pte_follow,
> >                                     (unsigned long)pte_follow +
> >                                             kvm_granule_size(level));
> >         }
> > @@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> >                 return 0;
> >
> >         pte_follow = kvm_pte_follow(pte, mm_ops);
> > -       __flush_dcache_area((unsigned long)pte_follow,
> > +       __clean_inval_dcache_poc((unsigned long)pte_follow,
> >                             (unsigned long)pte_follow +
> >                                     kvm_granule_size(level));
> >         return 0;
> > diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
> > index 62ea989effe8..b1a6d9823864 100644
> > --- a/arch/arm64/lib/uaccess_flushcache.c
> > +++ b/arch/arm64/lib/uaccess_flushcache.c
> > @@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
> >          * barrier to order the cache maintenance against the memcpy.
> >          */
> >         memcpy(dst, src, cnt);
> > -       __clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
> > +       __clean_dcache_pop((unsigned long)dst, (unsigned long)dst + cnt);
> >  }
> >  EXPORT_SYMBOL_GPL(memcpy_flushcache);
> >
> > @@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
> >         rc = raw_copy_from_user(to, from, n);
> >
> >         /* See above */
> > -       __clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
> > +       __clean_dcache_pop((unsigned long)to, (unsigned long)to + n - rc);
> >         return rc;
> >  }
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index d8434e57fab3..2df7212de799 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -15,7 +15,7 @@
> >  #include <asm/asm-uaccess.h>
> >
> >  /*
> > - *     __flush_cache_range(start,end) [needs_uaccess]
> > + *     __clean_inval_cache_pou_macro(start,end) [needs_uaccess]
> >   *
> >   *     Ensure that the I and D caches are coherent within specified region.
> >   *     This is typically used when code has been written to a memory region,
> > @@ -25,7 +25,7 @@
> >   *     - end           - virtual end address of region
> >   *     - needs_uaccess - (macro parameter) might access user space memory
> >   */
> > -.macro __flush_cache_range, needs_uaccess
> > +.macro __clean_inval_cache_pou_macro, needs_uaccess
> >         .if     \needs_uaccess
> >         uaccess_ttbr0_enable x2, x3, x4
> >         .endif
> > @@ -77,12 +77,12 @@ alternative_else_nop_endif
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START(__flush_icache_range)
> > -       __flush_cache_range needs_uaccess=0
> > -SYM_FUNC_END(__flush_icache_range)
> > +SYM_FUNC_START(__clean_inval_cache_pou)
> > +       __clean_inval_cache_pou_macro needs_uaccess=0
> > +SYM_FUNC_END(__clean_inval_cache_pou)
> >
> >  /*
> > - *     __flush_cache_user_range(start,end)
> > + *     __clean_inval_cache_user_pou(start,end)
> >   *
> >   *     Ensure that the I and D caches are coherent within specified region.
> >   *     This is typically used when code has been written to a memory region,
> > @@ -91,19 +91,19 @@ SYM_FUNC_END(__flush_icache_range)
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START(__flush_cache_user_range)
> > -       __flush_cache_range needs_uaccess=1
> > -SYM_FUNC_END(__flush_cache_user_range)
> > +SYM_FUNC_START(__clean_inval_cache_user_pou)
> > +       __clean_inval_cache_pou_macro needs_uaccess=1
> > +SYM_FUNC_END(__clean_inval_cache_user_pou)
> >
> >  /*
> > - *     invalidate_icache_range(start,end)
> > + *     __inval_icache_pou(start,end)
> >   *
> >   *     Ensure that the I cache is invalid within specified region.
> >   *
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START(invalidate_icache_range)
> > +SYM_FUNC_START(__inval_icache_pou)
> >  alternative_if ARM64_HAS_CACHE_DIC
> >         isb
> >         ret
> > @@ -111,10 +111,10 @@ alternative_else_nop_endif
> >
> >         invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
> >         ret
> > -SYM_FUNC_END(invalidate_icache_range)
> > +SYM_FUNC_END(__inval_icache_pou)
> >
> >  /*
> > - *     __flush_dcache_area(start, end)
> > + *     __clean_inval_dcache_poc(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are cleaned and invalidated to the PoC.
> > @@ -122,13 +122,13 @@ SYM_FUNC_END(invalidate_icache_range)
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START_PI(__flush_dcache_area)
> > +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
> >         dcache_by_line_op civac, sy, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END_PI(__flush_dcache_area)
> > +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> >
> >  /*
> > - *     __clean_dcache_area_pou(start, end)
> > + *     __clean_dcache_pou(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are cleaned to the PoU.
> > @@ -136,17 +136,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START(__clean_dcache_area_pou)
> > +SYM_FUNC_START(__clean_dcache_pou)
> >  alternative_if ARM64_HAS_CACHE_IDC
> >         dsb     ishst
> >         ret
> >  alternative_else_nop_endif
> >         dcache_by_line_op cvau, ish, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END(__clean_dcache_area_pou)
> > +SYM_FUNC_END(__clean_dcache_pou)
> >
> >  /*
> > - *     __inval_dcache_area(start, end)
> > + *     __inval_dcache_poc(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are invalidated. Any partial lines at the ends of the interval are
> > @@ -156,7 +156,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
> >   *     - end     - kernel end address of region
> >   */
> >  SYM_FUNC_START_LOCAL(__dma_inv_area)
> > -SYM_FUNC_START_PI(__inval_dcache_area)
> > +SYM_FUNC_START_PI(__inval_dcache_poc)
> >         /* FALLTHROUGH */
> >
> >  /*
> > @@ -181,11 +181,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
> >         b.lo    2b
> >         dsb     sy
> >         ret
> > -SYM_FUNC_END_PI(__inval_dcache_area)
> > +SYM_FUNC_END_PI(__inval_dcache_poc)
> >  SYM_FUNC_END(__dma_inv_area)
> >
> >  /*
> > - *     __clean_dcache_area_poc(start, end)
> > + *     __clean_dcache_poc(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are cleaned to the PoC.
> > @@ -194,7 +194,7 @@ SYM_FUNC_END(__dma_inv_area)
> >   *     - end     - virtual end address of region
> >   */
> >  SYM_FUNC_START_LOCAL(__dma_clean_area)
> > -SYM_FUNC_START_PI(__clean_dcache_area_poc)
> > +SYM_FUNC_START_PI(__clean_dcache_poc)
> >         /* FALLTHROUGH */
> >
> >  /*
> > @@ -204,11 +204,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
> >   */
> >         dcache_by_line_op cvac, sy, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END_PI(__clean_dcache_area_poc)
> > +SYM_FUNC_END_PI(__clean_dcache_poc)
> >  SYM_FUNC_END(__dma_clean_area)
> >
> >  /*
> > - *     __clean_dcache_area_pop(start, end)
> > + *     __clean_dcache_pop(start, end)
> >   *
> >   *     Ensure that any D-cache lines for the interval [start, end)
> >   *     are cleaned to the PoP.
> > @@ -216,13 +216,13 @@ SYM_FUNC_END(__dma_clean_area)
> >   *     - start   - virtual start address of region
> >   *     - end     - virtual end address of region
> >   */
> > -SYM_FUNC_START_PI(__clean_dcache_area_pop)
> > +SYM_FUNC_START_PI(__clean_dcache_pop)
> >         alternative_if_not ARM64_HAS_DCPOP
> > -       b       __clean_dcache_area_poc
> > +       b       __clean_dcache_poc
> >         alternative_else_nop_endif
> >         dcache_by_line_op cvap, sy, x0, x1, x2, x3
> >         ret
> > -SYM_FUNC_END_PI(__clean_dcache_area_pop)
> > +SYM_FUNC_END_PI(__clean_dcache_pop)
> >
> >  /*
> >   *     __dma_flush_area(start, size)
> > diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> > index 143f625e7727..005b92148252 100644
> > --- a/arch/arm64/mm/flush.c
> > +++ b/arch/arm64/mm/flush.c
> > @@ -17,14 +17,14 @@
> >  void sync_icache_aliases(unsigned long start, unsigned long end)
> >  {
> >         if (icache_is_aliasing()) {
> > -               __clean_dcache_area_pou(start, end);
> > -               __flush_icache_all();
> > +               __clean_dcache_pou(start, end);
> > +               __clean_inval_all_icache_pou();
> >         } else {
> >                 /*
> >                  * Don't issue kick_all_cpus_sync() after I-cache invalidation
> >                  * for user mappings.
> >                  */
> > -               __flush_icache_range(start, end);
> > +               __clean_inval_cache_pou(start, end);
> >         }
> >  }
> >
> > @@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
> >  /*
> >   * Additional functions defined in assembly.
> >   */
> > -EXPORT_SYMBOL(__flush_icache_range);
> > +EXPORT_SYMBOL(__clean_inval_cache_pou);
> >
> >  #ifdef CONFIG_ARCH_HAS_PMEM_API
> >  void arch_wb_cache_pmem(void *addr, size_t size)
> >  {
> >         /* Ensure order against any prior non-cacheable writes */
> >         dmb(osh);
> > -       __clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
> > +       __clean_dcache_pop((unsigned long)addr, (unsigned long)addr + size);
> >  }
> >  EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
> >
> >  void arch_invalidate_pmem(void *addr, size_t size)
> >  {
> > -       __inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> > +       __inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
> >  }
> >  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
> >  #endif
> > --
> > 2.31.1.607.g51e8a6a459-goog
> >


* Re: [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range
  2021-05-12  8:52     ` Fuad Tabba
@ 2021-05-12  9:59       ` Mark Rutland
  2021-05-12 10:29         ` Fuad Tabba
  0 siblings, 1 reply; 32+ messages in thread
From: Mark Rutland @ 2021-05-12  9:59 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, ardb, James Morse,
	Alexandru Elisei, Suzuki K Poulose

On Wed, May 12, 2021 at 09:52:28AM +0100, Fuad Tabba wrote:
> Hi Mark,
> 
> > > No functional change intended.
> >
> > There is a performance change here, since the existing
> > `__flush_cache_user_range` takes IDC and DIC into account, whereas
> > `invalidate_icache_by_line` does not.
> 
> You're right. There is a performance change in this patch and a couple
> of the others, which I will note in v2. However, I don't think that
> this patch changes the behavior when it comes to IDC and DIC, does it?

It shouldn't be a functional problem, but it means that the new
`__flush_icache_range` will always perform redundant I-cache maintenance
rather than skipping it when the CPU has DIC=1.

It would be nice if we could structure this to take DIC into account
either in the new `__flush_icache_range`, or in the
`invalidate_icache_by_line` helper.
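
For instance, the helper could grow the same shortcut the DIC paths in
this patch already use (untested sketch, label number made up):

	.macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
alternative_if ARM64_HAS_CACHE_DIC
	isb				// DIC=1: no ic ivau needed
	b	9996f
alternative_else_nop_endif
	... existing ic ivau loop ...
9996:
	.endm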

> > There's also an existing oversight where `__flush_cache_user_range`
> > takes ARM64_WORKAROUND_CLEAN_CACHE into account, but
> > `invalidate_icache_by_line` does not.

Sorry about this. I was evidently confused, as this does not make any
sense. ARM64_WORKAROUND_CLEAN_CACHE doesn't matter for
`invalidate_icache_by_line`, and `dcache_by_line_op` already does the
right thing via `__dcache_op_workaround_clean_cache`.
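
(For reference, that's the assembler.h helper which hedges dc cvau/cvac
with dc civac on affected parts; roughly:

	.macro	__dcache_op_workaround_clean_cache, op, kaddr
alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
	dc	\op, \kaddr
alternative_else
	dc	civac, \kaddr
alternative_endif
	.endm

so dcache_by_line_op's cvau/cvac cases get the erratum handling for
free.)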

> I'd be happy to address that in v2, but let me make sure I understand
> the issue properly.
> 
> Errata 819472 and friends (ARM64_WORKAROUND_CLEAN_CACHE) are related
> to cache maintenance operations on data caches happening concurrently
> with other accesses to the same address. The two places
> invalidate_icache_by_line is used in conjunction with data caches are
> __flush_icache_range and __flush_cache_user_range (which share the
> same code before and after my patch series). In both cases,
> invalidate_icache_by_line is called after the workaround is applied.
> The third and only other user of invalidate_icache_by_line is
> invalidate_icache_range, which only performs cache maintenance on the
> icache.
> 
> The concern is that invalidate_icache_range might be performing a
> cache maintenance operation on an address concurrently with another
> processor performing a dc operation on the same address. Therefore,
> invalidate_icache_range should perform DC CIVAC on the line before
> invalidate_icache_by_line if ARM64_WORKAROUND_CLEAN_CACHE applies. Is
> that right?
> 
> https://documentation-service.arm.com/static/5fa29fddb209f547eebd361d

Sorry, I had misread the code, and I don't think there's a bug to fix
here after all. Regardless, thanks for digging into that and trying to
make sense of my bogus suggestion.

> > Arguably similar is true in `swsusp_arch_suspend_exit`, but for that
> > we could add a comment and always use `DC CIVAC`.
> 
> I can do that in v2 as well.

A separate patch for `swsusp_arch_suspend_exit` would be great, since
that is something we should backport to stable as a fix.
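
(That one should be trivial; assuming the copy loop in
swsusp_arch_suspend_exit still cleans with `dc cvac`, it'd be roughly:

-	dc	cvac, x4
+	/* use civac: see ARM64_WORKAROUND_CLEAN_CACHE */
+	dc	civac, x4

i.e. unconditionally using the stronger op, since it's not a hot path.)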

Thanks,
Mark.

> > > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > > Reported-by: Will Deacon <will@kernel.org>
> > > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > ---
> > >  arch/arm64/include/asm/assembler.h | 13 ++++--
> > >  arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
> > >  2 files changed, 54 insertions(+), 23 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > > index 8418c1bd8f04..6ff7a3a3b238 100644
> > > --- a/arch/arm64/include/asm/assembler.h
> > > +++ b/arch/arm64/include/asm/assembler.h
> > > @@ -426,16 +426,21 @@ alternative_endif
> > >   * Macro to perform an instruction cache maintenance for the interval
> > >   * [start, end)
> > >   *
> > > - *   start, end:     virtual addresses describing the region
> > > - *   label:          A label to branch to on user fault.
> > > - *   Corrupts:       tmp1, tmp2
> > > + *   start, end:     virtual addresses describing the region
> > > + *   needs_uaccess:  might access user space memory
> > > + *   label:          label to branch to on user fault (if needs_uaccess)
> > > + *   Corrupts:       tmp1, tmp2
> > >   */
> > > -     .macro invalidate_icache_by_line start, end, tmp1, tmp2, label
> > > +     .macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
> > >       icache_line_size \tmp1, \tmp2
> > >       sub     \tmp2, \tmp1, #1
> > >       bic     \tmp2, \start, \tmp2
> > >  9997:
> > > +     .if     \needs_uaccess
> > >  USER(\label, ic      ivau, \tmp2)                    // invalidate I line PoU
> > > +     .else
> > > +     ic      ivau, \tmp2
> > > +     .endif
> > >       add     \tmp2, \tmp2, \tmp1
> > >       cmp     \tmp2, \end
> > >       b.lo    9997b
> > > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > > index 2d881f34dd9d..092f73acdf9a 100644
> > > --- a/arch/arm64/mm/cache.S
> > > +++ b/arch/arm64/mm/cache.S
> > > @@ -15,30 +15,20 @@
> > >  #include <asm/asm-uaccess.h>
> > >
> > >  /*
> > > - *   flush_icache_range(start,end)
> > > + *   __flush_cache_range(start,end) [needs_uaccess]
> > >   *
> > >   *   Ensure that the I and D caches are coherent within specified region.
> > >   *   This is typically used when code has been written to a memory region,
> > >   *   and will be executed.
> > >   *
> > > - *   - start   - virtual start address of region
> > > - *   - end     - virtual end address of region
> > > + *   - start         - virtual start address of region
> > > + *   - end           - virtual end address of region
> > > + *   - needs_uaccess - (macro parameter) might access user space memory
> > >   */
> > > -SYM_FUNC_START(__flush_icache_range)
> > > -     /* FALLTHROUGH */
> > > -
> > > -/*
> > > - *   __flush_cache_user_range(start,end)
> > > - *
> > > - *   Ensure that the I and D caches are coherent within specified region.
> > > - *   This is typically used when code has been written to a memory region,
> > > - *   and will be executed.
> > > - *
> > > - *   - start   - virtual start address of region
> > > - *   - end     - virtual end address of region
> > > - */
> > > -SYM_FUNC_START(__flush_cache_user_range)
> > > +.macro       __flush_cache_range, needs_uaccess
> > > +     .if     \needs_uaccess
> > >       uaccess_ttbr0_enable x2, x3, x4
> > > +     .endif
> > >  alternative_if ARM64_HAS_CACHE_IDC
> > >       dsb     ishst
> > >       b       7f
> > > @@ -47,7 +37,11 @@ alternative_else_nop_endif
> > >       sub     x3, x2, #1
> > >       bic     x4, x0, x3
> > >  1:
> > > +     .if     \needs_uaccess
> > >  user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > > +     .else
> > > +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > > +     .endif
> > >       add     x4, x4, x2
> > >       cmp     x4, x1
> > >       b.lo    1b
> > > @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
> > >       isb
> > >       b       8f
> > >  alternative_else_nop_endif
> > > -     invalidate_icache_by_line x0, x1, x2, x3, 9f
> > > +     invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
> > >  8:   mov     x0, #0
> > >  1:
> > > +     .if     \needs_uaccess
> > >       uaccess_ttbr0_disable x1, x2
> > > +     .endif
> > >       ret
> > > +
> > > +     .if     \needs_uaccess
> > >  9:
> > >       mov     x0, #-EFAULT
> > >       b       1b
> > > +     .endif
> > > +.endm
> > > +
> > > +/*
> > > + *   flush_icache_range(start,end)
> > > + *
> > > + *   Ensure that the I and D caches are coherent within specified region.
> > > + *   This is typically used when code has been written to a memory region,
> > > + *   and will be executed.
> > > + *
> > > + *   - start   - virtual start address of region
> > > + *   - end     - virtual end address of region
> > > + */
> > > +SYM_FUNC_START(__flush_icache_range)
> > > +     __flush_cache_range needs_uaccess=0
> > >  SYM_FUNC_END(__flush_icache_range)
> > > +
> > > +/*
> > > + *   __flush_cache_user_range(start,end)
> > > + *
> > > + *   Ensure that the I and D caches are coherent within specified region.
> > > + *   This is typically used when code has been written to a memory region,
> > > + *   and will be executed.
> > > + *
> > > + *   - start   - virtual start address of region
> > > + *   - end     - virtual end address of region
> > > + */
> > > +SYM_FUNC_START(__flush_cache_user_range)
> > > +     __flush_cache_range needs_uaccess=1
> > >  SYM_FUNC_END(__flush_cache_user_range)
> > >
> > >  /*
> > > @@ -86,7 +112,7 @@ alternative_else_nop_endif
> > >
> > >       uaccess_ttbr0_enable x2, x3, x4
> > >
> > > -     invalidate_icache_by_line x0, x1, x2, x3, 2f
> > > +     invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
> > >       mov     x0, xzr
> > >  1:
> > >       uaccess_ttbr0_disable x1, x2
> > > --
> > > 2.31.1.607.g51e8a6a459-goog
> > >


* Re: [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions
  2021-05-11 15:49     ` Mark Rutland
  2021-05-12  9:51       ` Marc Zyngier
@ 2021-05-12 10:00       ` Fuad Tabba
  2021-05-12 10:04         ` Mark Rutland
  1 sibling, 1 reply; 32+ messages in thread
From: Fuad Tabba @ 2021-05-12 10:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Ard Biesheuvel, Linux ARM, Will Deacon, Catalin Marinas,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose

Hi Mark,

> > > "s/\b__flush_icache_range\b/__clean_inval_cache_pou/g;"\
>
> For icaches, a "flush" is just an invalidate, so this doesn't need
> "clean".

This is one of the reasons for this patch. Although you are correct
about what the name __flush_icache_range implies, it wasn't doing only
that: it cleans the D-cache to the PoU and then invalidates the I-cache,
so it operates on both caches. The new naming scheme with the cache type
as a prefix that you suggest below should make that clearer.
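
To make that concrete, ignoring the IDC/DIC fast paths and the
CLEAN_CACHE erratum, what __flush_icache_range(start, end) actually
does per line is:

	dc	cvau, <line>	// clean D-cache line to PoU
	dsb	ish
	ic	ivau, <line>	// invalidate I-cache line to PoU
	dsb	ish
	isb

i.e. a D-side clean followed by an I-side invalidate, not just an
I-cache "flush".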

> > > "s/\b__flush_icache_all\b/__clean_inval_all_icache_pou/g;"
>
> Likewise here.

I'll fix that in v2.

> > >
> > > Note that __clean_dcache_area_poc is deliberately missing a word
> > > boundary check to match the efistub symbols in image-vars.h.
> > >
> > > No functional change intended.
> > >
> > > Signed-off-by: Fuad Tabba <tabba@google.com>
> >
> > I am a big fan of this change: code is so much easier to read if the
> > names of subroutines match their intent.
>
> Likewise!

Thanks!

> > I would suggest, though, that we get rid of all the leading
> > underscores while at it: we often use them when refactoring existing
> > routines into separate pieces (which is where at least some of these
> > came from), but here, they seem to have little value.
>
> That all makes sense to me; I'd also suggest we make the cache type the
> prefix, e.g.
>
> * icache_clean_pou
> * dcache_clean_inval_poc
> * caches_clean_inval_user_pou // D+I caches
>
> ... since then it's easier to read consistently, rather than having to
> search for the cache type midway through the name.

I'll fix that as well in v2.
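
Following that scheme, the v2 renames would look something like
(sketch, subject to bikeshedding):

"s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
"s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
"s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
"s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
"s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
"s/\b__clean_dcache_area_poc\b/dcache_clean_poc/g;"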

Thanks,
/fuad


>
> Thanks,
> Mark.
>
> >
> >
> > > ---
> > >  arch/arm64/include/asm/arch_gicv3.h |  2 +-
> > >  arch/arm64/include/asm/cacheflush.h | 36 +++++++++----------
> > >  arch/arm64/include/asm/efi.h        |  2 +-
> > >  arch/arm64/include/asm/kvm_mmu.h    |  6 ++--
> > >  arch/arm64/kernel/alternative.c     |  2 +-
> > >  arch/arm64/kernel/efi-entry.S       |  4 +--
> > >  arch/arm64/kernel/head.S            |  8 ++---
> > >  arch/arm64/kernel/hibernate.c       | 12 +++----
> > >  arch/arm64/kernel/idreg-override.c  |  2 +-
> > >  arch/arm64/kernel/image-vars.h      |  2 +-
> > >  arch/arm64/kernel/insn.c            |  2 +-
> > >  arch/arm64/kernel/kaslr.c           |  6 ++--
> > >  arch/arm64/kernel/machine_kexec.c   | 10 +++---
> > >  arch/arm64/kernel/smp.c             |  4 +--
> > >  arch/arm64/kernel/smp_spin_table.c  |  4 +--
> > >  arch/arm64/kernel/sys_compat.c      |  2 +-
> > >  arch/arm64/kvm/arm.c                |  2 +-
> > >  arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +--
> > >  arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
> > >  arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
> > >  arch/arm64/kvm/hyp/pgtable.c        |  4 +--
> > >  arch/arm64/lib/uaccess_flushcache.c |  4 +--
> > >  arch/arm64/mm/cache.S               | 56 ++++++++++++++---------------
> > >  arch/arm64/mm/flush.c               | 12 +++----
> > >  24 files changed, 95 insertions(+), 95 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
> > > index ed1cc9d8e6df..4b7ac9098e8f 100644
> > > --- a/arch/arm64/include/asm/arch_gicv3.h
> > > +++ b/arch/arm64/include/asm/arch_gicv3.h
> > > @@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
> > >  #define gic_write_lpir(v, c)           writeq_relaxed(v, c)
> > >
> > >  #define gic_flush_dcache_to_poc(a,l)   \
> > > -       __flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> > > +       __clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
> > >
> > >  #define gits_read_baser(c)             readq_relaxed(c)
> > >  #define gits_write_baser(v, c)         writeq_relaxed(v, c)
> > > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > > index 4b91d3530013..526eee4522eb 100644
> > > --- a/arch/arm64/include/asm/cacheflush.h
> > > +++ b/arch/arm64/include/asm/cacheflush.h
> > > @@ -34,54 +34,54 @@
> > >   *             - start  - virtual start address
> > >   *             - end    - virtual end address
> > >   *
> > > - *     __flush_icache_range(start, end)
> > > + *     __clean_inval_cache_pou(start, end)
> > >   *
> > >   *             Ensure coherency between the I-cache and the D-cache region to
> > >   *             the Point of Unification.
> > >   *
> > > - *     __flush_cache_user_range(start, end)
> > > + *     __clean_inval_cache_user_pou(start, end)
> > >   *
> > >   *             Ensure coherency between the I-cache and the D-cache region to
> > >   *             the Point of Unification.
> > >   *             Use only if the region might access user memory.
> > >   *
> > > - *     invalidate_icache_range(start, end)
> > > + *     __inval_icache_pou(start, end)
> > >   *
> > >   *             Invalidate I-cache region to the Point of Unification.
> > >   *
> > > - *     __flush_dcache_area(start, end)
> > > + *     __clean_inval_dcache_poc(start, end)
> > >   *
> > >   *             Clean and invalidate D-cache region to the Point of Coherence.
> > >   *
> > > - *     __inval_dcache_area(start, end)
> > > + *     __inval_dcache_poc(start, end)
> > >   *
> > >   *             Invalidate D-cache region to the Point of Coherence.
> > >   *
> > > - *     __clean_dcache_area_poc(start, end)
> > > + *     __clean_dcache_poc(start, end)
> > >   *
> > >   *             Clean D-cache region to the Point of Coherence.
> > >   *
> > > - *     __clean_dcache_area_pop(start, end)
> > > + *     __clean_dcache_pop(start, end)
> > >   *
> > >   *             Clean D-cache region to the Point of Persistence.
> > >   *
> > > - *     __clean_dcache_area_pou(start, end)
> > > + *     __clean_dcache_pou(start, end)
> > >   *
> > >   *             Clean D-cache region to the Point of Unification.
> > >   */
> > > -extern void __flush_icache_range(unsigned long start, unsigned long end);
> > > -extern void invalidate_icache_range(unsigned long start, unsigned long end);
> > > -extern void __flush_dcache_area(unsigned long start, unsigned long end);
> > > -extern void __inval_dcache_area(unsigned long start, unsigned long end);
> > > -extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
> > > -extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
> > > -extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
> > > -extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> > > +extern void __clean_inval_cache_pou(unsigned long start, unsigned long end);
> > > +extern void __inval_icache_pou(unsigned long start, unsigned long end);
> > > +extern void __clean_inval_dcache_poc(unsigned long start, unsigned long end);
> > > +extern void __inval_dcache_poc(unsigned long start, unsigned long end);
> > > +extern void __clean_dcache_poc(unsigned long start, unsigned long end);
> > > +extern void __clean_dcache_pop(unsigned long start, unsigned long end);
> > > +extern void __clean_dcache_pou(unsigned long start, unsigned long end);
> > > +extern long __clean_inval_cache_user_pou(unsigned long start, unsigned long end);
> > >  extern void sync_icache_aliases(unsigned long start, unsigned long end);
> > >
> > >  static inline void flush_icache_range(unsigned long start, unsigned long end)
> > >  {
> > > -       __flush_icache_range(start, end);
> > > +       __clean_inval_cache_pou(start, end);
> > >
> > >         /*
> > >          * IPI all online CPUs so that they undergo a context synchronization
> > > @@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
> > >  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
> > >  extern void flush_dcache_page(struct page *);
> > >
> > > -static __always_inline void __flush_icache_all(void)
> > > +static __always_inline void __clean_inval_all_icache_pou(void)
> > >  {
> > >         if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
> > >                 return;
> > > diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
> > > index 0ae2397076fd..d1e2a4bf8def 100644
> > > --- a/arch/arm64/include/asm/efi.h
> > > +++ b/arch/arm64/include/asm/efi.h
> > > @@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
> > >
> > >  static inline void efi_capsule_flush_cache_range(void *addr, int size)
> > >  {
> > > -       __flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> > > +       __clean_inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
> > >  }
> > >
> > >  #endif /* _ASM_EFI_H */
> > > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > > index 33293d5855af..29d2aa6f3940 100644
> > > --- a/arch/arm64/include/asm/kvm_mmu.h
> > > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > > @@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
> > >  struct kvm;
> > >
> > >  #define kvm_flush_dcache_to_poc(a,l)   \
> > > -       __flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> > > +       __clean_inval_dcache_poc((unsigned long)(a), (unsigned long)(a)+(l))
> > >
> > >  static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
> > >  {
> > > @@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
> > >  {
> > >         if (icache_is_aliasing()) {
> > >                 /* any kind of VIPT cache */
> > > -               __flush_icache_all();
> > > +               __clean_inval_all_icache_pou();
> > >         } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> > >                 /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
> > >                 void *va = page_address(pfn_to_page(pfn));
> > >
> > > -               invalidate_icache_range((unsigned long)va,
> > > +               __inval_icache_pou((unsigned long)va,
> > >                                         (unsigned long)va + size);
> > >         }
> > >  }
> > > diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> > > index c906d20c7b52..ea2d52fa9a0c 100644
> > > --- a/arch/arm64/kernel/alternative.c
> > > +++ b/arch/arm64/kernel/alternative.c
> > > @@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
> > >          */
> > >         if (!is_module) {
> > >                 dsb(ish);
> > > -               __flush_icache_all();
> > > +               __clean_inval_all_icache_pou();
> > >                 isb();
> > >
> > >                 /* Ignore ARM64_CB bit from feature mask */
> > > diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
> > > index 72e6a580290a..230506f460ec 100644
> > > --- a/arch/arm64/kernel/efi-entry.S
> > > +++ b/arch/arm64/kernel/efi-entry.S
> > > @@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
> > >          */
> > >         ldr     w1, =kernel_size
> > >         add     x1, x0, x1
> > > -       bl      __clean_dcache_area_poc
> > > +       bl      __clean_dcache_poc
> > >         ic      ialluis
> > >
> > >         /*
> > > @@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
> > >          */
> > >         adr     x0, 0f
> > >         adr     x1, 3f
> > > -       bl      __clean_dcache_area_poc
> > > +       bl      __clean_dcache_poc
> > >  0:
> > >         /* Turn off Dcache and MMU */
> > >         mrs     x0, CurrentEL
> > > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > > index 8df0ac8d9123..ea0447c5010a 100644
> > > --- a/arch/arm64/kernel/head.S
> > > +++ b/arch/arm64/kernel/head.S
> > > @@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
> > >                                                 // MMU off
> > >
> > >         add     x1, x0, #0x20                   // 4 x 8 bytes
> > > -       b       __inval_dcache_area             // tail call
> > > +       b       __inval_dcache_poc              // tail call
> > >  SYM_CODE_END(preserve_boot_args)
> > >
> > >  /*
> > > @@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> > >          */
> > >         adrp    x0, init_pg_dir
> > >         adrp    x1, init_pg_end
> > > -       bl      __inval_dcache_area
> > > +       bl      __inval_dcache_poc
> > >
> > >         /*
> > >          * Clear the init page tables.
> > > @@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> > >
> > >         adrp    x0, idmap_pg_dir
> > >         adrp    x1, idmap_pg_end
> > > -       bl      __inval_dcache_area
> > > +       bl      __inval_dcache_poc
> > >
> > >         adrp    x0, init_pg_dir
> > >         adrp    x1, init_pg_end
> > > -       bl      __inval_dcache_area
> > > +       bl      __inval_dcache_poc
> > >
> > >         ret     x28
> > >  SYM_FUNC_END(__create_page_tables)
> > > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > > index b40ddce71507..ec871b24fd5b 100644
> > > --- a/arch/arm64/kernel/hibernate.c
> > > +++ b/arch/arm64/kernel/hibernate.c
> > > @@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
> > >                 return -ENOMEM;
> > >
> > >         memcpy(page, src_start, length);
> > > -       __flush_icache_range((unsigned long)page, (unsigned long)page + length);
> > > +       __clean_inval_cache_pou((unsigned long)page, (unsigned long)page + length);
> > >         rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
> > >         if (rc)
> > >                 return rc;
> > > @@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
> > >                 ret = swsusp_save();
> > >         } else {
> > >                 /* Clean kernel core startup/idle code to PoC*/
> > > -               __flush_dcache_area((unsigned long)__mmuoff_data_start,
> > > +               __clean_inval_dcache_poc((unsigned long)__mmuoff_data_start,
> > >                                     (unsigned long)__mmuoff_data_end);
> > > -               __flush_dcache_area((unsigned long)__idmap_text_start,
> > > +               __clean_inval_dcache_poc((unsigned long)__idmap_text_start,
> > >                                     (unsigned long)__idmap_text_end);
> > >
> > >                 /* Clean kvm setup code to PoC? */
> > >                 if (el2_reset_needed()) {
> > > -                       __flush_dcache_area(
> > > +                       __clean_inval_dcache_poc(
> > >                                 (unsigned long)__hyp_idmap_text_start,
> > >                                 (unsigned long)__hyp_idmap_text_end);
> > > -                       __flush_dcache_area((unsigned long)__hyp_text_start,
> > > +                       __clean_inval_dcache_poc((unsigned long)__hyp_text_start,
> > >                                             (unsigned long)__hyp_text_end);
> > >                 }
> > >
> > > @@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
> > >          * The hibernate exit text contains a set of el2 vectors, that will
> > >          * be executed at el2 with the mmu off in order to reload hyp-stub.
> > >          */
> > > -       __flush_dcache_area((unsigned long)hibernate_exit,
> > > +       __clean_inval_dcache_poc((unsigned long)hibernate_exit,
> > >                             (unsigned long)hibernate_exit + exit_size);
> > >
> > >         /*
> > > diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
> > > index 3dd515baf526..6b4b5727f2db 100644
> > > --- a/arch/arm64/kernel/idreg-override.c
> > > +++ b/arch/arm64/kernel/idreg-override.c
> > > @@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
> > >
> > >         for (i = 0; i < ARRAY_SIZE(regs); i++) {
> > >                 if (regs[i]->override)
> > > -                       __flush_dcache_area((unsigned long)regs[i]->override,
> > > +                       __clean_inval_dcache_poc((unsigned long)regs[i]->override,
> > >                                             (unsigned long)regs[i]->override +
> > >                                             sizeof(*regs[i]->override));
> > >         }
> > > diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> > > index bcf3c2755370..14beda6a573d 100644
> > > --- a/arch/arm64/kernel/image-vars.h
> > > +++ b/arch/arm64/kernel/image-vars.h
> > > @@ -35,7 +35,7 @@ __efistub_strnlen             = __pi_strnlen;
> > >  __efistub_strcmp               = __pi_strcmp;
> > >  __efistub_strncmp              = __pi_strncmp;
> > >  __efistub_strrchr              = __pi_strrchr;
> > > -__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
> > > +__efistub___clean_dcache_poc = __pi___clean_dcache_poc;
> > >
> > >  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
> > >  __efistub___memcpy             = __pi_memcpy;
> > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> > > index 6c0de2f60ea9..11c7be09e305 100644
> > > --- a/arch/arm64/kernel/insn.c
> > > +++ b/arch/arm64/kernel/insn.c
> > > @@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
> > >
> > >         ret = aarch64_insn_write(tp, insn);
> > >         if (ret == 0)
> > > -               __flush_icache_range((uintptr_t)tp,
> > > +               __clean_inval_cache_pou((uintptr_t)tp,
> > >                                      (uintptr_t)tp + AARCH64_INSN_SIZE);
> > >
> > >         return ret;
> > > diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> > > index 49cccd03cb37..038a4cc7de93 100644
> > > --- a/arch/arm64/kernel/kaslr.c
> > > +++ b/arch/arm64/kernel/kaslr.c
> > > @@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
> > >          * we end up running with module randomization disabled.
> > >          */
> > >         module_alloc_base = (u64)_etext - MODULES_VSIZE;
> > > -       __flush_dcache_area((unsigned long)&module_alloc_base,
> > > +       __clean_inval_dcache_poc((unsigned long)&module_alloc_base,
> > >                             (unsigned long)&module_alloc_base +
> > >                                     sizeof(module_alloc_base));
> > >
> > > @@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
> > >         module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
> > >         module_alloc_base &= PAGE_MASK;
> > >
> > > -       __flush_dcache_area((unsigned long)&module_alloc_base,
> > > +       __clean_inval_dcache_poc((unsigned long)&module_alloc_base,
> > >                             (unsigned long)&module_alloc_base +
> > >                                     sizeof(module_alloc_base));
> > > -       __flush_dcache_area((unsigned long)&memstart_offset_seed,
> > > +       __clean_inval_dcache_poc((unsigned long)&memstart_offset_seed,
> > >                             (unsigned long)&memstart_offset_seed +
> > >                                     sizeof(memstart_offset_seed));
> > >
> > > diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> > > index 4cada9000acf..0e20a789b03e 100644
> > > --- a/arch/arm64/kernel/machine_kexec.c
> > > +++ b/arch/arm64/kernel/machine_kexec.c
> > > @@ -69,10 +69,10 @@ int machine_kexec_post_load(struct kimage *kimage)
> > >         kexec_image_info(kimage);
> > >
> > >         /* Flush the reloc_code in preparation for its execution. */
> > > -       __flush_dcache_area((unsigned long)reloc_code,
> > > +       __clean_inval_dcache_poc((unsigned long)reloc_code,
> > >                             (unsigned long)reloc_code +
> > >                                     arm64_relocate_new_kernel_size);
> > > -       invalidate_icache_range((uintptr_t)reloc_code,
> > > +       __inval_icache_pou((uintptr_t)reloc_code,
> > >                                 (uintptr_t)reloc_code +
> > >                                         arm64_relocate_new_kernel_size);
> > >
> > > @@ -108,7 +108,7 @@ static void kexec_list_flush(struct kimage *kimage)
> > >                 unsigned long addr;
> > >
> > >                 /* flush the list entries. */
> > > -               __flush_dcache_area((unsigned long)entry,
> > > +               __clean_inval_dcache_poc((unsigned long)entry,
> > >                                     (unsigned long)entry +
> > >                                             sizeof(kimage_entry_t));
> > >
> > > @@ -125,7 +125,7 @@ static void kexec_list_flush(struct kimage *kimage)
> > >                         break;
> > >                 case IND_SOURCE:
> > >                         /* flush the source pages. */
> > > -                       __flush_dcache_area(addr, addr + PAGE_SIZE);
> > > +                       __clean_inval_dcache_poc(addr, addr + PAGE_SIZE);
> > >                         break;
> > >                 case IND_DESTINATION:
> > >                         break;
> > > @@ -152,7 +152,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
> > >                         kimage->segment[i].memsz,
> > >                         kimage->segment[i].memsz /  PAGE_SIZE);
> > >
> > > -               __flush_dcache_area(
> > > +               __clean_inval_dcache_poc(
> > >                         (unsigned long)phys_to_virt(kimage->segment[i].mem),
> > >                         (unsigned long)phys_to_virt(kimage->segment[i].mem) +
> > >                                 kimage->segment[i].memsz);
> > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > > index 5fcdee331087..2044210ed15a 100644
> > > --- a/arch/arm64/kernel/smp.c
> > > +++ b/arch/arm64/kernel/smp.c
> > > @@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> > >         secondary_data.task = idle;
> > >         secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
> > >         update_cpu_boot_status(CPU_MMU_OFF);
> > > -       __flush_dcache_area((unsigned long)&secondary_data,
> > > +       __clean_inval_dcache_poc((unsigned long)&secondary_data,
> > >                             (unsigned long)&secondary_data +
> > >                                     sizeof(secondary_data));
> > >
> > > @@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
> > >         pr_crit("CPU%u: failed to come online\n", cpu);
> > >         secondary_data.task = NULL;
> > >         secondary_data.stack = NULL;
> > > -       __flush_dcache_area((unsigned long)&secondary_data,
> > > +       __clean_inval_dcache_poc((unsigned long)&secondary_data,
> > >                             (unsigned long)&secondary_data +
> > >                                     sizeof(secondary_data));
> > >         status = READ_ONCE(secondary_data.status);
> > > diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> > > index 58d804582a35..a946ccb9791e 100644
> > > --- a/arch/arm64/kernel/smp_spin_table.c
> > > +++ b/arch/arm64/kernel/smp_spin_table.c
> > > @@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
> > >         unsigned long size = sizeof(secondary_holding_pen_release);
> > >
> > >         secondary_holding_pen_release = val;
> > > -       __flush_dcache_area((unsigned long)start, (unsigned long)start + size);
> > > +       __clean_inval_dcache_poc((unsigned long)start, (unsigned long)start + size);
> > >  }
> > >
> > >
> > > @@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
> > >          * the boot protocol.
> > >          */
> > >         writeq_relaxed(pa_holding_pen, release_addr);
> > > -       __flush_dcache_area((__force unsigned long)release_addr,
> > > +       __clean_inval_dcache_poc((__force unsigned long)release_addr,
> > >                             (__force unsigned long)release_addr +
> > >                                     sizeof(*release_addr));
> > >
> > > diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
> > > index 265fe3eb1069..fdd415f8d841 100644
> > > --- a/arch/arm64/kernel/sys_compat.c
> > > +++ b/arch/arm64/kernel/sys_compat.c
> > > @@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
> > >                         dsb(ish);
> > >                 }
> > >
> > > -               ret = __flush_cache_user_range(start, start + chunk);
> > > +               ret = __clean_inval_cache_user_pou(start, start + chunk);
> > >                 if (ret)
> > >                         return ret;
> > >
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index 1cb39c0803a4..edeca89405ff 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
> > >                 if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
> > >                         stage2_unmap_vm(vcpu->kvm);
> > >                 else
> > > -                       __flush_icache_all();
> > > +                       __clean_inval_all_icache_pou();
> > >         }
> > >
> > >         vcpu_reset_hcr(vcpu);
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
> > > index 36cef6915428..a906dd596e66 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/cache.S
> > > +++ b/arch/arm64/kvm/hyp/nvhe/cache.S
> > > @@ -7,7 +7,7 @@
> > >  #include <asm/assembler.h>
> > >  #include <asm/alternative.h>
> > >
> > > -SYM_FUNC_START_PI(__flush_dcache_area)
> > > +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
> > >         dcache_by_line_op civac, sy, x0, x1, x2, x3
> > >         ret
> > > -SYM_FUNC_END_PI(__flush_dcache_area)
> > > +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> > > index 5dffe928f256..a16719f5068d 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> > > +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> > > @@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
> > >         for (i = 0; i < hyp_nr_cpus; i++) {
> > >                 params = per_cpu_ptr(&kvm_init_params, i);
> > >                 params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
> > > -               __flush_dcache_area((unsigned long)params,
> > > +               __clean_inval_dcache_poc((unsigned long)params,
> > >                                     (unsigned long)params + sizeof(*params));
> > >         }
> > >  }
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> > > index 83dc3b271bc5..184c9c7c13bd 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> > > +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> > > @@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
> > >          * you should be running with VHE enabled.
> > >          */
> > >         if (icache_is_vpipt())
> > > -               __flush_icache_all();
> > > +               __clean_inval_all_icache_pou();
> > >
> > >         __tlb_switch_to_host(&cxt);
> > >  }
> > > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > > index 10d2f04013d4..fb2613f458de 100644
> > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > @@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> > >         if (need_flush) {
> > >                 kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
> > >
> > > -               __flush_dcache_area((unsigned long)pte_follow,
> > > +               __clean_inval_dcache_poc((unsigned long)pte_follow,
> > >                                     (unsigned long)pte_follow +
> > >                                             kvm_granule_size(level));
> > >         }
> > > @@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> > >                 return 0;
> > >
> > >         pte_follow = kvm_pte_follow(pte, mm_ops);
> > > -       __flush_dcache_area((unsigned long)pte_follow,
> > > +       __clean_inval_dcache_poc((unsigned long)pte_follow,
> > >                             (unsigned long)pte_follow +
> > >                                     kvm_granule_size(level));
> > >         return 0;
> > > diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
> > > index 62ea989effe8..b1a6d9823864 100644
> > > --- a/arch/arm64/lib/uaccess_flushcache.c
> > > +++ b/arch/arm64/lib/uaccess_flushcache.c
> > > @@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
> > >          * barrier to order the cache maintenance against the memcpy.
> > >          */
> > >         memcpy(dst, src, cnt);
> > > -       __clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
> > > +       __clean_dcache_pop((unsigned long)dst, (unsigned long)dst + cnt);
> > >  }
> > >  EXPORT_SYMBOL_GPL(memcpy_flushcache);
> > >
> > > @@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
> > >         rc = raw_copy_from_user(to, from, n);
> > >
> > >         /* See above */
> > > -       __clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
> > > +       __clean_dcache_pop((unsigned long)to, (unsigned long)to + n - rc);
> > >         return rc;
> > >  }
> > > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > > index d8434e57fab3..2df7212de799 100644
> > > --- a/arch/arm64/mm/cache.S
> > > +++ b/arch/arm64/mm/cache.S
> > > @@ -15,7 +15,7 @@
> > >  #include <asm/asm-uaccess.h>
> > >
> > >  /*
> > > - *     __flush_cache_range(start,end) [needs_uaccess]
> > > + *     __clean_inval_cache_pou_macro(start,end) [needs_uaccess]
> > >   *
> > >   *     Ensure that the I and D caches are coherent within specified region.
> > >   *     This is typically used when code has been written to a memory region,
> > > @@ -25,7 +25,7 @@
> > >   *     - end           - virtual end address of region
> > >   *     - needs_uaccess - (macro parameter) might access user space memory
> > >   */
> > > -.macro __flush_cache_range, needs_uaccess
> > > +.macro __clean_inval_cache_pou_macro, needs_uaccess
> > >         .if     \needs_uaccess
> > >         uaccess_ttbr0_enable x2, x3, x4
> > >         .endif
> > > @@ -77,12 +77,12 @@ alternative_else_nop_endif
> > >   *     - start   - virtual start address of region
> > >   *     - end     - virtual end address of region
> > >   */
> > > -SYM_FUNC_START(__flush_icache_range)
> > > -       __flush_cache_range needs_uaccess=0
> > > -SYM_FUNC_END(__flush_icache_range)
> > > +SYM_FUNC_START(__clean_inval_cache_pou)
> > > +       __clean_inval_cache_pou_macro needs_uaccess=0
> > > +SYM_FUNC_END(__clean_inval_cache_pou)
> > >
> > >  /*
> > > - *     __flush_cache_user_range(start,end)
> > > + *     __clean_inval_cache_user_pou(start,end)
> > >   *
> > >   *     Ensure that the I and D caches are coherent within specified region.
> > >   *     This is typically used when code has been written to a memory region,
> > > @@ -91,19 +91,19 @@ SYM_FUNC_END(__flush_icache_range)
> > >   *     - start   - virtual start address of region
> > >   *     - end     - virtual end address of region
> > >   */
> > > -SYM_FUNC_START(__flush_cache_user_range)
> > > -       __flush_cache_range needs_uaccess=1
> > > -SYM_FUNC_END(__flush_cache_user_range)
> > > +SYM_FUNC_START(__clean_inval_cache_user_pou)
> > > +       __clean_inval_cache_pou_macro needs_uaccess=1
> > > +SYM_FUNC_END(__clean_inval_cache_user_pou)
> > >
> > >  /*
> > > - *     invalidate_icache_range(start,end)
> > > + *     __inval_icache_pou(start,end)
> > >   *
> > >   *     Ensure that the I cache is invalid within specified region.
> > >   *
> > >   *     - start   - virtual start address of region
> > >   *     - end     - virtual end address of region
> > >   */
> > > -SYM_FUNC_START(invalidate_icache_range)
> > > +SYM_FUNC_START(__inval_icache_pou)
> > >  alternative_if ARM64_HAS_CACHE_DIC
> > >         isb
> > >         ret
> > > @@ -111,10 +111,10 @@ alternative_else_nop_endif
> > >
> > >         invalidate_icache_by_line x0, x1, x2, x3, 0, 0f
> > >         ret
> > > -SYM_FUNC_END(invalidate_icache_range)
> > > +SYM_FUNC_END(__inval_icache_pou)
> > >
> > >  /*
> > > - *     __flush_dcache_area(start, end)
> > > + *     __clean_inval_dcache_poc(start, end)
> > >   *
> > >   *     Ensure that any D-cache lines for the interval [start, end)
> > >   *     are cleaned and invalidated to the PoC.
> > > @@ -122,13 +122,13 @@ SYM_FUNC_END(invalidate_icache_range)
> > >   *     - start   - virtual start address of region
> > >   *     - end     - virtual end address of region
> > >   */
> > > -SYM_FUNC_START_PI(__flush_dcache_area)
> > > +SYM_FUNC_START_PI(__clean_inval_dcache_poc)
> > >         dcache_by_line_op civac, sy, x0, x1, x2, x3
> > >         ret
> > > -SYM_FUNC_END_PI(__flush_dcache_area)
> > > +SYM_FUNC_END_PI(__clean_inval_dcache_poc)
> > >
> > >  /*
> > > - *     __clean_dcache_area_pou(start, end)
> > > + *     __clean_dcache_pou(start, end)
> > >   *
> > >   *     Ensure that any D-cache lines for the interval [start, end)
> > >   *     are cleaned to the PoU.
> > > @@ -136,17 +136,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
> > >   *     - start   - virtual start address of region
> > >   *     - end     - virtual end address of region
> > >   */
> > > -SYM_FUNC_START(__clean_dcache_area_pou)
> > > +SYM_FUNC_START(__clean_dcache_pou)
> > >  alternative_if ARM64_HAS_CACHE_IDC
> > >         dsb     ishst
> > >         ret
> > >  alternative_else_nop_endif
> > >         dcache_by_line_op cvau, ish, x0, x1, x2, x3
> > >         ret
> > > -SYM_FUNC_END(__clean_dcache_area_pou)
> > > +SYM_FUNC_END(__clean_dcache_pou)
> > >
> > >  /*
> > > - *     __inval_dcache_area(start, end)
> > > + *     __inval_dcache_poc(start, end)
> > >   *
> > >   *     Ensure that any D-cache lines for the interval [start, end)
> > >   *     are invalidated. Any partial lines at the ends of the interval are
> > > @@ -156,7 +156,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
> > >   *     - end     - kernel end address of region
> > >   */
> > >  SYM_FUNC_START_LOCAL(__dma_inv_area)
> > > -SYM_FUNC_START_PI(__inval_dcache_area)
> > > +SYM_FUNC_START_PI(__inval_dcache_poc)
> > >         /* FALLTHROUGH */
> > >
> > >  /*
> > > @@ -181,11 +181,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
> > >         b.lo    2b
> > >         dsb     sy
> > >         ret
> > > -SYM_FUNC_END_PI(__inval_dcache_area)
> > > +SYM_FUNC_END_PI(__inval_dcache_poc)
> > >  SYM_FUNC_END(__dma_inv_area)
> > >
> > >  /*
> > > - *     __clean_dcache_area_poc(start, end)
> > > + *     __clean_dcache_poc(start, end)
> > >   *
> > >   *     Ensure that any D-cache lines for the interval [start, end)
> > >   *     are cleaned to the PoC.
> > > @@ -194,7 +194,7 @@ SYM_FUNC_END(__dma_inv_area)
> > >   *     - end     - virtual end address of region
> > >   */
> > >  SYM_FUNC_START_LOCAL(__dma_clean_area)
> > > -SYM_FUNC_START_PI(__clean_dcache_area_poc)
> > > +SYM_FUNC_START_PI(__clean_dcache_poc)
> > >         /* FALLTHROUGH */
> > >
> > >  /*
> > > @@ -204,11 +204,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
> > >   */
> > >         dcache_by_line_op cvac, sy, x0, x1, x2, x3
> > >         ret
> > > -SYM_FUNC_END_PI(__clean_dcache_area_poc)
> > > +SYM_FUNC_END_PI(__clean_dcache_poc)
> > >  SYM_FUNC_END(__dma_clean_area)
> > >
> > >  /*
> > > - *     __clean_dcache_area_pop(start, end)
> > > + *     __clean_dcache_pop(start, end)
> > >   *
> > >   *     Ensure that any D-cache lines for the interval [start, end)
> > >   *     are cleaned to the PoP.
> > > @@ -216,13 +216,13 @@ SYM_FUNC_END(__dma_clean_area)
> > >   *     - start   - virtual start address of region
> > >   *     - end     - virtual end address of region
> > >   */
> > > -SYM_FUNC_START_PI(__clean_dcache_area_pop)
> > > +SYM_FUNC_START_PI(__clean_dcache_pop)
> > >         alternative_if_not ARM64_HAS_DCPOP
> > > -       b       __clean_dcache_area_poc
> > > +       b       __clean_dcache_poc
> > >         alternative_else_nop_endif
> > >         dcache_by_line_op cvap, sy, x0, x1, x2, x3
> > >         ret
> > > -SYM_FUNC_END_PI(__clean_dcache_area_pop)
> > > +SYM_FUNC_END_PI(__clean_dcache_pop)
> > >
> > >  /*
> > >   *     __dma_flush_area(start, size)
> > > diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> > > index 143f625e7727..005b92148252 100644
> > > --- a/arch/arm64/mm/flush.c
> > > +++ b/arch/arm64/mm/flush.c
> > > @@ -17,14 +17,14 @@
> > >  void sync_icache_aliases(unsigned long start, unsigned long end)
> > >  {
> > >         if (icache_is_aliasing()) {
> > > -               __clean_dcache_area_pou(start, end);
> > > -               __flush_icache_all();
> > > +               __clean_dcache_pou(start, end);
> > > +               __clean_inval_all_icache_pou();
> > >         } else {
> > >                 /*
> > >                  * Don't issue kick_all_cpus_sync() after I-cache invalidation
> > >                  * for user mappings.
> > >                  */
> > > -               __flush_icache_range(start, end);
> > > +               __clean_inval_cache_pou(start, end);
> > >         }
> > >  }
> > >
> > > @@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
> > >  /*
> > >   * Additional functions defined in assembly.
> > >   */
> > > -EXPORT_SYMBOL(__flush_icache_range);
> > > +EXPORT_SYMBOL(__clean_inval_cache_pou);
> > >
> > >  #ifdef CONFIG_ARCH_HAS_PMEM_API
> > >  void arch_wb_cache_pmem(void *addr, size_t size)
> > >  {
> > >         /* Ensure order against any prior non-cacheable writes */
> > >         dmb(osh);
> > > -       __clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
> > > +       __clean_dcache_pop((unsigned long)addr, (unsigned long)addr + size);
> > >  }
> > >  EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
> > >
> > >  void arch_invalidate_pmem(void *addr, size_t size)
> > >  {
> > > -       __inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> > > +       __inval_dcache_poc((unsigned long)addr, (unsigned long)addr + size);
> > >  }
> > >  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
> > >  #endif
> > > --
> > > 2.31.1.607.g51e8a6a459-goog
> > >


* Re: [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions
  2021-05-12  9:51       ` Marc Zyngier
@ 2021-05-12 10:00         ` Mark Rutland
  0 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2021-05-12 10:00 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Ard Biesheuvel, Fuad Tabba, Linux ARM, Will Deacon,
	Catalin Marinas, James Morse, Alexandru Elisei, Suzuki K Poulose

On Wed, May 12, 2021 at 10:51:04AM +0100, Marc Zyngier wrote:
> On 2021-05-11 16:49, Mark Rutland wrote:
> > On Tue, May 11, 2021 at 05:09:18PM +0200, Ard Biesheuvel wrote:
> > > On Tue, 11 May 2021 at 16:43, Fuad Tabba <tabba@google.com> wrote:
> > > >
> > > > Although naming across the codebase isn't that consistent, it
> > > > tends to follow certain patterns. Moreover, the term "flush"
> > > > isn't defined in the Arm Architecture Reference Manual, and might
> > > > be interpreted to mean clean, invalidate, or both for a cache.
> > > >
> > > > Rename arm64-internal functions to make the naming internally
> > > > consistent, as well as making it consistent with the Arm ARM, by
> > > > clarifying whether the operation is a clean, invalidate, or both.
> > > > Also specify which point the operation applies to, i.e., to the
> > > > point of unification (PoU), coherence (PoC), or persistence
> > > > (PoP).
> > > >
> > > > This commit applies the following sed transformation to all files
> > > > under arch/arm64:
> > > >
> > > > "s/\b__flush_cache_range\b/__clean_inval_cache_pou_macro/g;"\
> > > > "s/\b__flush_icache_range\b/__clean_inval_cache_pou/g;"\
> > 
> > For icaches, a "flush" is just an invalidate, so this doesn't need
> > "clean".
> > 
> > > > "s/\binvalidate_icache_range\b/__inval_icache_pou/g;"\
> > > > "s/\b__flush_dcache_area\b/__clean_inval_dcache_poc/g;"\
> > > > "s/\b__inval_dcache_area\b/__inval_dcache_poc/g;"\
> > > > "s/__clean_dcache_area_poc\b/__clean_dcache_poc/g;"\
> > > > "s/\b__clean_dcache_area_pop\b/__clean_dcache_pop/g;"\
> > > > "s/\b__clean_dcache_area_pou\b/__clean_dcache_pou/g;"\
> > > > "s/\b__flush_cache_user_range\b/__clean_inval_cache_user_pou/g;"\
> > > > "s/\b__flush_icache_all\b/__clean_inval_all_icache_pou/g;"
> > 
> > Likewise here.
> > 
> > > >
> > > > Note that __clean_dcache_area_poc is deliberately missing a word
> > > > boundary check to match the efistub symbols in image-vars.h.
> > > >
> > > > No functional change intended.
> > > >
> > > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > 
> > > I am a big fan of this change: code is so much easier to read if the
> > > names of subroutines match their intent.
> > 
> > Likewise!
> > 
> > > I would suggest, though, that we get rid of all the leading
> > > underscores while at it: we often use them when refactoring existing
> > > routines into separate pieces (which is where at least some of these
> > > came from), but here, they seem to have little value.
> > 
> > That all makes sense to me; I'd also suggest we make the cache type the
> > prefix, e.g.
> > 
> > * icache_clean_pou
> 
> I guess you meant "icache_inval_pou", right, as per your comment above?

Yes; whoops!

Mark.


* Re: [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions
  2021-05-12 10:00       ` Fuad Tabba
@ 2021-05-12 10:04         ` Mark Rutland
  0 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2021-05-12 10:04 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: Ard Biesheuvel, Linux ARM, Will Deacon, Catalin Marinas,
	Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose

On Wed, May 12, 2021 at 11:00:00AM +0100, Fuad Tabba wrote:
> Hi Mark,
> 
> > > > "s/\b__flush_icache_range\b/__clean_inval_cache_pou/g;"\
> >
> > For icaches, a "flush" is just an invalidate, so this doesn't need
> > "clean".
> 
> This is one of the reasons for this patch. Although you are correct
> when it comes to what the name __flush_icache_range implies, it doesn't
> do only that: it flushes both the I and D caches. Therefore, the new
> naming scheme with the cache type as a prefix that you suggest below
> should make that clearer.

Ah; sorry for the bad feedback, that all makes sense to me, then!
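
For the record, with the cache type as the prefix, the arm64-internal
set might end up looking something like the below -- purely a sketch of
the shape, assuming the (start, end) convention from this series, and
not a request for these exact names:

/*
 * Hypothetical mapping (v1 name -> prefixed name):
 *
 *	invalidate_icache_range		-> icache_inval_pou
 *	__flush_icache_all		-> icache_inval_all_pou
 *	__clean_dcache_area_pou		-> dcache_clean_pou
 *	__clean_dcache_area_poc		-> dcache_clean_poc
 *	__clean_dcache_area_pop		-> dcache_clean_pop
 *	__inval_dcache_area		-> dcache_inval_poc
 *	__flush_dcache_area		-> dcache_clean_inval_poc
 *	__flush_icache_range		-> caches_clean_inval_pou
 *	__flush_cache_user_range	-> caches_clean_inval_user_pou
 */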

I look forward to v2. :)

Mark.


* Re: [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range
  2021-05-12  9:59       ` Mark Rutland
@ 2021-05-12 10:29         ` Fuad Tabba
  2021-05-12 10:53           ` Mark Rutland
  0 siblings, 1 reply; 32+ messages in thread
From: Fuad Tabba @ 2021-05-12 10:29 UTC (permalink / raw)
  To: Mark Rutland
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, Ard Biesheuvel,
	James Morse, Alexandru Elisei, Suzuki K Poulose

Hi Mark,

On Wed, May 12, 2021 at 10:59 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Wed, May 12, 2021 at 09:52:28AM +0100, Fuad Tabba wrote:
> > Hi Mark,
> >
> > > > No functional change intended.
> > >
> > > There is a performance change here, since the existing
> > > `__flush_cache_user_range` takes IDC and DIC into account, whereas
> > > `invalidate_icache_by_line` does not.
> >
> > You're right. There is a performance change in this patch and a couple
> > of the others, which I will note in v2. However, I don't think that
> > this patch changes the behavior when it comes to IDC and DIC, does it?
>
> It shouldn't be a functional problem, but it means that the new
> `__flush_icache_range` will always perform redundant I-cache maintenance
> rather than skipping this when the cpu has DIC=1.

Sorry, but I can't quite see how this patch is making a difference in
that regard. The existing code has __flush_icache_range fall through to
__flush_cache_user_range, where the alternative_if
ARM64_HAS_CACHE_{IDC,DIC} are invoked.

In this patch, __flush_icache_range and __flush_cache_user_range share
the same code via the macro, where the alternative_ifs and branches
over invalidate_icache_by_line are still there and behave the same:
the macro jumps to 8 if ARM64_HAS_CACHE_DIC, avoiding any redundant
cache maintenance.

Am I missing something else?

> It would be nice if we could structure this to take DIC into account
> either in the new `__flush_icache_range`, or in the
> `invalidate_icache_by_line` helper.
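
Something like the below, perhaps? A rough sketch only -- the numeric
labels are made up here, and the callers' existing DIC checks would then
become redundant:

	.macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
alternative_if ARM64_HAS_CACHE_DIC
	isb				// DIC=1: no I-cache invalidation needed
	b	9996f
alternative_else_nop_endif
	icache_line_size \tmp1, \tmp2
	sub	\tmp2, \tmp1, #1
	bic	\tmp2, \start, \tmp2
9997:
	.if	\needs_uaccess
USER(\label, ic	ivau, \tmp2)		// invalidate I line PoU
	.else
	ic	ivau, \tmp2
	.endif
	add	\tmp2, \tmp2, \tmp1
	cmp	\tmp2, \end
	b.lo	9997b
	dsb	ish
	isb
9996:
	.endm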

> > > Arguably similar is true in `swsusp_arch_suspend_exit`, but for that
> > > we could add a comment and always use `DC CIVAC`.
> >
> > I can do that in v2 as well.
>
> A separate patch for `swsusp_arch_suspend_exit` would be great, since
> that is something we should backport to stable as a fix.

Will do.
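
As a sketch (the x4 register follows the existing copy loop, but treat
the exact context as illustrative), that fix would boil down to:

	/* Clean + invalidate to the PoC unconditionally: correct on CPUs
	 * affected by ARM64_WORKAROUND_CLEAN_CACHE, and merely an extra
	 * invalidate on CPUs that aren't. */
	dc	civac, x4		// was: dc cvau, x4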

Thanks,
/fuad

> Thanks,
> Mark.
>
> > > > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > > > Reported-by: Will Deacon <will@kernel.org>
> > > > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > > ---
> > > >  arch/arm64/include/asm/assembler.h | 13 ++++--
> > > >  arch/arm64/mm/cache.S              | 64 +++++++++++++++++++++---------
> > > >  2 files changed, 54 insertions(+), 23 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > > > index 8418c1bd8f04..6ff7a3a3b238 100644
> > > > --- a/arch/arm64/include/asm/assembler.h
> > > > +++ b/arch/arm64/include/asm/assembler.h
> > > > @@ -426,16 +426,21 @@ alternative_endif
> > > >   * Macro to perform an instruction cache maintenance for the interval
> > > >   * [start, end)
> > > >   *
> > > > - *   start, end:     virtual addresses describing the region
> > > > - *   label:          A label to branch to on user fault.
> > > > - *   Corrupts:       tmp1, tmp2
> > > > + *   start, end:     virtual addresses describing the region
> > > > + *   needs_uaccess:  might access user space memory
> > > > + *   label:          label to branch to on user fault (if needs_uaccess)
> > > > + *   Corrupts:       tmp1, tmp2
> > > >   */
> > > > -     .macro invalidate_icache_by_line start, end, tmp1, tmp2, label
> > > > +     .macro invalidate_icache_by_line start, end, tmp1, tmp2, needs_uaccess, label
> > > >       icache_line_size \tmp1, \tmp2
> > > >       sub     \tmp2, \tmp1, #1
> > > >       bic     \tmp2, \start, \tmp2
> > > >  9997:
> > > > +     .if     \needs_uaccess
> > > >  USER(\label, ic      ivau, \tmp2)                    // invalidate I line PoU
> > > > +     .else
> > > > +     ic      ivau, \tmp2
> > > > +     .endif
> > > >       add     \tmp2, \tmp2, \tmp1
> > > >       cmp     \tmp2, \end
> > > >       b.lo    9997b
> > > > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > > > index 2d881f34dd9d..092f73acdf9a 100644
> > > > --- a/arch/arm64/mm/cache.S
> > > > +++ b/arch/arm64/mm/cache.S
> > > > @@ -15,30 +15,20 @@
> > > >  #include <asm/asm-uaccess.h>
> > > >
> > > >  /*
> > > > - *   flush_icache_range(start,end)
> > > > + *   __flush_cache_range(start,end) [needs_uaccess]
> > > >   *
> > > >   *   Ensure that the I and D caches are coherent within specified region.
> > > >   *   This is typically used when code has been written to a memory region,
> > > >   *   and will be executed.
> > > >   *
> > > > - *   - start   - virtual start address of region
> > > > - *   - end     - virtual end address of region
> > > > + *   - start         - virtual start address of region
> > > > + *   - end           - virtual end address of region
> > > > + *   - needs_uaccess - (macro parameter) might access user space memory
> > > >   */
> > > > -SYM_FUNC_START(__flush_icache_range)
> > > > -     /* FALLTHROUGH */
> > > > -
> > > > -/*
> > > > - *   __flush_cache_user_range(start,end)
> > > > - *
> > > > - *   Ensure that the I and D caches are coherent within specified region.
> > > > - *   This is typically used when code has been written to a memory region,
> > > > - *   and will be executed.
> > > > - *
> > > > - *   - start   - virtual start address of region
> > > > - *   - end     - virtual end address of region
> > > > - */
> > > > -SYM_FUNC_START(__flush_cache_user_range)
> > > > +.macro       __flush_cache_range, needs_uaccess
> > > > +     .if     \needs_uaccess
> > > >       uaccess_ttbr0_enable x2, x3, x4
> > > > +     .endif
> > > >  alternative_if ARM64_HAS_CACHE_IDC
> > > >       dsb     ishst
> > > >       b       7f
> > > > @@ -47,7 +37,11 @@ alternative_else_nop_endif
> > > >       sub     x3, x2, #1
> > > >       bic     x4, x0, x3
> > > >  1:
> > > > +     .if     \needs_uaccess
> > > >  user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > > > +     .else
> > > > +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > > > +     .endif
> > > >       add     x4, x4, x2
> > > >       cmp     x4, x1
> > > >       b.lo    1b
> > > > @@ -58,15 +52,47 @@ alternative_if ARM64_HAS_CACHE_DIC
> > > >       isb
> > > >       b       8f
> > > >  alternative_else_nop_endif
> > > > -     invalidate_icache_by_line x0, x1, x2, x3, 9f
> > > > +     invalidate_icache_by_line x0, x1, x2, x3, \needs_uaccess, 9f
> > > >  8:   mov     x0, #0
> > > >  1:
> > > > +     .if     \needs_uaccess
> > > >       uaccess_ttbr0_disable x1, x2
> > > > +     .endif
> > > >       ret
> > > > +
> > > > +     .if     \needs_uaccess
> > > >  9:
> > > >       mov     x0, #-EFAULT
> > > >       b       1b
> > > > +     .endif
> > > > +.endm
> > > > +
> > > > +/*
> > > > + *   flush_icache_range(start,end)
> > > > + *
> > > > + *   Ensure that the I and D caches are coherent within specified region.
> > > > + *   This is typically used when code has been written to a memory region,
> > > > + *   and will be executed.
> > > > + *
> > > > + *   - start   - virtual start address of region
> > > > + *   - end     - virtual end address of region
> > > > + */
> > > > +SYM_FUNC_START(__flush_icache_range)
> > > > +     __flush_cache_range needs_uaccess=0
> > > >  SYM_FUNC_END(__flush_icache_range)
> > > > +
> > > > +/*
> > > > + *   __flush_cache_user_range(start,end)
> > > > + *
> > > > + *   Ensure that the I and D caches are coherent within specified region.
> > > > + *   This is typically used when code has been written to a memory region,
> > > > + *   and will be executed.
> > > > + *
> > > > + *   - start   - virtual start address of region
> > > > + *   - end     - virtual end address of region
> > > > + */
> > > > +SYM_FUNC_START(__flush_cache_user_range)
> > > > +     __flush_cache_range needs_uaccess=1
> > > >  SYM_FUNC_END(__flush_cache_user_range)
> > > >
> > > >  /*
> > > > @@ -86,7 +112,7 @@ alternative_else_nop_endif
> > > >
> > > >       uaccess_ttbr0_enable x2, x3, x4
> > > >
> > > > -     invalidate_icache_by_line x0, x1, x2, x3, 2f
> > > > +     invalidate_icache_by_line x0, x1, x2, x3, 1, 2f
> > > >       mov     x0, xzr
> > > >  1:
> > > >       uaccess_ttbr0_disable x1, x2
> > > > --
> > > > 2.31.1.607.g51e8a6a459-goog
> > > >


* Re: [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range
  2021-05-12 10:29         ` Fuad Tabba
@ 2021-05-12 10:53           ` Mark Rutland
  0 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2021-05-12 10:53 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, Ard Biesheuvel,
	James Morse, Alexandru Elisei, Suzuki K Poulose

On Wed, May 12, 2021 at 11:29:53AM +0100, Fuad Tabba wrote:
> Hi Mark,
> 
> On Wed, May 12, 2021 at 10:59 AM Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > On Wed, May 12, 2021 at 09:52:28AM +0100, Fuad Tabba wrote:
> > > Hi Mark,
> > >
> > > > > No functional change intended.
> > > >
> > > > There is a performance change here, since the existing
> > > > `__flush_cache_user_range` takes IDC and DIC into account, whereas
> > > > `invalidate_icache_by_line` does not.
> > >
> > > You're right. There is a performance change in this patch and a couple
> > > of the others, which I will note in v2. However, I don't think that
> > > this patch changes the behavior when it comes to IDC and DIC, does it?
> >
> > It shouldn't be a functional problem, but it means that the new
> > `__flush_icache_range` will always perform redundant I-cache maintenance
> > rather than skipping this when the cpu has DIC=1.
> 
> Sorry, but I can't quite see how this patch is making a difference in
> that regard. The existing code has __flush_icache_range fall through to
> __flush_cache_user_range, where the alternative_if
> ARM64_HAS_CACHE_{IDC,DIC} are invoked.
> 
> In this patch, __flush_icache_range and __flush_cache_user_range share
> the same code via the macro, where the alternative_ifs and branches
> over invalidate_icache_by_line are still there and behave the same:
> the macro jumps to 8 if ARM64_HAS_CACHE_DIC, avoiding any redundant
> cache maintenance.
> 
> Am I missing something else?

No; you're absolutely right. I had misread the patch and thought the
IDC/DIC parts didn't go into the common macro. That all looks fine.

Sorry again for the noise.

Thanks,
Mark.


end of thread

Thread overview: 32+ messages
2021-05-11 14:42 [PATCH v1 00/13] Tidy up cache.S Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 01/13] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
2021-05-11 15:22   ` Mark Rutland
2021-05-12  8:52     ` Fuad Tabba
2021-05-12  9:59       ` Mark Rutland
2021-05-12 10:29         ` Fuad Tabba
2021-05-12 10:53           ` Mark Rutland
2021-05-11 16:53   ` Robin Murphy
2021-05-12  8:57     ` Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 02/13] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
2021-05-11 15:34   ` Mark Rutland
2021-05-12  9:35     ` Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 03/13] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
2021-05-11 14:53   ` Ard Biesheuvel
2021-05-12  9:45     ` Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 04/13] arm64: Move documentation of dcache_by_line_op Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 05/13] arm64: __inval_dcache_area to take end parameter instead of size Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 06/13] arm64: dcache_by_line_op " Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 07/13] arm64: __flush_dcache_area " Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 08/13] arm64: __clean_dcache_area_poc " Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 09/13] arm64: __clean_dcache_area_pop " Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 10/13] arm64: __clean_dcache_area_pou " Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 11/13] arm64: sync_icache_aliases " Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 12/13] arm64: Fix cache maintenance function comments Fuad Tabba
2021-05-11 14:42 ` [PATCH v1 13/13] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
2021-05-11 15:09   ` Ard Biesheuvel
2021-05-11 15:49     ` Mark Rutland
2021-05-12  9:51       ` Marc Zyngier
2021-05-12 10:00         ` Mark Rutland
2021-05-12 10:00       ` Fuad Tabba
2021-05-12 10:04         ` Mark Rutland
2021-05-12  9:56     ` Fuad Tabba
