linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v3 00/18] Tidy up cache.S
@ 2021-05-20 12:43 Fuad Tabba
  2021-05-20 12:43 ` [PATCH v3 01/18] arm64: assembler: replace `kaddr` with `addr` Fuad Tabba
                   ` (17 more replies)
  0 siblings, 18 replies; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Hi,

Changes since v2 [1]:
- Brought in Mark's patches that add conditional cache fixups, only generating
  an extable entry if a label is provided [2]. NOTE: the patches missed
  updating some of the code comments to reflect the changes; I took the
  liberty of fixing those comments in Mark's patch.
- Modified the user_alt macro to make the fixup label optional, in the same
  way as Mark's patches [2], to avoid code duplication later in the series.
- Tidied up the new cache flush (clean/invalidate) macro by removing code
  duplication and conditional variables/labels. Moved the ttbr manipulation,
  fixup handler, and rets inline into __flush_cache_user_range. (Mark)
- Fixed comments and commit messages. (Mark)

Changes since v1 [3]:
- Apply ARM64_WORKAROUND_CLEAN_CACHE errata to
  swsusp_arch_suspend_exit (Mark)
- Remove toggling of uaccess from the newly created cache flush
  (clean/invalidate) macro and leave it up to the caller (Robin)
- Fix renaming of cache maintenance functions (Ard, Mark)
- Fix comment on maintenance operations in machine_kexec_post_load (Ard)
- Fix commit msg comments to clarify some of the changes and outline potential
  performance impact (Mark)
- Fix code comments that refer to flush_icache_range when the intended function
  is __flush_icache_range

As has been noted before [4], the code in cache.S isn't very tidy. Some of its
functions accept address ranges by start and size, whereas others with similar
names do so by start and end. This has resulted in at least one bug [5].
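
To make the hazard concrete, here is a minimal hypothetical sketch (the
prototypes match cacheflush.h before this series; flush_new_insns() is an
invented caller, for illustration only):

  /* Kernel-context sketch; size_t comes from <linux/types.h>. */
  extern void __flush_dcache_area(void *addr, size_t len);	/* (addr, size) */
  extern void __flush_icache_range(unsigned long start,
				   unsigned long end);		/* (start, end) */

  static void flush_new_insns(void *code, size_t len)
  {
	__flush_dcache_area(code, len);			/* ok: size */
	__flush_icache_range((unsigned long)code, len);	/* bug: size used as end */
  }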

Moreover, invalidate_icache_range and __flush_icache_range toggle uaccess,
which isn't necessary because they work on the kernel linear map [6].

This patch series attempts to fix these issues, as well as tidy up the code in
general to reduce ambiguity and make it consistent with Arm terminology and
with the functions' actual operations.

No functional change is intended in this series. However, there might be a
slight performance improvement due to the overall reduction in the number of
instructions.

This series is based on v5.13-rc1. You can find the applied series here [7].

Cheers,
/fuad

[1] https://lore.kernel.org/linux-arm-kernel/20210517075124.152151-1-tabba@google.com/
[2] https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/cleanups/cache
[3] https://lore.kernel.org/linux-arm-kernel/20210511144252.3779113-1-tabba@google.com/T/
[4] https://lore.kernel.org/linux-arch/20200511075115.GA16134@willie-the-truck/
[5] https://lore.kernel.org/linux-arch/20200510075510.987823-3-hch@lst.de/
[6] https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
[7] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/fixcache-5.13

Fuad Tabba (16):
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: user_alt label optional
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Move documentation of dcache_by_line_op
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: Fix cache maintenance function comments
  arm64: Rename arm64-internal cache maintenance functions

Mark Rutland (2):
  arm64: assembler: replace `kaddr` with `addr`
  arm64: assembler: add conditional cache fixups

 arch/arm64/include/asm/alternative-macros.h |   9 +-
 arch/arm64/include/asm/arch_gicv3.h         |   3 +-
 arch/arm64/include/asm/assembler.h          |  80 ++++++----
 arch/arm64/include/asm/cacheflush.h         |  69 +++++----
 arch/arm64/include/asm/efi.h                |   2 +-
 arch/arm64/include/asm/kvm_mmu.h            |   7 +-
 arch/arm64/kernel/alternative.c             |   2 +-
 arch/arm64/kernel/efi-entry.S               |   9 +-
 arch/arm64/kernel/head.S                    |  13 +-
 arch/arm64/kernel/hibernate-asm.S           |   7 +-
 arch/arm64/kernel/hibernate.c               |  20 ++-
 arch/arm64/kernel/idreg-override.c          |   3 +-
 arch/arm64/kernel/image-vars.h              |   2 +-
 arch/arm64/kernel/insn.c                    |   2 +-
 arch/arm64/kernel/kaslr.c                   |  12 +-
 arch/arm64/kernel/machine_kexec.c           |  30 ++--
 arch/arm64/kernel/probes/uprobes.c          |   2 +-
 arch/arm64/kernel/smp.c                     |   8 +-
 arch/arm64/kernel/smp_spin_table.c          |   7 +-
 arch/arm64/kernel/sys_compat.c              |   2 +-
 arch/arm64/kvm/arm.c                        |   2 +-
 arch/arm64/kvm/hyp/nvhe/cache.S             |   4 +-
 arch/arm64/kvm/hyp/nvhe/setup.c             |   3 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c               |   2 +-
 arch/arm64/kvm/hyp/pgtable.c                |  13 +-
 arch/arm64/lib/uaccess_flushcache.c         |   4 +-
 arch/arm64/mm/cache.S                       | 163 +++++++++++---------
 arch/arm64/mm/flush.c                       |  29 ++--
 28 files changed, 294 insertions(+), 215 deletions(-)


base-commit: 6efb943b8616ec53a5e444193dccf1af9ad627b5
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 01/18] arm64: assembler: replace `kaddr` with `addr`
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 12:43 ` [PATCH v3 02/18] arm64: assembler: add conditional cache fixups Fuad Tabba
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

From: Mark Rutland <mark.rutland@arm.com>

The `__dcache_op_workaround_clean_cache` and `dcache_by_line_op` macros
are only expected to be used on kernel memory, without a user fault
fixup, and so we named their address variables `kaddr` to make this
clear.

Subsequent patches will modify these to also work on user memory with an
(optional) user fault fixup, where `kaddr` won't make as much sense. To
aid the legibility of patches, this patch (only) replaces `kaddr` with
`addr` as a preparatory step.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/assembler.h | 32 +++++++++++++++---------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 8418c1bd8f04..6a0fbc599196 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -377,47 +377,47 @@ alternative_cb_end
 
 /*
  * Macro to perform a data cache maintenance for the interval
- * [kaddr, kaddr + size)
+ * [addr, addr + size)
  *
  * 	op:		operation passed to dc instruction
  * 	domain:		domain used in dsb instruciton
- * 	kaddr:		starting virtual address of the region
+ * 	addr:		starting virtual address of the region
  * 	size:		size of the region
- * 	Corrupts:	kaddr, size, tmp1, tmp2
+ * 	Corrupts:	addr, size, tmp1, tmp2
  */
-	.macro __dcache_op_workaround_clean_cache, op, kaddr
+	.macro __dcache_op_workaround_clean_cache, op, addr
 alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
-	dc	\op, \kaddr
+	dc	\op, \addr
 alternative_else
-	dc	civac, \kaddr
+	dc	civac, \addr
 alternative_endif
 	.endm
 
-	.macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
+	.macro dcache_by_line_op op, domain, addr, size, tmp1, tmp2
 	dcache_line_size \tmp1, \tmp2
-	add	\size, \kaddr, \size
+	add	\size, \addr, \size
 	sub	\tmp2, \tmp1, #1
-	bic	\kaddr, \kaddr, \tmp2
+	bic	\addr, \addr, \tmp2
 9998:
 	.ifc	\op, cvau
-	__dcache_op_workaround_clean_cache \op, \kaddr
+	__dcache_op_workaround_clean_cache \op, \addr
 	.else
 	.ifc	\op, cvac
-	__dcache_op_workaround_clean_cache \op, \kaddr
+	__dcache_op_workaround_clean_cache \op, \addr
 	.else
 	.ifc	\op, cvap
-	sys	3, c7, c12, 1, \kaddr	// dc cvap
+	sys	3, c7, c12, 1, \addr	// dc cvap
 	.else
 	.ifc	\op, cvadp
-	sys	3, c7, c13, 1, \kaddr	// dc cvadp
+	sys	3, c7, c13, 1, \addr	// dc cvadp
 	.else
-	dc	\op, \kaddr
+	dc	\op, \addr
 	.endif
 	.endif
 	.endif
 	.endif
-	add	\kaddr, \kaddr, \tmp1
-	cmp	\kaddr, \size
+	add	\addr, \addr, \tmp1
+	cmp	\addr, \size
 	b.lo	9998b
 	dsb	\domain
 	.endm
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 02/18] arm64: assembler: add conditional cache fixups
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
  2021-05-20 12:43 ` [PATCH v3 01/18] arm64: assembler: replace `kaddr` with `addr` Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 12:43 ` [PATCH v3 03/18] arm64: Apply errata to swsusp_arch_suspend_exit Fuad Tabba
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

From: Mark Rutland <mark.rutland@arm.com>

It would be helpful if we could use both `dcache_by_line_op` and
`invalidate_icache_by_line` for user memory without accidentally fixing
up unexpected faults when performing maintenance on kernel addresses.

Let's make this possible by having both macros take an optional fixup
label, and only generating an extable entry if a label is provided.

At the same time, let's clean up the labels, making them globally unique
by using \@ as we do for other macros.

There should be no functional change as a result of this patch.
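
For reference, each `.long (\insn - .), (\fixup - .)` pair emitted by
_asm_extable becomes one entry in __ex_table, which the C side sees
roughly as (a sketch of arm64's relative extable layout):

  /* Sketch: the entry layout behind _asm_extable (cf. asm/extable.h). */
  struct exception_table_entry {
	int insn, fixup;	/* offsets relative to the entry itself */
  };

With _cond_extable, no such entry is emitted unless a fixup label is
passed, so maintenance on kernel addresses does not get an unintended
fixup.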

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/assembler.h | 39 +++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 6a0fbc599196..0a276b46ef50 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -130,15 +130,27 @@ alternative_endif
 	.endm
 
 /*
- * Emit an entry into the exception table
+ * Create an exception table entry for `insn`, which will branch to `fixup`
+ * when an unhandled fault is taken.
  */
-	.macro		_asm_extable, from, to
+	.macro		_asm_extable, insn, fixup
 	.pushsection	__ex_table, "a"
 	.align		3
-	.long		(\from - .), (\to - .)
+	.long		(\insn - .), (\fixup - .)
 	.popsection
 	.endm
 
+/*
+ * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
+ * do nothing.
+ */
+	.macro		_cond_extable, insn, fixup
+	.ifnc		\fixup,
+	_asm_extable	\insn, \fixup
+	.endif
+	.endm
+
+
 #define USER(l, x...)				\
 9999:	x;					\
 	_asm_extable	9999b, l
@@ -383,6 +395,7 @@ alternative_cb_end
  * 	domain:		domain used in dsb instruciton
  * 	addr:		starting virtual address of the region
  * 	size:		size of the region
+ * 	fixup:		optional label to branch to on user fault
  * 	Corrupts:	addr, size, tmp1, tmp2
  */
 	.macro __dcache_op_workaround_clean_cache, op, addr
@@ -393,12 +406,12 @@ alternative_else
 alternative_endif
 	.endm
 
-	.macro dcache_by_line_op op, domain, addr, size, tmp1, tmp2
+	.macro dcache_by_line_op op, domain, addr, size, tmp1, tmp2, fixup
 	dcache_line_size \tmp1, \tmp2
 	add	\size, \addr, \size
 	sub	\tmp2, \tmp1, #1
 	bic	\addr, \addr, \tmp2
-9998:
+.Ldcache_op\@:
 	.ifc	\op, cvau
 	__dcache_op_workaround_clean_cache \op, \addr
 	.else
@@ -418,8 +431,10 @@ alternative_endif
 	.endif
 	add	\addr, \addr, \tmp1
 	cmp	\addr, \size
-	b.lo	9998b
+	b.lo	.Ldcache_op\@
 	dsb	\domain
+
+	_cond_extable .Ldcache_op\@, \fixup
 	.endm
 
 /*
@@ -427,20 +442,22 @@ alternative_endif
  * [start, end)
  *
  * 	start, end:	virtual addresses describing the region
- *	label:		A label to branch to on user fault.
+ *	fixup:		optional label to branch to on user fault
  * 	Corrupts:	tmp1, tmp2
  */
-	.macro invalidate_icache_by_line start, end, tmp1, tmp2, label
+	.macro invalidate_icache_by_line start, end, tmp1, tmp2, fixup
 	icache_line_size \tmp1, \tmp2
 	sub	\tmp2, \tmp1, #1
 	bic	\tmp2, \start, \tmp2
-9997:
-USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
+.Licache_op\@:
+	ic	ivau, \tmp2			// invalidate I line PoU
 	add	\tmp2, \tmp2, \tmp1
 	cmp	\tmp2, \end
-	b.lo	9997b
+	b.lo	.Licache_op\@
 	dsb	ish
 	isb
+
+	_cond_extable .Licache_op\@, \fixup
 	.endm
 
 /*
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 03/18] arm64: Apply errata to swsusp_arch_suspend_exit
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
  2021-05-20 12:43 ` [PATCH v3 01/18] arm64: assembler: replace `kaddr` with `addr` Fuad Tabba
  2021-05-20 12:43 ` [PATCH v3 02/18] arm64: assembler: add conditional cache fixups Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 12:46   ` Mark Rutland
  2021-05-20 12:43 ` [PATCH v3 04/18] arm64: assembler: user_alt label optional Fuad Tabba
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

The Arm errata covered by ARM64_WORKAROUND_CLEAN_CACHE require
that "dc cvau" instructions get promoted to "dc civac".
swsusp_arch_suspend_exit still uses a bare "dc cvau", so use an
alternative to apply the workaround there as well.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kernel/hibernate-asm.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index 8ccca660034e..0ed2f72a6b94 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -91,7 +91,8 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
 	raw_dcache_line_size x2, x3
 	sub	x3, x2, #1
 	bic	x4, x10, x3
-2:	dc	cvau, x4	/* clean D line / unified line */
+2:	/* clean D line / unified line */
+alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
 	add	x4, x4, x2
 	cmp	x4, x1
 	b.lo	2b
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 04/18] arm64: assembler: user_alt label optional
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (2 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 03/18] arm64: Apply errata to swsusp_arch_suspend_exit Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 12:57   ` Mark Rutland
  2021-05-20 12:43 ` [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Make the label for the extable entry in user_alt optional, only
generating an extable entry if provided.

This is needed later in the series, to avoid instruction
duplication in the assembly code.

While at it, clean up the label, making it globally unique by
using \@ as for other macros.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/alternative-macros.h | 9 ++++++---
 arch/arm64/mm/cache.S                       | 2 +-
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h
index 8a078fc662ac..01ef954c9b2d 100644
--- a/arch/arm64/include/asm/alternative-macros.h
+++ b/arch/arm64/include/asm/alternative-macros.h
@@ -197,9 +197,12 @@ alternative_endif
 #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
 	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
 
-.macro user_alt, label, oldinstr, newinstr, cond
-9999:	alternative_insn "\oldinstr", "\newinstr", \cond
-	_asm_extable 9999b, \label
+.macro user_alt, oldinstr, newinstr, cond, label
+.Lextable_\@:
+	alternative_insn "\oldinstr", "\newinstr", \cond
+	.ifnc \label,
+	_asm_extable .Lextable_\@, \label
+	.endif
 .endm
 
 #endif  /*  __ASSEMBLY__  */
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 2d881f34dd9d..5ff8dfa86975 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -47,7 +47,7 @@ alternative_else_nop_endif
 	sub	x3, x2, #1
 	bic	x4, x0, x3
 1:
-user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
+user_alt "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE, 9f
 	add	x4, x4, x2
 	cmp	x4, x1
 	b.lo	1b
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (3 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 04/18] arm64: assembler: user_alt label optional Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 14:02   ` Mark Rutland
  2021-05-25 11:18   ` Catalin Marinas
  2021-05-20 12:43 ` [PATCH v3 06/18] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
                   ` (12 subsequent siblings)
  17 siblings, 2 replies; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

__flush_icache_range works on the kernel linear map, and doesn't
need uaccess. The existing uaccess handling is a side-effect of
the current implementation, which falls through into
__flush_cache_user_range.

Instead of sharing the code via fallthrough, use a common macro
for the two, where the caller specifies an optional fixup label
if user access is needed. If provided, this label is used to
generate an extable entry.
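
The resulting C-level contract is (a sketch; prototypes as in
cacheflush.h, with the -EFAULT path visible in the diff below):

  /* Kernel linear-map addresses: cannot fault, no uaccess toggling. */
  extern void __flush_icache_range(unsigned long start, unsigned long end);
  /* User addresses: toggles uaccess, returns 0 or -EFAULT on a fault. */
  extern long __flush_cache_user_range(unsigned long start, unsigned long end);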

No functional change intended.
Possible performance improvement due to the reduced number of
instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/mm/cache.S | 64 +++++++++++++++++++++++++++----------------
 1 file changed, 41 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 5ff8dfa86975..c6bc3b8138e1 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -14,6 +14,41 @@
 #include <asm/alternative.h>
 #include <asm/asm-uaccess.h>
 
+/*
+ *	__flush_cache_range(start,end) [fixup]
+ *
+ *	Ensure that the I and D caches are coherent within specified region.
+ *	This is typically used when code has been written to a memory region,
+ *	and will be executed.
+ *
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
+ *	- fixup   - optional label to branch to on user fault
+ */
+.macro	__flush_cache_range, fixup
+alternative_if ARM64_HAS_CACHE_IDC
+	dsb	ishst
+	b	.Ldc_skip_\@
+alternative_else_nop_endif
+	dcache_line_size x2, x3
+	sub	x3, x2, #1
+	bic	x4, x0, x3
+.Ldc_loop_\@:
+user_alt "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE, \fixup
+	add	x4, x4, x2
+	cmp	x4, x1
+	b.lo	.Ldc_loop_\@
+	dsb	ish
+
+.Ldc_skip_\@:
+alternative_if ARM64_HAS_CACHE_DIC
+	isb
+	b	.Lic_skip_\@
+alternative_else_nop_endif
+	invalidate_icache_by_line x0, x1, x2, x3, \fixup
+.Lic_skip_\@:
+.endm
+
 /*
  *	flush_icache_range(start,end)
  *
@@ -25,7 +60,9 @@
  *	- end     - virtual end address of region
  */
 SYM_FUNC_START(__flush_icache_range)
-	/* FALLTHROUGH */
+	__flush_cache_range
+	ret
+SYM_FUNC_END(__flush_icache_range)
 
 /*
  *	__flush_cache_user_range(start,end)
@@ -39,34 +76,15 @@ SYM_FUNC_START(__flush_icache_range)
  */
 SYM_FUNC_START(__flush_cache_user_range)
 	uaccess_ttbr0_enable x2, x3, x4
-alternative_if ARM64_HAS_CACHE_IDC
-	dsb	ishst
-	b	7f
-alternative_else_nop_endif
-	dcache_line_size x2, x3
-	sub	x3, x2, #1
-	bic	x4, x0, x3
-1:
-user_alt "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE, 9f
-	add	x4, x4, x2
-	cmp	x4, x1
-	b.lo	1b
-	dsb	ish
 
-7:
-alternative_if ARM64_HAS_CACHE_DIC
-	isb
-	b	8f
-alternative_else_nop_endif
-	invalidate_icache_by_line x0, x1, x2, x3, 9f
-8:	mov	x0, #0
+	__flush_cache_range 2f
+	mov	x0, xzr
 1:
 	uaccess_ttbr0_disable x1, x2
 	ret
-9:
+2:
 	mov	x0, #-EFAULT
 	b	1b
-SYM_FUNC_END(__flush_icache_range)
 SYM_FUNC_END(__flush_cache_user_range)
 
 /*
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 06/18] arm64: Do not enable uaccess for invalidate_icache_range
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (4 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 14:13   ` Mark Rutland
  2021-05-25 11:18   ` Catalin Marinas
  2021-05-20 12:43 ` [PATCH v3 07/18] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
                   ` (11 subsequent siblings)
  17 siblings, 2 replies; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

invalidate_icache_range() works on the kernel linear map, and
doesn't need uaccess. Remove the code that toggles
uaccess_ttbr0_enable, as well as the code that emits an entry
into the exception table (via the macro
invalidate_icache_by_line).

Change the return type of invalidate_icache_range() from int
(which used to indicate a fault) to void, since it doesn't need
uaccess and won't fault. Note that the return value was never
checked by any of the callers.
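
From a caller's point of view nothing changes (a sketch;
make_executable() is a hypothetical caller, shown only to
illustrate the calling pattern):

  extern void invalidate_icache_range(unsigned long start, unsigned long end);

  static void make_executable(void *code, size_t size)	/* hypothetical */
  {
	/* The old int result was always discarded; now there is none. */
	invalidate_icache_range((unsigned long)code,
				(unsigned long)code + size);
  }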

No functional change intended.
Possible performance improvement due to the reduced number of
instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/mm/cache.S               | 11 +----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 52e5c1623224..a586afa84172 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -57,7 +57,7 @@
  *		- size   - region size
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
-extern int  invalidate_icache_range(unsigned long start, unsigned long end);
+extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
 extern void __inval_dcache_area(void *addr, size_t len);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index c6bc3b8138e1..7318a40dd6ca 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -97,21 +97,12 @@ SYM_FUNC_END(__flush_cache_user_range)
  */
 SYM_FUNC_START(invalidate_icache_range)
 alternative_if ARM64_HAS_CACHE_DIC
-	mov	x0, xzr
 	isb
 	ret
 alternative_else_nop_endif
 
-	uaccess_ttbr0_enable x2, x3, x4
-
-	invalidate_icache_by_line x0, x1, x2, x3, 2f
-	mov	x0, xzr
-1:
-	uaccess_ttbr0_disable x1, x2
+	invalidate_icache_by_line x0, x1, x2, x3
 	ret
-2:
-	mov	x0, #-EFAULT
-	b	1b
 SYM_FUNC_END(invalidate_icache_range)
 
 /*
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 07/18] arm64: Downgrade flush_icache_range to invalidate
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (5 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 06/18] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 14:15   ` Mark Rutland
  2021-05-25 11:18   ` Catalin Marinas
  2021-05-20 12:43 ` [PATCH v3 08/18] arm64: Move documentation of dcache_by_line_op Fuad Tabba
                   ` (10 subsequent siblings)
  17 siblings, 2 replies; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

In machine_kexec_post_load(), __flush_dcache_area is called right
before, so invalidate_icache_range is sufficient in this case.

Rewrite the comment to better explain the rationale behind the
cache maintenance operations used here.

No functional change intended.
Possible performance improvement due to invalidating only the
icache rather than also cleaning the dcache.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kernel/machine_kexec.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 90a335c74442..a03944fd0cd4 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -68,10 +68,14 @@ int machine_kexec_post_load(struct kimage *kimage)
 	kimage->arch.kern_reloc = __pa(reloc_code);
 	kexec_image_info(kimage);
 
-	/* Flush the reloc_code in preparation for its execution. */
+	/*
+	 * For execution with the MMU off, reloc_code needs to be cleaned to the
+	 * PoC and invalidated from the I-cache.
+	 */
 	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
-	flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
-			   arm64_relocate_new_kernel_size);
+	invalidate_icache_range((uintptr_t)reloc_code,
+				(uintptr_t)reloc_code +
+					arm64_relocate_new_kernel_size);
 
 	return 0;
 }
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 08/18] arm64: Move documentation of dcache_by_line_op
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (6 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 07/18] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 14:17   ` Mark Rutland
  2021-05-20 12:43 ` [PATCH v3 09/18] arm64: Fix comments to refer to correct function __flush_icache_range Fuad Tabba
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

The comment describing the dcache_by_line_op macro is placed
right before the macro preceding the one it describes, which is
confusing. Move it so that it immediately precedes the macro it
describes (dcache_by_line_op).

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/assembler.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 0a276b46ef50..ced791124b28 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -387,6 +387,14 @@ alternative_cb_end
 	bfi	\tcr, \tmp0, \pos, #3
 	.endm
 
+	.macro __dcache_op_workaround_clean_cache, op, addr
+alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
+	dc	\op, \addr
+alternative_else
+	dc	civac, \addr
+alternative_endif
+	.endm
+
 /*
  * Macro to perform a data cache maintenance for the interval
  * [addr, addr + size)
@@ -398,14 +406,6 @@ alternative_cb_end
  * 	fixup:		optional label to branch to on user fault
  * 	Corrupts:	addr, size, tmp1, tmp2
  */
-	.macro __dcache_op_workaround_clean_cache, op, addr
-alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
-	dc	\op, \addr
-alternative_else
-	dc	civac, \addr
-alternative_endif
-	.endm
-
 	.macro dcache_by_line_op op, domain, addr, size, tmp1, tmp2, fixup
 	dcache_line_size \tmp1, \tmp2
 	add	\size, \addr, \size
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 09/18] arm64: Fix comments to refer to correct function __flush_icache_range
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (7 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 08/18] arm64: Move documentation of dcache_by_line_op Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 14:18   ` Mark Rutland
  2021-05-20 12:43 ` [PATCH v3 10/18] arm64: __inval_dcache_area to take end parameter instead of size Fuad Tabba
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Many comments refer to the function flush_icache_range, where the
intent is in fact __flush_icache_range. Fix these comments to
refer to the intended function.

That's probably due to commit 3b8c9f1cdfc506e9 ("arm64: IPI each
CPU after invalidating the I-cache for kernel mappings"), which
renamed flush_icache_range() to __flush_icache_range() and added
a wrapper.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kernel/hibernate-asm.S | 4 ++--
 arch/arm64/mm/cache.S             | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index 0ed2f72a6b94..ef2ab7caf815 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -45,7 +45,7 @@
  * Because this code has to be copied to a 'safe' page, it can't call out to
  * other functions by PC-relative address. Also remember that it may be
  * mid-way through over-writing other functions. For this reason it contains
- * code from flush_icache_range() and uses the copy_page() macro.
+ * code from __flush_icache_range() and uses the copy_page() macro.
  *
  * This 'safe' page is mapped via ttbr0, and executed from there. This function
  * switches to a copy of the linear map in ttbr1, performs the restore, then
@@ -87,7 +87,7 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
 	copy_page	x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
 
 	add	x1, x10, #PAGE_SIZE
-	/* Clean the copied page to PoU - based on flush_icache_range() */
+	/* Clean the copied page to PoU - based on __flush_icache_range() */
 	raw_dcache_line_size x2, x3
 	sub	x3, x2, #1
 	bic	x4, x10, x3
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 7318a40dd6ca..80da4b8718b6 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -50,7 +50,7 @@ alternative_else_nop_endif
 .endm
 
 /*
- *	flush_icache_range(start,end)
+ *	__flush_icache_range(start,end)
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 10/18] arm64: __inval_dcache_area to take end parameter instead of size
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (8 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 09/18] arm64: Fix comments to refer to correct function __flush_icache_range Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 15:46   ` Mark Rutland
  2021-05-20 12:43 ` [PATCH v3 11/18] arm64: dcache_by_line_op " Fuad Tabba
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.
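
Call sites convert mechanically (a sketch; inval_buffer() is a
hypothetical caller, and the flush.c hunk below performs exactly
this conversion):

  extern void __inval_dcache_area(unsigned long start, unsigned long end);

  static void inval_buffer(void *addr, size_t size)	/* hypothetical */
  {
	/* Was: __inval_dcache_area(addr, size); */
	__inval_dcache_area((unsigned long)addr,
			    (unsigned long)addr + size);
  }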

Because the code is shared with __dma_inv_area, it changes the
parameters for that as well. However, __dma_inv_area is local to
cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/head.S            |  5 +----
 arch/arm64/mm/cache.S               | 16 +++++++++-------
 arch/arm64/mm/flush.c               |  2 +-
 4 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index a586afa84172..157234706817 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -59,7 +59,7 @@
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
-extern void __inval_dcache_area(void *addr, size_t len);
+extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 96873dfa67fd..8df0ac8d9123 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -117,7 +117,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	dmb	sy				// needed before dc ivac with
 						// MMU off
 
-	mov	x1, #0x20			// 4 x 8 bytes
+	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	__inval_dcache_area		// tail call
 SYM_CODE_END(preserve_boot_args)
 
@@ -268,7 +268,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 */
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	/*
@@ -382,12 +381,10 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 
 	adrp	x0, idmap_pg_dir
 	adrp	x1, idmap_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	sub	x1, x1, x0
 	bl	__inval_dcache_area
 
 	ret	x28
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 80da4b8718b6..5170d9ab450a 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -138,25 +138,24 @@ alternative_else_nop_endif
 SYM_FUNC_END(__clean_dcache_area_pou)
 
 /*
- *	__inval_dcache_area(kaddr, size)
+ *	__inval_dcache_area(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are invalidated. Any partial lines at the ends of the interval are
  *	also cleaned to PoC to prevent data loss.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - kernel start address of region
+ *	- end     - kernel end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_inv_area)
 SYM_FUNC_START_PI(__inval_dcache_area)
 	/* FALLTHROUGH */
 
 /*
- *	__dma_inv_area(start, size)
+ *	__dma_inv_area(start, end)
  *	- start   - virtual start address of region
- *	- size    - size in question
+ *	- end     - virtual end address of region
  */
-	add	x1, x1, x0
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
 	tst	x1, x3				// end cache line aligned?
@@ -237,8 +236,10 @@ SYM_FUNC_END_PI(__dma_flush_area)
  *	- dir	- DMA direction
  */
 SYM_FUNC_START_PI(__dma_map_area)
+	add	x1, x0, x1
 	cmp	w2, #DMA_FROM_DEVICE
 	b.eq	__dma_inv_area
+	sub	x1, x1, x0
 	b	__dma_clean_area
 SYM_FUNC_END_PI(__dma_map_area)
 
@@ -249,6 +250,7 @@ SYM_FUNC_END_PI(__dma_map_area)
  *	- dir	- DMA direction
  */
 SYM_FUNC_START_PI(__dma_unmap_area)
+	add	x1, x0, x1
 	cmp	w2, #DMA_TO_DEVICE
 	b.ne	__dma_inv_area
 	ret
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index ac485163a4a7..4e3505c2bea6 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -88,7 +88,7 @@ EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
 void arch_invalidate_pmem(void *addr, size_t size)
 {
-	__inval_dcache_area(addr, size);
+	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 11/18] arm64: dcache_by_line_op to take end parameter instead of size
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (9 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 10/18] arm64: __inval_dcache_area to take end parameter instead of size Fuad Tabba
@ 2021-05-20 12:43 ` Fuad Tabba
  2021-05-20 15:48   ` Mark Rutland
  2021-05-20 12:44 ` [PATCH v3 12/18] arm64: __flush_dcache_area " Fuad Tabba
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:43 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/assembler.h | 27 +++++++++++++--------------
 arch/arm64/kvm/hyp/nvhe/cache.S    |  1 +
 arch/arm64/mm/cache.S              |  5 +++++
 3 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index ced791124b28..c4cecf85dccf 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -397,40 +397,39 @@ alternative_endif
 
 /*
  * Macro to perform a data cache maintenance for the interval
- * [addr, addr + size)
+ * [start, end)
  *
  * 	op:		operation passed to dc instruction
  * 	domain:		domain used in dsb instruciton
- * 	addr:		starting virtual address of the region
- * 	size:		size of the region
+ * 	start:          starting virtual address of the region
+ * 	end:            end virtual address of the region
  * 	fixup:		optional label to branch to on user fault
- * 	Corrupts:	addr, size, tmp1, tmp2
+ * 	Corrupts:       start, end, tmp1, tmp2
  */
-	.macro dcache_by_line_op op, domain, addr, size, tmp1, tmp2, fixup
+	.macro dcache_by_line_op op, domain, start, end, tmp1, tmp2, fixup
 	dcache_line_size \tmp1, \tmp2
-	add	\size, \addr, \size
 	sub	\tmp2, \tmp1, #1
-	bic	\addr, \addr, \tmp2
+	bic	\start, \start, \tmp2
 .Ldcache_op\@:
 	.ifc	\op, cvau
-	__dcache_op_workaround_clean_cache \op, \addr
+	__dcache_op_workaround_clean_cache \op, \start
 	.else
 	.ifc	\op, cvac
-	__dcache_op_workaround_clean_cache \op, \addr
+	__dcache_op_workaround_clean_cache \op, \start
 	.else
 	.ifc	\op, cvap
-	sys	3, c7, c12, 1, \addr	// dc cvap
+	sys	3, c7, c12, 1, \start	// dc cvap
 	.else
 	.ifc	\op, cvadp
-	sys	3, c7, c13, 1, \addr	// dc cvadp
+	sys	3, c7, c13, 1, \start	// dc cvadp
 	.else
-	dc	\op, \addr
+	dc	\op, \start
 	.endif
 	.endif
 	.endif
 	.endif
-	add	\addr, \addr, \tmp1
-	cmp	\addr, \size
+	add	\start, \start, \tmp1
+	cmp	\start, \end
 	b.lo	.Ldcache_op\@
 	dsb	\domain
 
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 36cef6915428..3bcfa3cac46f 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -8,6 +8,7 @@
 #include <asm/alternative.h>
 
 SYM_FUNC_START_PI(__flush_dcache_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 5170d9ab450a..3b5461a32b85 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -115,6 +115,7 @@ SYM_FUNC_END(invalidate_icache_range)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__flush_dcache_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
@@ -133,6 +134,7 @@ alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
+	add	x1, x0, x1
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
 SYM_FUNC_END(__clean_dcache_area_pou)
@@ -194,6 +196,7 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
  *	- start   - virtual start address of region
  *	- size    - size in question
  */
+	add	x1, x0, x1
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_poc)
@@ -212,6 +215,7 @@ SYM_FUNC_START_PI(__clean_dcache_area_pop)
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
+	add	x1, x0, x1
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_pop)
@@ -225,6 +229,7 @@ SYM_FUNC_END_PI(__clean_dcache_area_pop)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__dma_flush_area)
+	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__dma_flush_area)
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 12/18] arm64: __flush_dcache_area to take end parameter instead of size
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (10 preceding siblings ...)
  2021-05-20 12:43 ` [PATCH v3 11/18] arm64: dcache_by_line_op " Fuad Tabba
@ 2021-05-20 12:44 ` Fuad Tabba
  2021-05-20 16:06   ` Mark Rutland
  2021-05-20 12:44 ` [PATCH v3 13/18] arm64: __clean_dcache_area_poc " Fuad Tabba
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/arch_gicv3.h |  3 ++-
 arch/arm64/include/asm/cacheflush.h |  8 ++++----
 arch/arm64/include/asm/efi.h        |  2 +-
 arch/arm64/include/asm/kvm_mmu.h    |  3 ++-
 arch/arm64/kernel/hibernate.c       | 18 +++++++++++-------
 arch/arm64/kernel/idreg-override.c  |  3 ++-
 arch/arm64/kernel/kaslr.c           | 12 +++++++++---
 arch/arm64/kernel/machine_kexec.c   | 20 +++++++++++++-------
 arch/arm64/kernel/smp.c             |  8 ++++++--
 arch/arm64/kernel/smp_spin_table.c  |  7 ++++---
 arch/arm64/kvm/hyp/nvhe/cache.S     |  1 -
 arch/arm64/kvm/hyp/nvhe/setup.c     |  3 ++-
 arch/arm64/kvm/hyp/pgtable.c        | 13 ++++++++++---
 arch/arm64/mm/cache.S               |  9 ++++-----
 14 files changed, 70 insertions(+), 40 deletions(-)

diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index 934b9be582d2..ed1cc9d8e6df 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -124,7 +124,8 @@ static inline u32 gic_read_rpr(void)
 #define gic_read_lpir(c)		readq_relaxed(c)
 #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
 
-#define gic_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+#define gic_flush_dcache_to_poc(a,l)	\
+	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
 
 #define gits_read_baser(c)		readq_relaxed(c)
 #define gits_write_baser(v, c)		writeq_relaxed(v, c)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 157234706817..695f88864784 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -50,15 +50,15 @@
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	__flush_dcache_area(kaddr, size)
+ *	__flush_dcache_area(start, end)
  *
  *		Ensure that the data held in page is written back.
- *		- kaddr  - page address
- *		- size   - region size
+ *		- start  - virtual start address
+ *		- end    - virtual end address
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
-extern void __flush_dcache_area(void *addr, size_t len);
+extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 3578aba9c608..0ae2397076fd 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
 
 static inline void efi_capsule_flush_cache_range(void *addr, int size)
 {
-	__flush_dcache_area(addr, size);
+	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
 }
 
 #endif /* _ASM_EFI_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 25ed956f9af1..33293d5855af 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -180,7 +180,8 @@ static inline void *__kvm_vector_slot2addr(void *base,
 
 struct kvm;
 
-#define kvm_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
+#define kvm_flush_dcache_to_poc(a,l)	\
+	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index b1cef371df2b..b40ddce71507 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -240,8 +240,6 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	return 0;
 }
 
-#define dcache_clean_range(start, end)	__flush_dcache_area(start, (end - start))
-
 #ifdef CONFIG_ARM64_MTE
 
 static DEFINE_XARRAY(mte_pages);
@@ -383,13 +381,18 @@ int swsusp_arch_suspend(void)
 		ret = swsusp_save();
 	} else {
 		/* Clean kernel core startup/idle code to PoC*/
-		dcache_clean_range(__mmuoff_data_start, __mmuoff_data_end);
-		dcache_clean_range(__idmap_text_start, __idmap_text_end);
+		__flush_dcache_area((unsigned long)__mmuoff_data_start,
+				    (unsigned long)__mmuoff_data_end);
+		__flush_dcache_area((unsigned long)__idmap_text_start,
+				    (unsigned long)__idmap_text_end);
 
 		/* Clean kvm setup code to PoC? */
 		if (el2_reset_needed()) {
-			dcache_clean_range(__hyp_idmap_text_start, __hyp_idmap_text_end);
-			dcache_clean_range(__hyp_text_start, __hyp_text_end);
+			__flush_dcache_area(
+				(unsigned long)__hyp_idmap_text_start,
+				(unsigned long)__hyp_idmap_text_end);
+			__flush_dcache_area((unsigned long)__hyp_text_start,
+					    (unsigned long)__hyp_text_end);
 		}
 
 		swsusp_mte_restore_tags();
@@ -474,7 +477,8 @@ int swsusp_arch_resume(void)
 	 * The hibernate exit text contains a set of el2 vectors, that will
 	 * be executed at el2 with the mmu off in order to reload hyp-stub.
 	 */
-	__flush_dcache_area(hibernate_exit, exit_size);
+	__flush_dcache_area((unsigned long)hibernate_exit,
+			    (unsigned long)hibernate_exit + exit_size);
 
 	/*
 	 * KASLR will cause the el2 vectors to be in a different location in
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index e628c8ce1ffe..3dd515baf526 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -237,7 +237,8 @@ asmlinkage void __init init_feature_override(void)
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		if (regs[i]->override)
-			__flush_dcache_area(regs[i]->override,
+			__flush_dcache_area((unsigned long)regs[i]->override,
+					    (unsigned long)regs[i]->override +
 					    sizeof(*regs[i]->override));
 	}
 }
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 341342b207f6..49cccd03cb37 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -72,7 +72,9 @@ u64 __init kaslr_early_init(void)
 	 * we end up running with module randomization disabled.
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
-	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
+	__flush_dcache_area((unsigned long)&module_alloc_base,
+			    (unsigned long)&module_alloc_base +
+				    sizeof(module_alloc_base));
 
 	/*
 	 * Try to map the FDT early. If this fails, we simply bail,
@@ -170,8 +172,12 @@ u64 __init kaslr_early_init(void)
 	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
 	module_alloc_base &= PAGE_MASK;
 
-	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
-	__flush_dcache_area(&memstart_offset_seed, sizeof(memstart_offset_seed));
+	__flush_dcache_area((unsigned long)&module_alloc_base,
+			    (unsigned long)&module_alloc_base +
+				    sizeof(module_alloc_base));
+	__flush_dcache_area((unsigned long)&memstart_offset_seed,
+			    (unsigned long)&memstart_offset_seed +
+				    sizeof(memstart_offset_seed));
 
 	return offset;
 }
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a03944fd0cd4..3e79110c8f3a 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -72,7 +72,9 @@ int machine_kexec_post_load(struct kimage *kimage)
 	 * For execution with the MMU off, reloc_code needs to be cleaned to the
 	 * PoC and invalidated from the I-cache.
 	 */
-	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
+	__flush_dcache_area((unsigned long)reloc_code,
+			    (unsigned long)reloc_code +
+				    arm64_relocate_new_kernel_size);
 	invalidate_icache_range((uintptr_t)reloc_code,
 				(uintptr_t)reloc_code +
 					arm64_relocate_new_kernel_size);
@@ -106,16 +108,18 @@ static void kexec_list_flush(struct kimage *kimage)
 
 	for (entry = &kimage->head; ; entry++) {
 		unsigned int flag;
-		void *addr;
+		unsigned long addr;
 
 		/* flush the list entries. */
-		__flush_dcache_area(entry, sizeof(kimage_entry_t));
+		__flush_dcache_area((unsigned long)entry,
+				    (unsigned long)entry +
+					    sizeof(kimage_entry_t));
 
 		flag = *entry & IND_FLAGS;
 		if (flag == IND_DONE)
 			break;
 
-		addr = phys_to_virt(*entry & PAGE_MASK);
+		addr = (unsigned long)phys_to_virt(*entry & PAGE_MASK);
 
 		switch (flag) {
 		case IND_INDIRECTION:
@@ -124,7 +128,7 @@ static void kexec_list_flush(struct kimage *kimage)
 			break;
 		case IND_SOURCE:
 			/* flush the source pages. */
-			__flush_dcache_area(addr, PAGE_SIZE);
+			__flush_dcache_area(addr, addr + PAGE_SIZE);
 			break;
 		case IND_DESTINATION:
 			break;
@@ -151,8 +155,10 @@ static void kexec_segment_flush(const struct kimage *kimage)
 			kimage->segment[i].memsz,
 			kimage->segment[i].memsz /  PAGE_SIZE);
 
-		__flush_dcache_area(phys_to_virt(kimage->segment[i].mem),
-			kimage->segment[i].memsz);
+		__flush_dcache_area(
+			(unsigned long)phys_to_virt(kimage->segment[i].mem),
+			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
+				kimage->segment[i].memsz);
 	}
 }
 
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index dcd7041b2b07..5fcdee331087 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -122,7 +122,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
 	update_cpu_boot_status(CPU_MMU_OFF);
-	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
+	__flush_dcache_area((unsigned long)&secondary_data,
+			    (unsigned long)&secondary_data +
+				    sizeof(secondary_data));
 
 	/* Now bring the CPU into our world */
 	ret = boot_secondary(cpu, idle);
@@ -143,7 +145,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	pr_crit("CPU%u: failed to come online\n", cpu);
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
-	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
+	__flush_dcache_area((unsigned long)&secondary_data,
+			    (unsigned long)&secondary_data +
+				    sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
 	if (status == CPU_MMU_OFF)
 		status = READ_ONCE(__early_cpu_boot_status);
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index c45a83512805..58d804582a35 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
 	unsigned long size = sizeof(secondary_holding_pen_release);
 
 	secondary_holding_pen_release = val;
-	__flush_dcache_area(start, size);
+	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
 }
 
 
@@ -90,8 +90,9 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * the boot protocol.
 	 */
 	writeq_relaxed(pa_holding_pen, release_addr);
-	__flush_dcache_area((__force void *)release_addr,
-			    sizeof(*release_addr));
+	__flush_dcache_area((__force unsigned long)release_addr,
+			    (__force unsigned long)release_addr +
+				    sizeof(*release_addr));
 
 	/*
 	 * Send an event to wake up the secondary CPU.
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 3bcfa3cac46f..36cef6915428 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -8,7 +8,6 @@
 #include <asm/alternative.h>
 
 SYM_FUNC_START_PI(__flush_dcache_area)
-	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 7488f53b0aa2..5dffe928f256 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -134,7 +134,8 @@ static void update_nvhe_init_params(void)
 	for (i = 0; i < hyp_nr_cpus; i++) {
 		params = per_cpu_ptr(&kvm_init_params, i);
 		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
-		__flush_dcache_area(params, sizeof(*params));
+		__flush_dcache_area((unsigned long)params,
+				    (unsigned long)params + sizeof(*params));
 	}
 }
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index c37c1dc4feaf..10d2f04013d4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -839,8 +839,11 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	stage2_put_pte(ptep, mmu, addr, level, mm_ops);
 
 	if (need_flush) {
-		__flush_dcache_area(kvm_pte_follow(pte, mm_ops),
-				    kvm_granule_size(level));
+		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
+
+		__flush_dcache_area((unsigned long)pte_follow,
+				    (unsigned long)pte_follow +
+					    kvm_granule_size(level));
 	}
 
 	if (childp)
@@ -988,11 +991,15 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	struct kvm_pgtable *pgt = arg;
 	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
 	kvm_pte_t pte = *ptep;
+	kvm_pte_t *pte_follow;
 
 	if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte))
 		return 0;
 
-	__flush_dcache_area(kvm_pte_follow(pte, mm_ops), kvm_granule_size(level));
+	pte_follow = kvm_pte_follow(pte, mm_ops);
+	__flush_dcache_area((unsigned long)pte_follow,
+			    (unsigned long)pte_follow +
+				    kvm_granule_size(level));
 	return 0;
 }
 
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 3b5461a32b85..35abc8d77c4e 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -106,16 +106,15 @@ alternative_else_nop_endif
 SYM_FUNC_END(invalidate_icache_range)
 
 /*
- *	__flush_dcache_area(kaddr, size)
+ *	__flush_dcache_area(start, end)
  *
- *	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ *	Ensure that any D-cache lines for the interval [start, end)
  *	are cleaned and invalidated to the PoC.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_PI(__flush_dcache_area)
-	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__flush_dcache_area)
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 13/18] arm64: __clean_dcache_area_poc to take end parameter instead of size
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (11 preceding siblings ...)
  2021-05-20 12:44 ` [PATCH v3 12/18] arm64: __flush_dcache_area " Fuad Tabba
@ 2021-05-20 12:44 ` Fuad Tabba
  2021-05-20 16:16   ` Mark Rutland
  2021-05-20 12:44 ` [PATCH v3 14/18] arm64: __clean_dcache_area_pop " Fuad Tabba
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
__clean_dcache_area_poc to take the range as start and end
addresses, as opposed to start and size.

Because the code is shared with __dma_clean_area, it changes the
parameters for that as well. However, __dma_clean_area is local to
cache.S, so no other users are affected.

No functional change intended.
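
For illustration, a typical caller-side conversion looks like this
(a minimal sketch; example_clean(), buf, and len are made-up names
and not part of the patch):

#include <linux/types.h>
#include <asm/cacheflush.h>

void example_clean(void *buf, size_t len)
{
        unsigned long start = (unsigned long)buf;

        /* before this patch: __clean_dcache_area_poc(buf, len); */
        __clean_dcache_area_poc(start, start + len);
}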

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/efi-entry.S       |  5 +++--
 arch/arm64/mm/cache.S               | 16 +++++++---------
 3 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 695f88864784..3255878d6f30 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -60,7 +60,7 @@ extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_poc(void *addr, size_t len);
+extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 0073b24b5d25..72e6a580290a 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -28,6 +28,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 * stale icache entries from before relocation.
 	 */
 	ldr	w1, =kernel_size
+	add	x1, x0, x1
 	bl	__clean_dcache_area_poc
 	ic	ialluis
 
@@ -36,7 +37,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 * so that we can safely disable the MMU and caches.
 	 */
 	adr	x0, 0f
-	ldr	w1, 3f
+	adr	x1, 3f
 	bl	__clean_dcache_area_poc
 0:
 	/* Turn off Dcache and MMU */
@@ -65,4 +66,4 @@ SYM_CODE_START(efi_enter_kernel)
 	mov	x3, xzr
 	br	x19
 SYM_CODE_END(efi_enter_kernel)
-3:	.long	. - 0b
+3:
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 35abc8d77c4e..9a9c44bb26d2 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -178,24 +178,23 @@ SYM_FUNC_END_PI(__inval_dcache_area)
 SYM_FUNC_END(__dma_inv_area)
 
 /*
- *	__clean_dcache_area_poc(kaddr, size)
+ *	__clean_dcache_area_poc(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoC.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_clean_area)
 SYM_FUNC_START_PI(__clean_dcache_area_poc)
 	/* FALLTHROUGH */
 
 /*
- *	__dma_clean_area(start, size)
+ *	__dma_clean_area(start, end)
  *	- start   - virtual start address of region
- *	- size    - size in question
+ *	- end     - virtual end address of region
  */
-	add	x1, x0, x1
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_poc)
@@ -211,10 +210,10 @@ SYM_FUNC_END(__dma_clean_area)
  *	- size    - size in question
  */
 SYM_FUNC_START_PI(__clean_dcache_area_pop)
+	add	x1, x0, x1
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
-	add	x1, x0, x1
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
 SYM_FUNC_END_PI(__clean_dcache_area_pop)
@@ -243,7 +242,6 @@ SYM_FUNC_START_PI(__dma_map_area)
 	add	x1, x0, x1
 	cmp	w2, #DMA_FROM_DEVICE
 	b.eq	__dma_inv_area
-	sub	x1, x1, x0
 	b	__dma_clean_area
 SYM_FUNC_END_PI(__dma_map_area)
 
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 14/18] arm64: __clean_dcache_area_pop to take end parameter instead of size
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (12 preceding siblings ...)
  2021-05-20 12:44 ` [PATCH v3 13/18] arm64: __clean_dcache_area_poc " Fuad Tabba
@ 2021-05-20 12:44 ` Fuad Tabba
  2021-05-20 16:19   ` Mark Rutland
  2021-05-20 12:44 ` [PATCH v3 15/18] arm64: __clean_dcache_area_pou " Fuad Tabba
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
__clean_dcache_area_pop to take the range as start and end
addresses, as opposed to start and size.

No functional change intended.
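
In C terms, the fallback logic preserved by this patch is roughly
the following (an illustrative sketch only; clean_pop_sketch() is a
made-up name, and the authoritative code is the assembly below):

#include <asm/cacheflush.h>
#include <asm/cpufeature.h>

static void clean_pop_sketch(unsigned long start, unsigned long end)
{
        /* without DC CVAP, cleaning to the PoP degrades to the PoC */
        if (!cpus_have_const_cap(ARM64_HAS_DCPOP)) {
                __clean_dcache_area_poc(start, end);
                return;
        }
        /* otherwise: DC CVAP on each line in [start, end) */
}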

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 2 +-
 arch/arm64/lib/uaccess_flushcache.c | 4 ++--
 arch/arm64/mm/cache.S               | 9 ++++-----
 arch/arm64/mm/flush.c               | 2 +-
 4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 3255878d6f30..fa5641868d65 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -61,7 +61,7 @@ extern void invalidate_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pop(void *addr, size_t len);
+extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(void *kaddr, unsigned long len);
diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
index c83bb5a4aad2..62ea989effe8 100644
--- a/arch/arm64/lib/uaccess_flushcache.c
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
 	 * barrier to order the cache maintenance against the memcpy.
 	 */
 	memcpy(dst, src, cnt);
-	__clean_dcache_area_pop(dst, cnt);
+	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
 }
 EXPORT_SYMBOL_GPL(memcpy_flushcache);
 
@@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
 	rc = raw_copy_from_user(to, from, n);
 
 	/* See above */
-	__clean_dcache_area_pop(to, n - rc);
+	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
 	return rc;
 }
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 9a9c44bb26d2..b72fbae4b8e9 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -201,16 +201,15 @@ SYM_FUNC_END_PI(__clean_dcache_area_poc)
 SYM_FUNC_END(__dma_clean_area)
 
 /*
- *	__clean_dcache_area_pop(kaddr, size)
+ *	__clean_dcache_area_pop(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoP.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START_PI(__clean_dcache_area_pop)
-	add	x1, x0, x1
 	alternative_if_not ARM64_HAS_DCPOP
 	b	__clean_dcache_area_poc
 	alternative_else_nop_endif
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 4e3505c2bea6..5aba7fe42d4b 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -82,7 +82,7 @@ void arch_wb_cache_pmem(void *addr, size_t size)
 {
 	/* Ensure order against any prior non-cacheable writes */
 	dmb(osh);
-	__clean_dcache_area_pop(addr, size);
+	__clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 15/18] arm64: __clean_dcache_area_pou to take end parameter instead of size
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (13 preceding siblings ...)
  2021-05-20 12:44 ` [PATCH v3 14/18] arm64: __clean_dcache_area_pop " Fuad Tabba
@ 2021-05-20 12:44 ` Fuad Tabba
  2021-05-20 16:24   ` Mark Rutland
  2021-05-20 12:44 ` [PATCH v3 16/18] arm64: sync_icache_aliases " Fuad Tabba
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
__clean_dcache_area_pou to take the range as start and end
addresses, as opposed to start and size.

No functional change intended.
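
For reference, the IDC fast path kept by this patch behaves roughly
as follows (an illustrative C sketch; clean_pou_sketch() is a
made-up name, and the real implementation is the assembly below):

#include <asm/barrier.h>
#include <asm/cpufeature.h>

static void clean_pou_sketch(unsigned long start, unsigned long end)
{
        /* CTR_EL0.IDC: no D-cache clean to the PoU is required */
        if (cpus_have_const_cap(ARM64_HAS_CACHE_IDC)) {
                dsb(ishst);
                return;
        }
        /* otherwise: DC CVAU on each line in [start, end) */
}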

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 2 +-
 arch/arm64/mm/cache.S               | 9 ++++-----
 arch/arm64/mm/flush.c               | 2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index fa5641868d65..f86723047315 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -62,7 +62,7 @@ extern void __flush_dcache_area(unsigned long start, unsigned long end);
 extern void __inval_dcache_area(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pou(void *addr, size_t len);
+extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(void *kaddr, unsigned long len);
 
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index b72fbae4b8e9..b70a6699c02b 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -120,20 +120,19 @@ SYM_FUNC_START_PI(__flush_dcache_area)
 SYM_FUNC_END_PI(__flush_dcache_area)
 
 /*
- *	__clean_dcache_area_pou(kaddr, size)
+ *	__clean_dcache_area_pou(start, end)
  *
- * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoU.
  *
- *	- kaddr   - kernel address
- *	- size    - size in question
+ *	- start   - virtual start address of region
+ *	- end     - virtual end address of region
  */
 SYM_FUNC_START(__clean_dcache_area_pou)
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
-	add	x1, x0, x1
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
 SYM_FUNC_END(__clean_dcache_area_pou)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 5aba7fe42d4b..a69d745fb1dc 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -19,7 +19,7 @@ void sync_icache_aliases(void *kaddr, unsigned long len)
 	unsigned long addr = (unsigned long)kaddr;
 
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(kaddr, len);
+		__clean_dcache_area_pou(kaddr, kaddr + len);
 		__flush_icache_all();
 	} else {
 		/*
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 16/18] arm64: sync_icache_aliases to take end parameter instead of size
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (14 preceding siblings ...)
  2021-05-20 12:44 ` [PATCH v3 15/18] arm64: __clean_dcache_area_pou " Fuad Tabba
@ 2021-05-20 12:44 ` Fuad Tabba
  2021-05-20 16:34   ` Mark Rutland
  2021-05-20 12:44 ` [PATCH v3 17/18] arm64: Fix cache maintenance function comments Fuad Tabba
  2021-05-20 12:44 ` [PATCH v3 18/18] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
sync_icache_aliases to take the range as start and end addresses,
as opposed to start and size.

No functional change intended.
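
A typical caller after this change, modelled on the uprobes hunk
below (install_insns(), dst, src, and len are illustrative names):

#include <linux/string.h>
#include <asm/cacheflush.h>

static void install_insns(void *dst, const void *src, size_t len)
{
        memcpy(dst, src, len);
        /* make the D- and I-cache views of the copied code coherent */
        sync_icache_aliases((unsigned long)dst, (unsigned long)dst + len);
}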

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h |  2 +-
 arch/arm64/kernel/probes/uprobes.c  |  2 +-
 arch/arm64/mm/flush.c               | 21 +++++++++++----------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index f86723047315..70b389a8dea5 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -64,7 +64,7 @@ extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
 extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
-extern void sync_icache_aliases(void *kaddr, unsigned long len);
+extern void sync_icache_aliases(unsigned long start, unsigned long end);
 
 static inline void flush_icache_range(unsigned long start, unsigned long end)
 {
diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
index 2c247634552b..9be668f3f034 100644
--- a/arch/arm64/kernel/probes/uprobes.c
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -21,7 +21,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
 	memcpy(dst, src, len);
 
 	/* flush caches (dcache/icache) */
-	sync_icache_aliases(dst, len);
+	sync_icache_aliases((unsigned long)dst, (unsigned long)dst + len);
 
 	kunmap_atomic(xol_page_kaddr);
 }
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index a69d745fb1dc..143f625e7727 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -14,28 +14,26 @@
 #include <asm/cache.h>
 #include <asm/tlbflush.h>
 
-void sync_icache_aliases(void *kaddr, unsigned long len)
+void sync_icache_aliases(unsigned long start, unsigned long end)
 {
-	unsigned long addr = (unsigned long)kaddr;
-
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(kaddr, kaddr + len);
+		__clean_dcache_area_pou(start, end);
 		__flush_icache_all();
 	} else {
 		/*
 		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
 		 * for user mappings.
 		 */
-		__flush_icache_range(addr, addr + len);
+		__flush_icache_range(start, end);
 	}
 }
 
 static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
-				unsigned long uaddr, void *kaddr,
-				unsigned long len)
+				unsigned long uaddr, unsigned long start,
+				unsigned long end)
 {
 	if (vma->vm_flags & VM_EXEC)
-		sync_icache_aliases(kaddr, len);
+		sync_icache_aliases(start, end);
 }
 
 /*
@@ -48,7 +46,8 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long len)
 {
 	memcpy(dst, src, len);
-	flush_ptrace_access(vma, page, uaddr, dst, len);
+	flush_ptrace_access(vma, page, uaddr, (unsigned long)dst,
+			    (unsigned long)dst + len);
 }
 
 void __sync_icache_dcache(pte_t pte)
@@ -56,7 +55,9 @@ void __sync_icache_dcache(pte_t pte)
 	struct page *page = pte_page(pte);
 
 	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		sync_icache_aliases(page_address(page), page_size(page));
+		sync_icache_aliases((unsigned long)page_address(page),
+				    (unsigned long)page_address(page) +
+					    page_size(page));
 }
 EXPORT_SYMBOL_GPL(__sync_icache_dcache);
 
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 17/18] arm64: Fix cache maintenance function comments
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (15 preceding siblings ...)
  2021-05-20 12:44 ` [PATCH v3 16/18] arm64: sync_icache_aliases " Fuad Tabba
@ 2021-05-20 12:44 ` Fuad Tabba
  2021-05-20 16:48   ` Mark Rutland
  2021-05-20 12:44 ` [PATCH v3 18/18] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Fix and expand the comments for the cache maintenance functions in
cacheflush.h. Add comments to the functions that weren't described
before, and explain what the functions do using Arm Architecture
Reference Manual terminology.

No functional change intended.
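
As a rough guide to the documented semantics, pairing entry points
with typical use cases (an illustrative sketch, not itself part of
the patch; cache_op_examples() is a made-up name):

#include <asm/cacheflush.h>

static void cache_op_examples(unsigned long start, unsigned long end)
{
        /* after writing instructions: I/D coherency to the PoU */
        __flush_icache_range(start, end);

        /* sharing data with a non-coherent agent: clean and
         * invalidate to the PoC */
        __flush_dcache_area(start, end);

        /* making writes persistent (e.g. pmem): clean to the PoP */
        __clean_dcache_area_pop(start, end);
}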

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/cacheflush.h | 43 +++++++++++++++++++----------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 70b389a8dea5..4b91d3530013 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -30,31 +30,44 @@
  *	the implementation assumes non-aliasing VIPT D-cache and (aliasing)
  *	VIPT I-cache.
  *
- *	flush_icache_range(start, end)
- *
- *		Ensure coherency between the I-cache and the D-cache in the
- *		region described by start, end.
+ *	All functions below apply to the region described by [start, end)
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	invalidate_icache_range(start, end)
+ *	__flush_icache_range(start, end)
  *
- *		Invalidate the I-cache in the region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Ensure coherency between the I-cache and the D-cache region to
+ *		the Point of Unification.
  *
  *	__flush_cache_user_range(start, end)
  *
- *		Ensure coherency between the I-cache and the D-cache in the
- *		region described by start, end.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Ensure coherency between the I-cache and the D-cache region to
+ *		the Point of Unification.
+ *		Use only if the region might access user memory.
+ *
+ *	invalidate_icache_range(start, end)
+ *
+ *		Invalidate I-cache region to the Point of Unification.
  *
  *	__flush_dcache_area(start, end)
  *
- *		Ensure that the data held in page is written back.
- *		- start  - virtual start address
- *		- end    - virtual end address
+ *		Clean and invalidate D-cache region to the Point of Coherence.
+ *
+ *	__inval_dcache_area(start, end)
+ *
+ *		Invalidate D-cache region to the Point of Coherence.
+ *
+ *	__clean_dcache_area_poc(start, end)
+ *
+ *		Clean D-cache region to the Point of Coherence.
+ *
+ *	__clean_dcache_area_pop(start, end)
+ *
+ *		Clean D-cache region to the Point of Persistence.
+ *
+ *	__clean_dcache_area_pou(start, end)
+ *
+ *		Clean D-cache region to the Point of Unification.
  */
 extern void __flush_icache_range(unsigned long start, unsigned long end);
 extern void invalidate_icache_range(unsigned long start, unsigned long end);
-- 
2.31.1.751.gd2f1c929bd-goog



* [PATCH v3 18/18] arm64: Rename arm64-internal cache maintenance functions
  2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
                   ` (16 preceding siblings ...)
  2021-05-20 12:44 ` [PATCH v3 17/18] arm64: Fix cache maintenance function comments Fuad Tabba
@ 2021-05-20 12:44 ` Fuad Tabba
  2021-05-20 17:01   ` Mark Rutland
  17 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-20 12:44 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: will, catalin.marinas, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy, tabba

Although naming across the codebase isn't entirely consistent, it
tends to follow certain patterns. Moreover, the term "flush" isn't
defined in the Arm Architecture Reference Manual, and might be
interpreted to mean a clean, an invalidate, or both for a cache.

Rename the arm64-internal functions to make the naming internally
consistent, as well as consistent with the Arm ARM, by specifying
whether each function applies to the instruction cache, the data
cache, or both, and whether the operation is a clean, an
invalidate, or both. Also specify the point to which the operation
applies, i.e., the Point of Unification (PoU), Point of Coherence
(PoC), or Point of Persistence (PoP).

This commit applies the following sed transformation to all files
under arch/arm64:

"s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\
"s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
"s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
"s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
"s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
"s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\
"s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\
"s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\
"s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
"s/\b__flush_icache_all\b/icache_inval_all_pou/g;"

Note that __clean_dcache_area_poc is deliberately missing a word
boundary check at the beginning in order to match the efistub
symbols in image-vars.h.

Also note that, despite its name, __flush_icache_range operates
on both instruction and data caches. The name change here
reflects that.

No functional change intended.
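
For the most common entry points, the mapping works out as follows
(equivalent calls before and after this patch; rename_example() is
a made-up wrapper):

#include <asm/cacheflush.h>

static void rename_example(unsigned long start, unsigned long end)
{
        caches_clean_inval_pou(start, end); /* was __flush_icache_range()    */
        dcache_clean_inval_poc(start, end); /* was __flush_dcache_area()     */
        icache_inval_pou(start, end);       /* was invalidate_icache_range() */
}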

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/arch_gicv3.h |  2 +-
 arch/arm64/include/asm/cacheflush.h | 36 +++++++++---------
 arch/arm64/include/asm/efi.h        |  2 +-
 arch/arm64/include/asm/kvm_mmu.h    |  6 +--
 arch/arm64/kernel/alternative.c     |  2 +-
 arch/arm64/kernel/efi-entry.S       |  4 +-
 arch/arm64/kernel/head.S            |  8 ++--
 arch/arm64/kernel/hibernate-asm.S   |  4 +-
 arch/arm64/kernel/hibernate.c       | 12 +++---
 arch/arm64/kernel/idreg-override.c  |  2 +-
 arch/arm64/kernel/image-vars.h      |  2 +-
 arch/arm64/kernel/insn.c            |  2 +-
 arch/arm64/kernel/kaslr.c           |  6 +--
 arch/arm64/kernel/machine_kexec.c   | 10 ++---
 arch/arm64/kernel/smp.c             |  4 +-
 arch/arm64/kernel/smp_spin_table.c  |  4 +-
 arch/arm64/kernel/sys_compat.c      |  2 +-
 arch/arm64/kvm/arm.c                |  2 +-
 arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +-
 arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
 arch/arm64/kvm/hyp/pgtable.c        |  4 +-
 arch/arm64/lib/uaccess_flushcache.c |  4 +-
 arch/arm64/mm/cache.S               | 58 ++++++++++++++---------------
 arch/arm64/mm/flush.c               | 12 +++---
 25 files changed, 98 insertions(+), 98 deletions(-)

diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index ed1cc9d8e6df..4ad22c3135db 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
 #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
 
 #define gic_flush_dcache_to_poc(a,l)	\
-	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
+	dcache_clean_inval_poc((unsigned long)(a), (unsigned long)(a)+(l))
 
 #define gits_read_baser(c)		readq_relaxed(c)
 #define gits_write_baser(v, c)		writeq_relaxed(v, c)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 4b91d3530013..885bda37b805 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -34,54 +34,54 @@
  *		- start  - virtual start address
  *		- end    - virtual end address
  *
- *	__flush_icache_range(start, end)
+ *	caches_clean_inval_pou(start, end)
  *
  *		Ensure coherency between the I-cache and the D-cache region to
  *		the Point of Unification.
  *
- *	__flush_cache_user_range(start, end)
+ *	caches_clean_inval_user_pou(start, end)
  *
  *		Ensure coherency between the I-cache and the D-cache region to
  *		the Point of Unification.
  *		Use only if the region might access user memory.
  *
- *	invalidate_icache_range(start, end)
+ *	icache_inval_pou(start, end)
  *
  *		Invalidate I-cache region to the Point of Unification.
  *
- *	__flush_dcache_area(start, end)
+ *	dcache_clean_inval_poc(start, end)
  *
  *		Clean and invalidate D-cache region to the Point of Coherence.
  *
- *	__inval_dcache_area(start, end)
+ *	dcache_inval_poc(start, end)
  *
  *		Invalidate D-cache region to the Point of Coherence.
  *
- *	__clean_dcache_area_poc(start, end)
+ *	dcache_clean_poc(start, end)
  *
  *		Clean D-cache region to the Point of Coherence.
  *
- *	__clean_dcache_area_pop(start, end)
+ *	dcache_clean_pop(start, end)
  *
  *		Clean D-cache region to the Point of Persistence.
  *
- *	__clean_dcache_area_pou(start, end)
+ *	dcache_clean_pou(start, end)
  *
  *		Clean D-cache region to the Point of Unification.
  */
-extern void __flush_icache_range(unsigned long start, unsigned long end);
-extern void invalidate_icache_range(unsigned long start, unsigned long end);
-extern void __flush_dcache_area(unsigned long start, unsigned long end);
-extern void __inval_dcache_area(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
-extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
-extern long __flush_cache_user_range(unsigned long start, unsigned long end);
+extern void caches_clean_inval_pou(unsigned long start, unsigned long end);
+extern void icache_inval_pou(unsigned long start, unsigned long end);
+extern void dcache_clean_inval_poc(unsigned long start, unsigned long end);
+extern void dcache_inval_poc(unsigned long start, unsigned long end);
+extern void dcache_clean_poc(unsigned long start, unsigned long end);
+extern void dcache_clean_pop(unsigned long start, unsigned long end);
+extern void dcache_clean_pou(unsigned long start, unsigned long end);
+extern long caches_clean_inval_user_pou(unsigned long start, unsigned long end);
 extern void sync_icache_aliases(unsigned long start, unsigned long end);
 
 static inline void flush_icache_range(unsigned long start, unsigned long end)
 {
-	__flush_icache_range(start, end);
+	caches_clean_inval_pou(start, end);
 
 	/*
 	 * IPI all online CPUs so that they undergo a context synchronization
@@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
 
-static __always_inline void __flush_icache_all(void)
+static __always_inline void icache_inval_all_pou(void)
 {
 	if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
 		return;
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 0ae2397076fd..1bed37eb013a 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
 
 static inline void efi_capsule_flush_cache_range(void *addr, int size)
 {
-	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
+	dcache_clean_inval_poc((unsigned long)addr, (unsigned long)addr + size);
 }
 
 #endif /* _ASM_EFI_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 33293d5855af..f4cbfa9025a8 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
 struct kvm;
 
 #define kvm_flush_dcache_to_poc(a,l)	\
-	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
+	dcache_clean_inval_poc((unsigned long)(a), (unsigned long)(a)+(l))
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
@@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
 {
 	if (icache_is_aliasing()) {
 		/* any kind of VIPT cache */
-		__flush_icache_all();
+		icache_inval_all_pou();
 	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
 		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
 		void *va = page_address(pfn_to_page(pfn));
 
-		invalidate_icache_range((unsigned long)va,
+		icache_inval_pou((unsigned long)va,
 					(unsigned long)va + size);
 	}
 }
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index c906d20c7b52..3fb79b76e9d9 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
 	 */
 	if (!is_module) {
 		dsb(ish);
-		__flush_icache_all();
+		icache_inval_all_pou();
 		isb();
 
 		/* Ignore ARM64_CB bit from feature mask */
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 72e6a580290a..6668bad21f86 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 */
 	ldr	w1, =kernel_size
 	add	x1, x0, x1
-	bl	__clean_dcache_area_poc
+	bl	dcache_clean_poc
 	ic	ialluis
 
 	/*
@@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
 	 */
 	adr	x0, 0f
 	adr	x1, 3f
-	bl	__clean_dcache_area_poc
+	bl	dcache_clean_poc
 0:
 	/* Turn off Dcache and MMU */
 	mrs	x0, CurrentEL
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 8df0ac8d9123..6928cb67d3a0 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 						// MMU off
 
 	add	x1, x0, #0x20			// 4 x 8 bytes
-	b	__inval_dcache_area		// tail call
+	b	dcache_inval_poc		// tail call
 SYM_CODE_END(preserve_boot_args)
 
 /*
@@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 */
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	bl	__inval_dcache_area
+	bl	dcache_inval_poc
 
 	/*
 	 * Clear the init page tables.
@@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 
 	adrp	x0, idmap_pg_dir
 	adrp	x1, idmap_pg_end
-	bl	__inval_dcache_area
+	bl	dcache_inval_poc
 
 	adrp	x0, init_pg_dir
 	adrp	x1, init_pg_end
-	bl	__inval_dcache_area
+	bl	dcache_inval_poc
 
 	ret	x28
 SYM_FUNC_END(__create_page_tables)
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index ef2ab7caf815..81c0186a5e32 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -45,7 +45,7 @@
  * Because this code has to be copied to a 'safe' page, it can't call out to
  * other functions by PC-relative address. Also remember that it may be
  * mid-way through over-writing other functions. For this reason it contains
- * code from __flush_icache_range() and uses the copy_page() macro.
+ * code from caches_clean_inval_pou() and uses the copy_page() macro.
  *
  * This 'safe' page is mapped via ttbr0, and executed from there. This function
  * switches to a copy of the linear map in ttbr1, performs the restore, then
@@ -87,7 +87,7 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
 	copy_page	x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
 
 	add	x1, x10, #PAGE_SIZE
-	/* Clean the copied page to PoU - based on __flush_icache_range() */
+	/* Clean the copied page to PoU - based on caches_clean_inval_pou() */
 	raw_dcache_line_size x2, x3
 	sub	x3, x2, #1
 	bic	x4, x10, x3
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index b40ddce71507..46a0b4d6e251 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 		return -ENOMEM;
 
 	memcpy(page, src_start, length);
-	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
+	caches_clean_inval_pou((unsigned long)page, (unsigned long)page + length);
 	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
 	if (rc)
 		return rc;
@@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
 		ret = swsusp_save();
 	} else {
 		/* Clean kernel core startup/idle code to PoC*/
-		__flush_dcache_area((unsigned long)__mmuoff_data_start,
+		dcache_clean_inval_poc((unsigned long)__mmuoff_data_start,
 				    (unsigned long)__mmuoff_data_end);
-		__flush_dcache_area((unsigned long)__idmap_text_start,
+		dcache_clean_inval_poc((unsigned long)__idmap_text_start,
 				    (unsigned long)__idmap_text_end);
 
 		/* Clean kvm setup code to PoC? */
 		if (el2_reset_needed()) {
-			__flush_dcache_area(
+			dcache_clean_inval_poc(
 				(unsigned long)__hyp_idmap_text_start,
 				(unsigned long)__hyp_idmap_text_end);
-			__flush_dcache_area((unsigned long)__hyp_text_start,
+			dcache_clean_inval_poc((unsigned long)__hyp_text_start,
 					    (unsigned long)__hyp_text_end);
 		}
 
@@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
 	 * The hibernate exit text contains a set of el2 vectors, that will
 	 * be executed at el2 with the mmu off in order to reload hyp-stub.
 	 */
-	__flush_dcache_area((unsigned long)hibernate_exit,
+	dcache_clean_inval_poc((unsigned long)hibernate_exit,
 			    (unsigned long)hibernate_exit + exit_size);
 
 	/*
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 3dd515baf526..53a381a7f65d 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
 
 	for (i = 0; i < ARRAY_SIZE(regs); i++) {
 		if (regs[i]->override)
-			__flush_dcache_area((unsigned long)regs[i]->override,
+			dcache_clean_inval_poc((unsigned long)regs[i]->override,
 					    (unsigned long)regs[i]->override +
 					    sizeof(*regs[i]->override));
 	}
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index bcf3c2755370..c96a9a0043bf 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -35,7 +35,7 @@ __efistub_strnlen		= __pi_strnlen;
 __efistub_strcmp		= __pi_strcmp;
 __efistub_strncmp		= __pi_strncmp;
 __efistub_strrchr		= __pi_strrchr;
-__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
+__efistub_dcache_clean_poc = __pi_dcache_clean_poc;
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 __efistub___memcpy		= __pi_memcpy;
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 6c0de2f60ea9..51cb8dc98d00 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
 
 	ret = aarch64_insn_write(tp, insn);
 	if (ret == 0)
-		__flush_icache_range((uintptr_t)tp,
+		caches_clean_inval_pou((uintptr_t)tp,
 				     (uintptr_t)tp + AARCH64_INSN_SIZE);
 
 	return ret;
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 49cccd03cb37..cfa2cfde3019 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
 	 * we end up running with module randomization disabled.
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
-	__flush_dcache_area((unsigned long)&module_alloc_base,
+	dcache_clean_inval_poc((unsigned long)&module_alloc_base,
 			    (unsigned long)&module_alloc_base +
 				    sizeof(module_alloc_base));
 
@@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
 	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
 	module_alloc_base &= PAGE_MASK;
 
-	__flush_dcache_area((unsigned long)&module_alloc_base,
+	dcache_clean_inval_poc((unsigned long)&module_alloc_base,
 			    (unsigned long)&module_alloc_base +
 				    sizeof(module_alloc_base));
-	__flush_dcache_area((unsigned long)&memstart_offset_seed,
+	dcache_clean_inval_poc((unsigned long)&memstart_offset_seed,
 			    (unsigned long)&memstart_offset_seed +
 				    sizeof(memstart_offset_seed));
 
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 3e79110c8f3a..03ceabe4d912 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -72,10 +72,10 @@ int machine_kexec_post_load(struct kimage *kimage)
 	 * For execution with the MMU off, reloc_code needs to be cleaned to the
 	 * PoC and invalidated from the I-cache.
 	 */
-	__flush_dcache_area((unsigned long)reloc_code,
+	dcache_clean_inval_poc((unsigned long)reloc_code,
 			    (unsigned long)reloc_code +
 				    arm64_relocate_new_kernel_size);
-	invalidate_icache_range((uintptr_t)reloc_code,
+	icache_inval_pou((uintptr_t)reloc_code,
 				(uintptr_t)reloc_code +
 					arm64_relocate_new_kernel_size);
 
@@ -111,7 +111,7 @@ static void kexec_list_flush(struct kimage *kimage)
 		unsigned long addr;
 
 		/* flush the list entries. */
-		__flush_dcache_area((unsigned long)entry,
+		dcache_clean_inval_poc((unsigned long)entry,
 				    (unsigned long)entry +
 					    sizeof(kimage_entry_t));
 
@@ -128,7 +128,7 @@ static void kexec_list_flush(struct kimage *kimage)
 			break;
 		case IND_SOURCE:
 			/* flush the source pages. */
-			__flush_dcache_area(addr, addr + PAGE_SIZE);
+			dcache_clean_inval_poc(addr, addr + PAGE_SIZE);
 			break;
 		case IND_DESTINATION:
 			break;
@@ -155,7 +155,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
 			kimage->segment[i].memsz,
 			kimage->segment[i].memsz /  PAGE_SIZE);
 
-		__flush_dcache_area(
+		dcache_clean_inval_poc(
 			(unsigned long)phys_to_virt(kimage->segment[i].mem),
 			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
 				kimage->segment[i].memsz);
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5fcdee331087..9b4c1118194d 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
 	update_cpu_boot_status(CPU_MMU_OFF);
-	__flush_dcache_area((unsigned long)&secondary_data,
+	dcache_clean_inval_poc((unsigned long)&secondary_data,
 			    (unsigned long)&secondary_data +
 				    sizeof(secondary_data));
 
@@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	pr_crit("CPU%u: failed to come online\n", cpu);
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
-	__flush_dcache_area((unsigned long)&secondary_data,
+	dcache_clean_inval_poc((unsigned long)&secondary_data,
 			    (unsigned long)&secondary_data +
 				    sizeof(secondary_data));
 	status = READ_ONCE(secondary_data.status);
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 58d804582a35..7e1624ecab3c 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
 	unsigned long size = sizeof(secondary_holding_pen_release);
 
 	secondary_holding_pen_release = val;
-	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
+	dcache_clean_inval_poc((unsigned long)start, (unsigned long)start + size);
 }
 
 
@@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	 * the boot protocol.
 	 */
 	writeq_relaxed(pa_holding_pen, release_addr);
-	__flush_dcache_area((__force unsigned long)release_addr,
+	dcache_clean_inval_poc((__force unsigned long)release_addr,
 			    (__force unsigned long)release_addr +
 				    sizeof(*release_addr));
 
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index 265fe3eb1069..db5159a3055f 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
 			dsb(ish);
 		}
 
-		ret = __flush_cache_user_range(start, start + chunk);
+		ret = caches_clean_inval_user_pou(start, start + chunk);
 		if (ret)
 			return ret;
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1cb39c0803a4..c1953f65ca0e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 		if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
 			stage2_unmap_vm(vcpu->kvm);
 		else
-			__flush_icache_all();
+			icache_inval_all_pou();
 	}
 
 	vcpu_reset_hcr(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 36cef6915428..958734f4d6b0 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -7,7 +7,7 @@
 #include <asm/assembler.h>
 #include <asm/alternative.h>
 
-SYM_FUNC_START_PI(__flush_dcache_area)
+SYM_FUNC_START_PI(dcache_clean_inval_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__flush_dcache_area)
+SYM_FUNC_END_PI(dcache_clean_inval_poc)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 5dffe928f256..8143ebd4fb72 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
 	for (i = 0; i < hyp_nr_cpus; i++) {
 		params = per_cpu_ptr(&kvm_init_params, i);
 		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
-		__flush_dcache_area((unsigned long)params,
+		dcache_clean_inval_poc((unsigned long)params,
 				    (unsigned long)params + sizeof(*params));
 	}
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 83dc3b271bc5..38ed0f6f2703 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	 * you should be running with VHE enabled.
 	 */
 	if (icache_is_vpipt())
-		__flush_icache_all();
+		icache_inval_all_pou();
 
 	__tlb_switch_to_host(&cxt);
 }
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 10d2f04013d4..e9ad7fb28ee3 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (need_flush) {
 		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
 
-		__flush_dcache_area((unsigned long)pte_follow,
+		dcache_clean_inval_poc((unsigned long)pte_follow,
 				    (unsigned long)pte_follow +
 					    kvm_granule_size(level));
 	}
@@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		return 0;
 
 	pte_follow = kvm_pte_follow(pte, mm_ops);
-	__flush_dcache_area((unsigned long)pte_follow,
+	dcache_clean_inval_poc((unsigned long)pte_follow,
 			    (unsigned long)pte_follow +
 				    kvm_granule_size(level));
 	return 0;
diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
index 62ea989effe8..baee22961bdb 100644
--- a/arch/arm64/lib/uaccess_flushcache.c
+++ b/arch/arm64/lib/uaccess_flushcache.c
@@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
 	 * barrier to order the cache maintenance against the memcpy.
 	 */
 	memcpy(dst, src, cnt);
-	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
+	dcache_clean_pop((unsigned long)dst, (unsigned long)dst + cnt);
 }
 EXPORT_SYMBOL_GPL(memcpy_flushcache);
 
@@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
 	rc = raw_copy_from_user(to, from, n);
 
 	/* See above */
-	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
+	dcache_clean_pop((unsigned long)to, (unsigned long)to + n - rc);
 	return rc;
 }
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index b70a6699c02b..e799a4999299 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -15,7 +15,7 @@
 #include <asm/asm-uaccess.h>
 
 /*
- *	__flush_cache_range(start,end) [fixup]
+ *	caches_clean_inval_pou_macro(start,end) [fixup]
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
@@ -25,7 +25,7 @@
  *	- end     - virtual end address of region
  *	- fixup   - optional label to branch to on user fault
  */
-.macro	__flush_cache_range, fixup
+.macro	caches_clean_inval_pou_macro, fixup
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	b	.Ldc_skip_\@
@@ -50,7 +50,7 @@ alternative_else_nop_endif
 .endm
 
 /*
- *	__flush_icache_range(start,end)
+ *	caches_clean_inval_pou(start,end)
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
@@ -59,13 +59,13 @@ alternative_else_nop_endif
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__flush_icache_range)
-	__flush_cache_range
+SYM_FUNC_START(caches_clean_inval_pou)
+	caches_clean_inval_pou_macro
 	ret
-SYM_FUNC_END(__flush_icache_range)
+SYM_FUNC_END(caches_clean_inval_pou)
 
 /*
- *	__flush_cache_user_range(start,end)
+ *	caches_clean_inval_user_pou(start,end)
  *
  *	Ensure that the I and D caches are coherent within specified region.
  *	This is typically used when code has been written to a memory region,
@@ -74,10 +74,10 @@ SYM_FUNC_END(__flush_icache_range)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__flush_cache_user_range)
+SYM_FUNC_START(caches_clean_inval_user_pou)
 	uaccess_ttbr0_enable x2, x3, x4
 
-	__flush_cache_range 2f
+	caches_clean_inval_pou_macro 2f
 	mov	x0, xzr
 1:
 	uaccess_ttbr0_disable x1, x2
@@ -85,17 +85,17 @@ SYM_FUNC_START(__flush_cache_user_range)
 2:
 	mov	x0, #-EFAULT
 	b	1b
-SYM_FUNC_END(__flush_cache_user_range)
+SYM_FUNC_END(caches_clean_inval_user_pou)
 
 /*
- *	invalidate_icache_range(start,end)
+ *	icache_inval_pou(start,end)
  *
  *	Ensure that the I cache is invalid within specified region.
  *
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(invalidate_icache_range)
+SYM_FUNC_START(icache_inval_pou)
 alternative_if ARM64_HAS_CACHE_DIC
 	isb
 	ret
@@ -103,10 +103,10 @@ alternative_else_nop_endif
 
 	invalidate_icache_by_line x0, x1, x2, x3
 	ret
-SYM_FUNC_END(invalidate_icache_range)
+SYM_FUNC_END(icache_inval_pou)
 
 /*
- *	__flush_dcache_area(start, end)
+ *	dcache_clean_inval_poc(start, end)
  *
  *	Ensure that any D-cache lines for the interval [start, end)
  *	are cleaned and invalidated to the PoC.
@@ -114,13 +114,13 @@ SYM_FUNC_END(invalidate_icache_range)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(__flush_dcache_area)
+SYM_FUNC_START_PI(dcache_clean_inval_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__flush_dcache_area)
+SYM_FUNC_END_PI(dcache_clean_inval_poc)
 
 /*
- *	__clean_dcache_area_pou(start, end)
+ *	dcache_clean_pou(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoU.
@@ -128,17 +128,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START(__clean_dcache_area_pou)
+SYM_FUNC_START(dcache_clean_pou)
 alternative_if ARM64_HAS_CACHE_IDC
 	dsb	ishst
 	ret
 alternative_else_nop_endif
 	dcache_by_line_op cvau, ish, x0, x1, x2, x3
 	ret
-SYM_FUNC_END(__clean_dcache_area_pou)
+SYM_FUNC_END(dcache_clean_pou)
 
 /*
- *	__inval_dcache_area(start, end)
+ *	dcache_inval_poc(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are invalidated. Any partial lines at the ends of the interval are
@@ -148,7 +148,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
  *	- end     - kernel end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_inv_area)
-SYM_FUNC_START_PI(__inval_dcache_area)
+SYM_FUNC_START_PI(dcache_inval_poc)
 	/* FALLTHROUGH */
 
 /*
@@ -173,11 +173,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
 	b.lo	2b
 	dsb	sy
 	ret
-SYM_FUNC_END_PI(__inval_dcache_area)
+SYM_FUNC_END_PI(dcache_inval_poc)
 SYM_FUNC_END(__dma_inv_area)
 
 /*
- *	__clean_dcache_area_poc(start, end)
+ *	dcache_clean_poc(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoC.
@@ -186,7 +186,7 @@ SYM_FUNC_END(__dma_inv_area)
  *	- end     - virtual end address of region
  */
 SYM_FUNC_START_LOCAL(__dma_clean_area)
-SYM_FUNC_START_PI(__clean_dcache_area_poc)
+SYM_FUNC_START_PI(dcache_clean_poc)
 	/* FALLTHROUGH */
 
 /*
@@ -196,11 +196,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
  */
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__clean_dcache_area_poc)
+SYM_FUNC_END_PI(dcache_clean_poc)
 SYM_FUNC_END(__dma_clean_area)
 
 /*
- *	__clean_dcache_area_pop(start, end)
+ *	dcache_clean_pop(start, end)
  *
  * 	Ensure that any D-cache lines for the interval [start, end)
  * 	are cleaned to the PoP.
@@ -208,13 +208,13 @@ SYM_FUNC_END(__dma_clean_area)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(__clean_dcache_area_pop)
+SYM_FUNC_START_PI(dcache_clean_pop)
 	alternative_if_not ARM64_HAS_DCPOP
-	b	__clean_dcache_area_poc
+	b	dcache_clean_poc
 	alternative_else_nop_endif
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__clean_dcache_area_pop)
+SYM_FUNC_END_PI(dcache_clean_pop)
 
 /*
  *	__dma_flush_area(start, size)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 143f625e7727..5fea9a3f6663 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -17,14 +17,14 @@
 void sync_icache_aliases(unsigned long start, unsigned long end)
 {
 	if (icache_is_aliasing()) {
-		__clean_dcache_area_pou(start, end);
-		__flush_icache_all();
+		dcache_clean_pou(start, end);
+		icache_inval_all_pou();
 	} else {
 		/*
 		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
 		 * for user mappings.
 		 */
-		__flush_icache_range(start, end);
+		caches_clean_inval_pou(start, end);
 	}
 }
 
@@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
 /*
  * Additional functions defined in assembly.
  */
-EXPORT_SYMBOL(__flush_icache_range);
+EXPORT_SYMBOL(caches_clean_inval_pou);
 
 #ifdef CONFIG_ARCH_HAS_PMEM_API
 void arch_wb_cache_pmem(void *addr, size_t size)
 {
 	/* Ensure order against any prior non-cacheable writes */
 	dmb(osh);
-	__clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
+	dcache_clean_pop((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
 
 void arch_invalidate_pmem(void *addr, size_t size)
 {
-	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
+	dcache_inval_poc((unsigned long)addr, (unsigned long)addr + size);
 }
 EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
 #endif
-- 
2.31.1.751.gd2f1c929bd-goog



* Re: [PATCH v3 03/18] arm64: Apply errata to swsusp_arch_suspend_exit
  2021-05-20 12:43 ` [PATCH v3 03/18] arm64: Apply errata to swsusp_arch_suspend_exit Fuad Tabba
@ 2021-05-20 12:46   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 12:46 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:51PM +0100, Fuad Tabba wrote:
> The Arm errata covered by ARM64_WORKAROUND_CLEAN_CACHE require
> that "dc cvau" instructions get promoted to "dc civac".
> 
> Reported-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>
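
As a minimal sketch (assuming a core where the erratum alternative is
patched in), the clean loop in swsusp_arch_suspend_exit then effectively
runs:

| 2:	dc	civac, x4	/* promoted from "dc cvau": clean+invalidate */
| 	add	x4, x4, x2	/* step by the D-cache line size held in x2 */
| 	cmp	x4, x1
| 	b.lo	2b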

Mark.

> ---
>  arch/arm64/kernel/hibernate-asm.S | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
> index 8ccca660034e..0ed2f72a6b94 100644
> --- a/arch/arm64/kernel/hibernate-asm.S
> +++ b/arch/arm64/kernel/hibernate-asm.S
> @@ -91,7 +91,8 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
>  	raw_dcache_line_size x2, x3
>  	sub	x3, x2, #1
>  	bic	x4, x10, x3
> -2:	dc	cvau, x4	/* clean D line / unified line */
> +2:	/* clean D line / unified line */
> +alternative_insn "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
>  	add	x4, x4, x2
>  	cmp	x4, x1
>  	b.lo	2b
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 04/18] arm64: assembler: user_alt label optional
  2021-05-20 12:43 ` [PATCH v3 04/18] arm64: assembler: user_alt label optional Fuad Tabba
@ 2021-05-20 12:57   ` Mark Rutland
  2021-05-21 11:46     ` Fuad Tabba
  0 siblings, 1 reply; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 12:57 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:52PM +0100, Fuad Tabba wrote:
> Make the label for the extable entry in user_alt optional, only
> generating an extable entry if provided.
> 
> This is needed later in the series, to avoid instruction
> duplication in the assembly code.
> 
> While at it, clean up the label so that it is globally unique,
> using \@ as in other macros.

Nice; thanks for cleaning up the labels too!

> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/alternative-macros.h | 9 ++++++---
>  arch/arm64/mm/cache.S                       | 2 +-
>  2 files changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h
> index 8a078fc662ac..01ef954c9b2d 100644
> --- a/arch/arm64/include/asm/alternative-macros.h
> +++ b/arch/arm64/include/asm/alternative-macros.h
> @@ -197,9 +197,12 @@ alternative_endif
>  #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
>  	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
>  
> -.macro user_alt, label, oldinstr, newinstr, cond
> -9999:	alternative_insn "\oldinstr", "\newinstr", \cond
> -	_asm_extable 9999b, \label
> +.macro user_alt, oldinstr, newinstr, cond, label
> +.Lextable_\@:
> +	alternative_insn "\oldinstr", "\newinstr", \cond
> +	.ifnc \label,
> +	_asm_extable .Lextable_\@, \label
> +	.endif
>  .endm

We can use _cond_extable here to simplify this to:

| .macro user_alt, oldinstr, newinstr, cond, label
| .Lextable_\@:
| 	alternative_insn "\oldinstr", "\newinstr", \cond
| 	_cond_extable .Lextable_\@, \label
| .endm

However, since we only use user_alt in __flush_icache_range /
__flush_cache_user_range, I reckon it would be simpler overall to have
those use alternative_insn and _cond_extable directly. Then that would
align with the style of the *_by_line macros, and we could delete
user_alt.
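
For reference, _cond_extable only emits an extable entry when a fixup
label is actually supplied, roughly:

| 	.macro _cond_extable, insn, fixup
| 	.ifnc	\fixup,
| 	_asm_extable	\insn, \fixup
| 	.endif
| 	.endm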

Either way, this looks good, so:

Acked-by: Mark Rutland <mark.rutland@arm.com>

>  
>  #endif  /*  __ASSEMBLY__  */
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 2d881f34dd9d..5ff8dfa86975 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -47,7 +47,7 @@ alternative_else_nop_endif
>  	sub	x3, x2, #1
>  	bic	x4, x0, x3
>  1:
> -user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> +user_alt "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE, 9f
>  	add	x4, x4, x2
>  	cmp	x4, x1
>  	b.lo	1b
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range
  2021-05-20 12:43 ` [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
@ 2021-05-20 14:02   ` Mark Rutland
  2021-05-20 15:37     ` Mark Rutland
  2021-05-25 11:18   ` Catalin Marinas
  1 sibling, 1 reply; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 14:02 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:53PM +0100, Fuad Tabba wrote:
> __flush_icache_range works on the kernel linear map, and doesn't
> need uaccess. The existing code is a side-effect of its current
> implementation with __flush_cache_user_range fallthrough.
> 
> Instead of fallthrough to share the code, use a common macro for
> the two where the caller specifies an optional fixup label if
> user access is needed. If provided, this label would be used to
> generate an extable entry.
> 
> No functional change intended.
> Possible performance impact due to the reduced number of
> instructions.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>

I have one comment below, but either way this looks good to me, so:

Acked-by: Mark Rutland <mark.rutland@arm.com>

> ---
>  arch/arm64/mm/cache.S | 64 +++++++++++++++++++++++++++----------------
>  1 file changed, 41 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 5ff8dfa86975..c6bc3b8138e1 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -14,6 +14,41 @@
>  #include <asm/alternative.h>
>  #include <asm/asm-uaccess.h>
>  
> +/*
> + *	__flush_cache_range(start,end) [fixup]
> + *
> + *	Ensure that the I and D caches are coherent within specified region.
> + *	This is typically used when code has been written to a memory region,
> + *	and will be executed.
> + *
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
> + *	- fixup   - optional label to branch to on user fault
> + */
> +.macro	__flush_cache_range, fixup
> +alternative_if ARM64_HAS_CACHE_IDC
> +	dsb	ishst
> +	b	.Ldc_skip_\@
> +alternative_else_nop_endif
> +	dcache_line_size x2, x3
> +	sub	x3, x2, #1
> +	bic	x4, x0, x3
> +.Ldc_loop_\@:
> +user_alt "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE, \fixup
> +	add	x4, x4, x2
> +	cmp	x4, x1
> +	b.lo	.Ldc_loop_\@
> +	dsb	ish

As on the prior patch, I reckon it'd be nicer overall to align with the
*by_line macros and have an explicit _cond_extable here, e.g.

| .Ldc_op\@:
| 	alternative_insn "dc cvau, x4",  "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE
| 	add	x4, x4, x2
| 	cmp     x4, x1
| 	b.lo	.Ldc_op\@
| 	dsb	ish
| ...
| 	// just before the .endm
| 	_cond_extable .Ldc_op\@, \fixup

... and with some rework it might be possible to use dcache_by_line_op
directly here (it currently clobbers the base and end, so can't be used
as-is).

Thanks,
Mark.

> +
> +.Ldc_skip_\@:
> +alternative_if ARM64_HAS_CACHE_DIC
> +	isb
> +	b	.Lic_skip_\@
> +alternative_else_nop_endif
> +	invalidate_icache_by_line x0, x1, x2, x3, \fixup
> +.Lic_skip_\@:
> +.endm
> +
>  /*
>   *	flush_icache_range(start,end)
>   *
> @@ -25,7 +60,9 @@
>   *	- end     - virtual end address of region
>   */
>  SYM_FUNC_START(__flush_icache_range)
> -	/* FALLTHROUGH */
> +	__flush_cache_range
> +	ret
> +SYM_FUNC_END(__flush_icache_range)
>  
>  /*
>   *	__flush_cache_user_range(start,end)
> @@ -39,34 +76,15 @@ SYM_FUNC_START(__flush_icache_range)
>   */
>  SYM_FUNC_START(__flush_cache_user_range)
>  	uaccess_ttbr0_enable x2, x3, x4
> -alternative_if ARM64_HAS_CACHE_IDC
> -	dsb	ishst
> -	b	7f
> -alternative_else_nop_endif
> -	dcache_line_size x2, x3
> -	sub	x3, x2, #1
> -	bic	x4, x0, x3
> -1:
> -user_alt "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE, 9f
> -	add	x4, x4, x2
> -	cmp	x4, x1
> -	b.lo	1b
> -	dsb	ish
>  
> -7:
> -alternative_if ARM64_HAS_CACHE_DIC
> -	isb
> -	b	8f
> -alternative_else_nop_endif
> -	invalidate_icache_by_line x0, x1, x2, x3, 9f
> -8:	mov	x0, #0
> +	__flush_cache_range 2f
> +	mov	x0, xzr
>  1:
>  	uaccess_ttbr0_disable x1, x2
>  	ret
> -9:
> +2:
>  	mov	x0, #-EFAULT
>  	b	1b
> -SYM_FUNC_END(__flush_icache_range)
>  SYM_FUNC_END(__flush_cache_user_range)
>  
>  /*
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 06/18] arm64: Do not enable uaccess for invalidate_icache_range
  2021-05-20 12:43 ` [PATCH v3 06/18] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
@ 2021-05-20 14:13   ` Mark Rutland
  2021-05-25 11:18   ` Catalin Marinas
  1 sibling, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 14:13 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:54PM +0100, Fuad Tabba wrote:
> invalidate_icache_range() works on the kernel linear map, and

Minor nit: this works on kernel addresses generally (e.g. vmalloc), so
we could say "kernel addresses" rather than "the kernel linear map".

> doesn't need uaccess. Remove the code that toggles
> uaccess_ttbr0_enable, as well as the code that emits an entry
> into the exception table (via the macro
> invalidate_icache_by_line).
> 
> Changes return type of invalidate_icache_range() from int (which
> used to indicate a fault) to void, since it doesn't need uaccess
> and won't fault. Note that return value was never checked by any
> of the callers.
> 
> No functional change intended.
> Possible performance impact due to the reduced number of
> instructions.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/cacheflush.h |  2 +-
>  arch/arm64/mm/cache.S               | 11 +----------
>  2 files changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 52e5c1623224..a586afa84172 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -57,7 +57,7 @@
>   *		- size   - region size
>   */
>  extern void __flush_icache_range(unsigned long start, unsigned long end);
> -extern int  invalidate_icache_range(unsigned long start, unsigned long end);
> +extern void invalidate_icache_range(unsigned long start, unsigned long end);
>  extern void __flush_dcache_area(void *addr, size_t len);
>  extern void __inval_dcache_area(void *addr, size_t len);
>  extern void __clean_dcache_area_poc(void *addr, size_t len);
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index c6bc3b8138e1..7318a40dd6ca 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -97,21 +97,12 @@ SYM_FUNC_END(__flush_cache_user_range)
>   */
>  SYM_FUNC_START(invalidate_icache_range)
>  alternative_if ARM64_HAS_CACHE_DIC
> -	mov	x0, xzr
>  	isb
>  	ret
>  alternative_else_nop_endif
>  
> -	uaccess_ttbr0_enable x2, x3, x4
> -
> -	invalidate_icache_by_line x0, x1, x2, x3, 2f
> -	mov	x0, xzr
> -1:
> -	uaccess_ttbr0_disable x1, x2
> +	invalidate_icache_by_line x0, x1, x2, x3
>  	ret
> -2:
> -	mov	x0, #-EFAULT
> -	b	1b
>  SYM_FUNC_END(invalidate_icache_range)
>  
>  /*
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 07/18] arm64: Downgrade flush_icache_range to invalidate
  2021-05-20 12:43 ` [PATCH v3 07/18] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
@ 2021-05-20 14:15   ` Mark Rutland
  2021-05-25 11:18   ` Catalin Marinas
  1 sibling, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 14:15 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:55PM +0100, Fuad Tabba wrote:
> Since __flush_dcache_area is called right before,
> invalidate_icache_range is sufficient in this case.
> 
> Rewrite the comment to better explain the rationale behind the
> cache maintenance operations used here.
> 
> No functional change intended.
> Possible performance impact due to invalidating only the icache
> rather than invalidating and cleaning both caches.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>
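
As a sketch, the resulting pattern for code that will run with the MMU
off is (with the signatures as they stand at this point in the series):

	/* write the instructions back to the PoC... */
	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
	/* ...then discard any stale I-side lines for the same range */
	invalidate_icache_range((uintptr_t)reloc_code,
				(uintptr_t)reloc_code +
					arm64_relocate_new_kernel_size);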

Mark.

> ---
>  arch/arm64/kernel/machine_kexec.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> index 90a335c74442..a03944fd0cd4 100644
> --- a/arch/arm64/kernel/machine_kexec.c
> +++ b/arch/arm64/kernel/machine_kexec.c
> @@ -68,10 +68,14 @@ int machine_kexec_post_load(struct kimage *kimage)
>  	kimage->arch.kern_reloc = __pa(reloc_code);
>  	kexec_image_info(kimage);
>  
> -	/* Flush the reloc_code in preparation for its execution. */
> +	/*
> +	 * For execution with the MMU off, reloc_code needs to be cleaned to the
> +	 * PoC and invalidated from the I-cache.
> +	 */
>  	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
> -	flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
> -			   arm64_relocate_new_kernel_size);
> +	invalidate_icache_range((uintptr_t)reloc_code,
> +				(uintptr_t)reloc_code +
> +					arm64_relocate_new_kernel_size);
>  
>  	return 0;
>  }
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 08/18] arm64: Move documentation of dcache_by_line_op
  2021-05-20 12:43 ` [PATCH v3 08/18] arm64: Move documentation of dcache_by_line_op Fuad Tabba
@ 2021-05-20 14:17   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 14:17 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:56PM +0100, Fuad Tabba wrote:
> The comment describing the macro dcache_by_line_op is placed
> right before the previous macro of the one it describes, which is
> a bit confusing. Move it to the macro it describes (dcache_by_line_op).
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/assembler.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 0a276b46ef50..ced791124b28 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -387,6 +387,14 @@ alternative_cb_end
>  	bfi	\tcr, \tmp0, \pos, #3
>  	.endm
>  
> +	.macro __dcache_op_workaround_clean_cache, op, addr
> +alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
> +	dc	\op, \addr
> +alternative_else
> +	dc	civac, \addr
> +alternative_endif
> +	.endm
> +
>  /*
>   * Macro to perform a data cache maintenance for the interval
>   * [addr, addr + size)
> @@ -398,14 +406,6 @@ alternative_cb_end
>   * 	fixup:		optional label to branch to on user fault
>   * 	Corrupts:	addr, size, tmp1, tmp2
>   */
> -	.macro __dcache_op_workaround_clean_cache, op, addr
> -alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
> -	dc	\op, \addr
> -alternative_else
> -	dc	civac, \addr
> -alternative_endif
> -	.endm
> -
>  	.macro dcache_by_line_op op, domain, addr, size, tmp1, tmp2, fixup
>  	dcache_line_size \tmp1, \tmp2
>  	add	\size, \addr, \size
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 09/18] arm64: Fix comments to refer to correct function __flush_icache_range
  2021-05-20 12:43 ` [PATCH v3 09/18] arm64: Fix comments to refer to correct function __flush_icache_range Fuad Tabba
@ 2021-05-20 14:18   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 14:18 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:57PM +0100, Fuad Tabba wrote:
> Many comments refer to the function flush_icache_range, where the
> intent is in fact __flush_icache_range. Fix these comments to
> refer to the intended function.
> 
> That's probably due to commit 3b8c9f1cdfc506e9 ("arm64: IPI each
> CPU after invalidating the I-cache for kernel mappings"), which
> renamed flush_icache_range() to __flush_icache_range() and added
> a wrapper.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/kernel/hibernate-asm.S | 4 ++--
>  arch/arm64/mm/cache.S             | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
> index 0ed2f72a6b94..ef2ab7caf815 100644
> --- a/arch/arm64/kernel/hibernate-asm.S
> +++ b/arch/arm64/kernel/hibernate-asm.S
> @@ -45,7 +45,7 @@
>   * Because this code has to be copied to a 'safe' page, it can't call out to
>   * other functions by PC-relative address. Also remember that it may be
>   * mid-way through over-writing other functions. For this reason it contains
> - * code from flush_icache_range() and uses the copy_page() macro.
> + * code from __flush_icache_range() and uses the copy_page() macro.
>   *
>   * This 'safe' page is mapped via ttbr0, and executed from there. This function
>   * switches to a copy of the linear map in ttbr1, performs the restore, then
> @@ -87,7 +87,7 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
>  	copy_page	x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
>  
>  	add	x1, x10, #PAGE_SIZE
> -	/* Clean the copied page to PoU - based on flush_icache_range() */
> +	/* Clean the copied page to PoU - based on __flush_icache_range() */
>  	raw_dcache_line_size x2, x3
>  	sub	x3, x2, #1
>  	bic	x4, x10, x3
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 7318a40dd6ca..80da4b8718b6 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -50,7 +50,7 @@ alternative_else_nop_endif
>  .endm
>  
>  /*
> - *	flush_icache_range(start,end)
> + *	__flush_icache_range(start,end)
>   *
>   *	Ensure that the I and D caches are coherent within specified region.
>   *	This is typically used when code has been written to a memory region,
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range
  2021-05-20 14:02   ` Mark Rutland
@ 2021-05-20 15:37     ` Mark Rutland
  2021-05-21 12:18       ` Mark Rutland
  0 siblings, 1 reply; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 15:37 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 03:02:16PM +0100, Mark Rutland wrote:
> On Thu, May 20, 2021 at 01:43:53PM +0100, Fuad Tabba wrote:
> > __flush_icache_range works on the kernel linear map, and doesn't
> > need uaccess. The existing code is a side-effect of its current
> > implementation with __flush_cache_user_range fallthrough.
> > 
> > Instead of fallthrough to share the code, use a common macro for
> > the two where the caller specifies an optional fixup label if
> > user access is needed. If provided, this label would be used to
> > generate an extable entry.
> > 
> > No functional change intended.
> > Possible performance impact due to the reduced number of
> > instructions.
> > 
> > Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: Will Deacon <will@kernel.org>
> > Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> 
> I have one comment below, but either way this looks good to me, so:
> 
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> 
> > ---
> >  arch/arm64/mm/cache.S | 64 +++++++++++++++++++++++++++----------------
> >  1 file changed, 41 insertions(+), 23 deletions(-)
> > 
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 5ff8dfa86975..c6bc3b8138e1 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -14,6 +14,41 @@
> >  #include <asm/alternative.h>
> >  #include <asm/asm-uaccess.h>
> >  
> > +/*
> > + *	__flush_cache_range(start,end) [fixup]
> > + *
> > + *	Ensure that the I and D caches are coherent within specified region.
> > + *	This is typically used when code has been written to a memory region,
> > + *	and will be executed.
> > + *
> > + *	- start   - virtual start address of region
> > + *	- end     - virtual end address of region
> > + *	- fixup   - optional label to branch to on user fault
> > + */
> > +.macro	__flush_cache_range, fixup
> > +alternative_if ARM64_HAS_CACHE_IDC
> > +	dsb	ishst
> > +	b	.Ldc_skip_\@
> > +alternative_else_nop_endif
> > +	dcache_line_size x2, x3
> > +	sub	x3, x2, #1
> > +	bic	x4, x0, x3
> > +.Ldc_loop_\@:
> > +user_alt "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE, \fixup
> > +	add	x4, x4, x2
> > +	cmp	x4, x1
> > +	b.lo	.Ldc_loop_\@
> > +	dsb	ish
> 
> As on the prior patch, I reckon it'd be nicer overall to align with the
> *by_line macros and have an explicit _cond_extable here, e.g.
> 
> | .Ldc_op\@:
> | 	alternative_insn "dc cvau, x4",  "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE
> | 	add	x4, x4, x2
> | 	cmp     x4, x1
> | 	b.lo	.Ldc_op\@
> | 	dsb	ish
> | ...
> | 	// just before the .endm
> | 	_cond_extable .Ldc_op\@, \fixup
> 
> ... and with some rework it might be possible to use dcache_by_line_op
> directly here (it currently clobbers the base and end, so can't be used
> as-is).

Having thought about this a bit more, it's simple enough to do that now:

| alternative_if ARM64_HAS_CACHE_IDC
| 	dsb	ishst
| 	b	.Ldc_skip_\@
| alternative_else_nop_endif
| 	mov	x2, x0
| 	add	x3, x0, x1
| 	dcache_by_line_op cvau, ish, x2, x3, x4, x5, \fixup
| .Ldc_skip_\@:

... and then we just need to change the ADD to a MOV when we change the
macro to take the end in x1.

Note that dcache_by_line_op will automatically upgrade 'cvau' to 'civac'
when ARM64_WORKAROUND_CLEAN_CACHE is present, so the resulting logic is
the same.

Thanks,
Mark.


* Re: [PATCH v3 10/18] arm64: __inval_dcache_area to take end parameter instead of size
  2021-05-20 12:43 ` [PATCH v3 10/18] arm64: __inval_dcache_area to take end parameter instead of size Fuad Tabba
@ 2021-05-20 15:46   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 15:46 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:58PM +0100, Fuad Tabba wrote:
> To be consistent with other functions with similar names and
> functionality in cacheflush.h, cache.S, and cachetlb.rst, change
> to specify the range in terms of start and end, as opposed to
> start and size.
> 
> Because the code is shared with __dma_inv_area, it changes the
> parameters for that as well. However, __dma_inv_area is local to
> cache.S, so no other users are affected.
> 
> No functional change intended.
> 
> Reported-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>

All the conversions below look correct to me, and judging by a grep of
the kernel tree there are no stale callers. I see the ADD->SUB dance in
__dma_map_area() will be undone in a subsequent patch. So:

Acked-by: Mark Rutland <mark.rutland@arm.com>
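
As a usage sketch, callers now pass an exclusive end address rather than
a length, e.g. the arch_invalidate_pmem() hunk below becomes:

	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);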

Mark.

> ---
>  arch/arm64/include/asm/cacheflush.h |  2 +-
>  arch/arm64/kernel/head.S            |  5 +----
>  arch/arm64/mm/cache.S               | 16 +++++++++-------
>  arch/arm64/mm/flush.c               |  2 +-
>  4 files changed, 12 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index a586afa84172..157234706817 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -59,7 +59,7 @@
>  extern void __flush_icache_range(unsigned long start, unsigned long end);
>  extern void invalidate_icache_range(unsigned long start, unsigned long end);
>  extern void __flush_dcache_area(void *addr, size_t len);
> -extern void __inval_dcache_area(void *addr, size_t len);
> +extern void __inval_dcache_area(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_poc(void *addr, size_t len);
>  extern void __clean_dcache_area_pop(void *addr, size_t len);
>  extern void __clean_dcache_area_pou(void *addr, size_t len);
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 96873dfa67fd..8df0ac8d9123 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -117,7 +117,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
>  	dmb	sy				// needed before dc ivac with
>  						// MMU off
>  
> -	mov	x1, #0x20			// 4 x 8 bytes
> +	add	x1, x0, #0x20			// 4 x 8 bytes
>  	b	__inval_dcache_area		// tail call
>  SYM_CODE_END(preserve_boot_args)
>  
> @@ -268,7 +268,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
>  	 */
>  	adrp	x0, init_pg_dir
>  	adrp	x1, init_pg_end
> -	sub	x1, x1, x0
>  	bl	__inval_dcache_area
>  
>  	/*
> @@ -382,12 +381,10 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
>  
>  	adrp	x0, idmap_pg_dir
>  	adrp	x1, idmap_pg_end
> -	sub	x1, x1, x0
>  	bl	__inval_dcache_area
>  
>  	adrp	x0, init_pg_dir
>  	adrp	x1, init_pg_end
> -	sub	x1, x1, x0
>  	bl	__inval_dcache_area
>  
>  	ret	x28
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 80da4b8718b6..5170d9ab450a 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -138,25 +138,24 @@ alternative_else_nop_endif
>  SYM_FUNC_END(__clean_dcache_area_pou)
>  
>  /*
> - *	__inval_dcache_area(kaddr, size)
> + *	__inval_dcache_area(start, end)
>   *
> - * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> + * 	Ensure that any D-cache lines for the interval [start, end)
>   * 	are invalidated. Any partial lines at the ends of the interval are
>   *	also cleaned to PoC to prevent data loss.
>   *
> - *	- kaddr   - kernel address
> - *	- size    - size in question
> + *	- start   - kernel start address of region
> + *	- end     - kernel end address of region
>   */
>  SYM_FUNC_START_LOCAL(__dma_inv_area)
>  SYM_FUNC_START_PI(__inval_dcache_area)
>  	/* FALLTHROUGH */
>  
>  /*
> - *	__dma_inv_area(start, size)
> + *	__dma_inv_area(start, end)
>   *	- start   - virtual start address of region
> - *	- size    - size in question
> + *	- end     - virtual end address of region
>   */
> -	add	x1, x1, x0
>  	dcache_line_size x2, x3
>  	sub	x3, x2, #1
>  	tst	x1, x3				// end cache line aligned?
> @@ -237,8 +236,10 @@ SYM_FUNC_END_PI(__dma_flush_area)
>   *	- dir	- DMA direction
>   */
>  SYM_FUNC_START_PI(__dma_map_area)
> +	add	x1, x0, x1
>  	cmp	w2, #DMA_FROM_DEVICE
>  	b.eq	__dma_inv_area
> +	sub	x1, x1, x0
>  	b	__dma_clean_area
>  SYM_FUNC_END_PI(__dma_map_area)
>  
> @@ -249,6 +250,7 @@ SYM_FUNC_END_PI(__dma_map_area)
>   *	- dir	- DMA direction
>   */
>  SYM_FUNC_START_PI(__dma_unmap_area)
> +	add	x1, x0, x1
>  	cmp	w2, #DMA_TO_DEVICE
>  	b.ne	__dma_inv_area
>  	ret
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index ac485163a4a7..4e3505c2bea6 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -88,7 +88,7 @@ EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
>  
>  void arch_invalidate_pmem(void *addr, size_t size)
>  {
> -	__inval_dcache_area(addr, size);
> +	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
>  }
>  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
>  #endif
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 11/18] arm64: dcache_by_line_op to take end parameter instead of size
  2021-05-20 12:43 ` [PATCH v3 11/18] arm64: dcache_by_line_op " Fuad Tabba
@ 2021-05-20 15:48   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 15:48 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:59PM +0100, Fuad Tabba wrote:
> To be consistent with other functions with similar names and
> functionality in cacheflush.h, cache.S, and cachetlb.rst, change
> to specify the range in terms of start and end, as opposed to
> start and size.
> 
> No functional change intended.
> 
> Reported-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>
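
Call sites that still have a (start, size) pair just compute the end up
front, as in the hunks below:

| 	add	x1, x0, x1		// end = start + size
| 	dcache_by_line_op civac, sy, x0, x1, x2, x3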

Mark.

> ---
>  arch/arm64/include/asm/assembler.h | 27 +++++++++++++--------------
>  arch/arm64/kvm/hyp/nvhe/cache.S    |  1 +
>  arch/arm64/mm/cache.S              |  5 +++++
>  3 files changed, 19 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index ced791124b28..c4cecf85dccf 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -397,40 +397,39 @@ alternative_endif
>  
>  /*
>   * Macro to perform a data cache maintenance for the interval
> - * [addr, addr + size)
> + * [start, end)
>   *
>   * 	op:		operation passed to dc instruction
> >   * 	domain:		domain used in dsb instruction
> - * 	addr:		starting virtual address of the region
> - * 	size:		size of the region
> + * 	start:          starting virtual address of the region
> + * 	end:            end virtual address of the region
>   * 	fixup:		optional label to branch to on user fault
> - * 	Corrupts:	addr, size, tmp1, tmp2
> + * 	Corrupts:       start, end, tmp1, tmp2
>   */
> -	.macro dcache_by_line_op op, domain, addr, size, tmp1, tmp2, fixup
> +	.macro dcache_by_line_op op, domain, start, end, tmp1, tmp2, fixup
>  	dcache_line_size \tmp1, \tmp2
> -	add	\size, \addr, \size
>  	sub	\tmp2, \tmp1, #1
> -	bic	\addr, \addr, \tmp2
> +	bic	\start, \start, \tmp2
>  .Ldcache_op\@:
>  	.ifc	\op, cvau
> -	__dcache_op_workaround_clean_cache \op, \addr
> +	__dcache_op_workaround_clean_cache \op, \start
>  	.else
>  	.ifc	\op, cvac
> -	__dcache_op_workaround_clean_cache \op, \addr
> +	__dcache_op_workaround_clean_cache \op, \start
>  	.else
>  	.ifc	\op, cvap
> -	sys	3, c7, c12, 1, \addr	// dc cvap
> +	sys	3, c7, c12, 1, \start	// dc cvap
>  	.else
>  	.ifc	\op, cvadp
> -	sys	3, c7, c13, 1, \addr	// dc cvadp
> +	sys	3, c7, c13, 1, \start	// dc cvadp
>  	.else
> -	dc	\op, \addr
> +	dc	\op, \start
>  	.endif
>  	.endif
>  	.endif
>  	.endif
> -	add	\addr, \addr, \tmp1
> -	cmp	\addr, \size
> +	add	\start, \start, \tmp1
> +	cmp	\start, \end
>  	b.lo	.Ldcache_op\@
>  	dsb	\domain
>  
> diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
> index 36cef6915428..3bcfa3cac46f 100644
> --- a/arch/arm64/kvm/hyp/nvhe/cache.S
> +++ b/arch/arm64/kvm/hyp/nvhe/cache.S
> @@ -8,6 +8,7 @@
>  #include <asm/alternative.h>
>  
>  SYM_FUNC_START_PI(__flush_dcache_area)
> +	add	x1, x0, x1
>  	dcache_by_line_op civac, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__flush_dcache_area)
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 5170d9ab450a..3b5461a32b85 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -115,6 +115,7 @@ SYM_FUNC_END(invalidate_icache_range)
>   *	- size    - size in question
>   */
>  SYM_FUNC_START_PI(__flush_dcache_area)
> +	add	x1, x0, x1
>  	dcache_by_line_op civac, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__flush_dcache_area)
> @@ -133,6 +134,7 @@ alternative_if ARM64_HAS_CACHE_IDC
>  	dsb	ishst
>  	ret
>  alternative_else_nop_endif
> +	add	x1, x0, x1
>  	dcache_by_line_op cvau, ish, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END(__clean_dcache_area_pou)
> @@ -194,6 +196,7 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
>   *	- start   - virtual start address of region
>   *	- size    - size in question
>   */
> +	add	x1, x0, x1
>  	dcache_by_line_op cvac, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__clean_dcache_area_poc)
> @@ -212,6 +215,7 @@ SYM_FUNC_START_PI(__clean_dcache_area_pop)
>  	alternative_if_not ARM64_HAS_DCPOP
>  	b	__clean_dcache_area_poc
>  	alternative_else_nop_endif
> +	add	x1, x0, x1
>  	dcache_by_line_op cvap, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__clean_dcache_area_pop)
> @@ -225,6 +229,7 @@ SYM_FUNC_END_PI(__clean_dcache_area_pop)
>   *	- size    - size in question
>   */
>  SYM_FUNC_START_PI(__dma_flush_area)
> +	add	x1, x0, x1
>  	dcache_by_line_op civac, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__dma_flush_area)
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 12/18] arm64: __flush_dcache_area to take end parameter instead of size
  2021-05-20 12:44 ` [PATCH v3 12/18] arm64: __flush_dcache_area " Fuad Tabba
@ 2021-05-20 16:06   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 16:06 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:44:00PM +0100, Fuad Tabba wrote:
> To be consistent with other functions with similar names and
> functionality in cacheflush.h, cache.S, and cachetlb.rst, change
> to specify the range in terms of start and end, as opposed to
> start and size.
> 
> No functional change intended.
> 
> Reported-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>
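
Where a (pointer, size) pair is more natural at the call site, a small
wrapper keeps things terse, as kvm_flush_dcache_to_poc() below does. A
hypothetical C helper along the same lines:

	static inline void clean_inval_dcache_poc(void *addr, size_t len)
	{
		__flush_dcache_area((unsigned long)addr,
				    (unsigned long)addr + len);
	}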

Mark.

> ---
>  arch/arm64/include/asm/arch_gicv3.h |  3 ++-
>  arch/arm64/include/asm/cacheflush.h |  8 ++++----
>  arch/arm64/include/asm/efi.h        |  2 +-
>  arch/arm64/include/asm/kvm_mmu.h    |  3 ++-
>  arch/arm64/kernel/hibernate.c       | 18 +++++++++++-------
>  arch/arm64/kernel/idreg-override.c  |  3 ++-
>  arch/arm64/kernel/kaslr.c           | 12 +++++++++---
>  arch/arm64/kernel/machine_kexec.c   | 20 +++++++++++++-------
>  arch/arm64/kernel/smp.c             |  8 ++++++--
>  arch/arm64/kernel/smp_spin_table.c  |  7 ++++---
>  arch/arm64/kvm/hyp/nvhe/cache.S     |  1 -
>  arch/arm64/kvm/hyp/nvhe/setup.c     |  3 ++-
>  arch/arm64/kvm/hyp/pgtable.c        | 13 ++++++++++---
>  arch/arm64/mm/cache.S               |  9 ++++-----
>  14 files changed, 70 insertions(+), 40 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
> index 934b9be582d2..ed1cc9d8e6df 100644
> --- a/arch/arm64/include/asm/arch_gicv3.h
> +++ b/arch/arm64/include/asm/arch_gicv3.h
> @@ -124,7 +124,8 @@ static inline u32 gic_read_rpr(void)
>  #define gic_read_lpir(c)		readq_relaxed(c)
>  #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
>  
> -#define gic_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
> +#define gic_flush_dcache_to_poc(a,l)	\
> +	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
>  
>  #define gits_read_baser(c)		readq_relaxed(c)
>  #define gits_write_baser(v, c)		writeq_relaxed(v, c)
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 157234706817..695f88864784 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -50,15 +50,15 @@
>   *		- start  - virtual start address
>   *		- end    - virtual end address
>   *
> - *	__flush_dcache_area(kaddr, size)
> + *	__flush_dcache_area(start, end)
>   *
>   *		Ensure that the data held in page is written back.
> - *		- kaddr  - page address
> - *		- size   - region size
> + *		- start  - virtual start address
> + *		- end    - virtual end address
>   */
>  extern void __flush_icache_range(unsigned long start, unsigned long end);
>  extern void invalidate_icache_range(unsigned long start, unsigned long end);
> -extern void __flush_dcache_area(void *addr, size_t len);
> +extern void __flush_dcache_area(unsigned long start, unsigned long end);
>  extern void __inval_dcache_area(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_poc(void *addr, size_t len);
>  extern void __clean_dcache_area_pop(void *addr, size_t len);
> diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
> index 3578aba9c608..0ae2397076fd 100644
> --- a/arch/arm64/include/asm/efi.h
> +++ b/arch/arm64/include/asm/efi.h
> @@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
>  
>  static inline void efi_capsule_flush_cache_range(void *addr, int size)
>  {
> -	__flush_dcache_area(addr, size);
> +	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
>  }
>  
>  #endif /* _ASM_EFI_H */
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 25ed956f9af1..33293d5855af 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -180,7 +180,8 @@ static inline void *__kvm_vector_slot2addr(void *base,
>  
>  struct kvm;
>  
> -#define kvm_flush_dcache_to_poc(a,l)	__flush_dcache_area((a), (l))
> +#define kvm_flush_dcache_to_poc(a,l)	\
> +	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
>  
>  static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
>  {
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index b1cef371df2b..b40ddce71507 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -240,8 +240,6 @@ static int create_safe_exec_page(void *src_start, size_t length,
>  	return 0;
>  }
>  
> -#define dcache_clean_range(start, end)	__flush_dcache_area(start, (end - start))
> -
>  #ifdef CONFIG_ARM64_MTE
>  
>  static DEFINE_XARRAY(mte_pages);
> @@ -383,13 +381,18 @@ int swsusp_arch_suspend(void)
>  		ret = swsusp_save();
>  	} else {
>  		/* Clean kernel core startup/idle code to PoC*/
> -		dcache_clean_range(__mmuoff_data_start, __mmuoff_data_end);
> -		dcache_clean_range(__idmap_text_start, __idmap_text_end);
> +		__flush_dcache_area((unsigned long)__mmuoff_data_start,
> +				    (unsigned long)__mmuoff_data_end);
> +		__flush_dcache_area((unsigned long)__idmap_text_start,
> +				    (unsigned long)__idmap_text_end);
>  
>  		/* Clean kvm setup code to PoC? */
>  		if (el2_reset_needed()) {
> -			dcache_clean_range(__hyp_idmap_text_start, __hyp_idmap_text_end);
> -			dcache_clean_range(__hyp_text_start, __hyp_text_end);
> +			__flush_dcache_area(
> +				(unsigned long)__hyp_idmap_text_start,
> +				(unsigned long)__hyp_idmap_text_end);
> +			__flush_dcache_area((unsigned long)__hyp_text_start,
> +					    (unsigned long)__hyp_text_end);
>  		}
>  
>  		swsusp_mte_restore_tags();
> @@ -474,7 +477,8 @@ int swsusp_arch_resume(void)
>  	 * The hibernate exit text contains a set of el2 vectors, that will
>  	 * be executed at el2 with the mmu off in order to reload hyp-stub.
>  	 */
> -	__flush_dcache_area(hibernate_exit, exit_size);
> +	__flush_dcache_area((unsigned long)hibernate_exit,
> +			    (unsigned long)hibernate_exit + exit_size);
>  
>  	/*
>  	 * KASLR will cause the el2 vectors to be in a different location in
> diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
> index e628c8ce1ffe..3dd515baf526 100644
> --- a/arch/arm64/kernel/idreg-override.c
> +++ b/arch/arm64/kernel/idreg-override.c
> @@ -237,7 +237,8 @@ asmlinkage void __init init_feature_override(void)
>  
>  	for (i = 0; i < ARRAY_SIZE(regs); i++) {
>  		if (regs[i]->override)
> -			__flush_dcache_area(regs[i]->override,
> +			__flush_dcache_area((unsigned long)regs[i]->override,
> +					    (unsigned long)regs[i]->override +
>  					    sizeof(*regs[i]->override));
>  	}
>  }
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 341342b207f6..49cccd03cb37 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -72,7 +72,9 @@ u64 __init kaslr_early_init(void)
>  	 * we end up running with module randomization disabled.
>  	 */
>  	module_alloc_base = (u64)_etext - MODULES_VSIZE;
> -	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
> +	__flush_dcache_area((unsigned long)&module_alloc_base,
> +			    (unsigned long)&module_alloc_base +
> +				    sizeof(module_alloc_base));
>  
>  	/*
>  	 * Try to map the FDT early. If this fails, we simply bail,
> @@ -170,8 +172,12 @@ u64 __init kaslr_early_init(void)
>  	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
>  	module_alloc_base &= PAGE_MASK;
>  
> -	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));
> -	__flush_dcache_area(&memstart_offset_seed, sizeof(memstart_offset_seed));
> +	__flush_dcache_area((unsigned long)&module_alloc_base,
> +			    (unsigned long)&module_alloc_base +
> +				    sizeof(module_alloc_base));
> +	__flush_dcache_area((unsigned long)&memstart_offset_seed,
> +			    (unsigned long)&memstart_offset_seed +
> +				    sizeof(memstart_offset_seed));
>  
>  	return offset;
>  }
> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> index a03944fd0cd4..3e79110c8f3a 100644
> --- a/arch/arm64/kernel/machine_kexec.c
> +++ b/arch/arm64/kernel/machine_kexec.c
> @@ -72,7 +72,9 @@ int machine_kexec_post_load(struct kimage *kimage)
>  	 * For execution with the MMU off, reloc_code needs to be cleaned to the
>  	 * PoC and invalidated from the I-cache.
>  	 */
> -	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
> +	__flush_dcache_area((unsigned long)reloc_code,
> +			    (unsigned long)reloc_code +
> +				    arm64_relocate_new_kernel_size);
>  	invalidate_icache_range((uintptr_t)reloc_code,
>  				(uintptr_t)reloc_code +
>  					arm64_relocate_new_kernel_size);
> @@ -106,16 +108,18 @@ static void kexec_list_flush(struct kimage *kimage)
>  
>  	for (entry = &kimage->head; ; entry++) {
>  		unsigned int flag;
> -		void *addr;
> +		unsigned long addr;
>  
>  		/* flush the list entries. */
> -		__flush_dcache_area(entry, sizeof(kimage_entry_t));
> +		__flush_dcache_area((unsigned long)entry,
> +				    (unsigned long)entry +
> +					    sizeof(kimage_entry_t));
>  
>  		flag = *entry & IND_FLAGS;
>  		if (flag == IND_DONE)
>  			break;
>  
> -		addr = phys_to_virt(*entry & PAGE_MASK);
> +		addr = (unsigned long)phys_to_virt(*entry & PAGE_MASK);
>  
>  		switch (flag) {
>  		case IND_INDIRECTION:
> @@ -124,7 +128,7 @@ static void kexec_list_flush(struct kimage *kimage)
>  			break;
>  		case IND_SOURCE:
>  			/* flush the source pages. */
> -			__flush_dcache_area(addr, PAGE_SIZE);
> +			__flush_dcache_area(addr, addr + PAGE_SIZE);
>  			break;
>  		case IND_DESTINATION:
>  			break;
> @@ -151,8 +155,10 @@ static void kexec_segment_flush(const struct kimage *kimage)
>  			kimage->segment[i].memsz,
>  			kimage->segment[i].memsz /  PAGE_SIZE);
>  
> -		__flush_dcache_area(phys_to_virt(kimage->segment[i].mem),
> -			kimage->segment[i].memsz);
> +		__flush_dcache_area(
> +			(unsigned long)phys_to_virt(kimage->segment[i].mem),
> +			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
> +				kimage->segment[i].memsz);
>  	}
>  }
>  
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index dcd7041b2b07..5fcdee331087 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -122,7 +122,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
>  	secondary_data.task = idle;
>  	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
>  	update_cpu_boot_status(CPU_MMU_OFF);
> -	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
> +	__flush_dcache_area((unsigned long)&secondary_data,
> +			    (unsigned long)&secondary_data +
> +				    sizeof(secondary_data));
>  
>  	/* Now bring the CPU into our world */
>  	ret = boot_secondary(cpu, idle);
> @@ -143,7 +145,9 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
>  	pr_crit("CPU%u: failed to come online\n", cpu);
>  	secondary_data.task = NULL;
>  	secondary_data.stack = NULL;
> -	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
> +	__flush_dcache_area((unsigned long)&secondary_data,
> +			    (unsigned long)&secondary_data +
> +				    sizeof(secondary_data));
>  	status = READ_ONCE(secondary_data.status);
>  	if (status == CPU_MMU_OFF)
>  		status = READ_ONCE(__early_cpu_boot_status);
> diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> index c45a83512805..58d804582a35 100644
> --- a/arch/arm64/kernel/smp_spin_table.c
> +++ b/arch/arm64/kernel/smp_spin_table.c
> @@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
>  	unsigned long size = sizeof(secondary_holding_pen_release);
>  
>  	secondary_holding_pen_release = val;
> -	__flush_dcache_area(start, size);
> +	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
>  }
>  
>  
> @@ -90,8 +90,9 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
>  	 * the boot protocol.
>  	 */
>  	writeq_relaxed(pa_holding_pen, release_addr);
> -	__flush_dcache_area((__force void *)release_addr,
> -			    sizeof(*release_addr));
> +	__flush_dcache_area((__force unsigned long)release_addr,
> +			    (__force unsigned long)release_addr +
> +				    sizeof(*release_addr));
>  
>  	/*
>  	 * Send an event to wake up the secondary CPU.
> diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
> index 3bcfa3cac46f..36cef6915428 100644
> --- a/arch/arm64/kvm/hyp/nvhe/cache.S
> +++ b/arch/arm64/kvm/hyp/nvhe/cache.S
> @@ -8,7 +8,6 @@
>  #include <asm/alternative.h>
>  
>  SYM_FUNC_START_PI(__flush_dcache_area)
> -	add	x1, x0, x1
>  	dcache_by_line_op civac, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__flush_dcache_area)
> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 7488f53b0aa2..5dffe928f256 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -134,7 +134,8 @@ static void update_nvhe_init_params(void)
>  	for (i = 0; i < hyp_nr_cpus; i++) {
>  		params = per_cpu_ptr(&kvm_init_params, i);
>  		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
> -		__flush_dcache_area(params, sizeof(*params));
> +		__flush_dcache_area((unsigned long)params,
> +				    (unsigned long)params + sizeof(*params));
>  	}
>  }
>  
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index c37c1dc4feaf..10d2f04013d4 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -839,8 +839,11 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>  	stage2_put_pte(ptep, mmu, addr, level, mm_ops);
>  
>  	if (need_flush) {
> -		__flush_dcache_area(kvm_pte_follow(pte, mm_ops),
> -				    kvm_granule_size(level));
> +		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
> +
> +		__flush_dcache_area((unsigned long)pte_follow,
> +				    (unsigned long)pte_follow +
> +					    kvm_granule_size(level));
>  	}
>  
>  	if (childp)
> @@ -988,11 +991,15 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>  	struct kvm_pgtable *pgt = arg;
>  	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
>  	kvm_pte_t pte = *ptep;
> +	kvm_pte_t *pte_follow;
>  
>  	if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte))
>  		return 0;
>  
> -	__flush_dcache_area(kvm_pte_follow(pte, mm_ops), kvm_granule_size(level));
> +	pte_follow = kvm_pte_follow(pte, mm_ops);
> +	__flush_dcache_area((unsigned long)pte_follow,
> +			    (unsigned long)pte_follow +
> +				    kvm_granule_size(level));
>  	return 0;
>  }
>  
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 3b5461a32b85..35abc8d77c4e 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -106,16 +106,15 @@ alternative_else_nop_endif
>  SYM_FUNC_END(invalidate_icache_range)
>  
>  /*
> - *	__flush_dcache_area(kaddr, size)
> + *	__flush_dcache_area(start, end)
>   *
> - *	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> + *	Ensure that any D-cache lines for the interval [start, end)
>   *	are cleaned and invalidated to the PoC.
>   *
> - *	- kaddr   - kernel address
> - *	- size    - size in question
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
>   */
>  SYM_FUNC_START_PI(__flush_dcache_area)
> -	add	x1, x0, x1
>  	dcache_by_line_op civac, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__flush_dcache_area)
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 13/18] arm64: __clean_dcache_area_poc to take end parameter instead of size
  2021-05-20 12:44 ` [PATCH v3 13/18] arm64: __clean_dcache_area_poc " Fuad Tabba
@ 2021-05-20 16:16   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 16:16 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:44:01PM +0100, Fuad Tabba wrote:
> To be consistent with other functions with similar names and
> functionality in cacheflush.h, cache.S, and cachetlb.rst, change
> to specify the range in terms of start and end, as opposed to
> start and size.
> 
> Because the code is shared with __dma_clean_area, it changes the
> parameters for that as well. However, __dma_clean_area is local to
> cache.S, so no other users are affected.
> 
> No functional change intended.
> 
> Reported-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>

One minor comment below, with that addressed:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/cacheflush.h |  2 +-
>  arch/arm64/kernel/efi-entry.S       |  5 +++--
>  arch/arm64/mm/cache.S               | 16 +++++++---------
>  3 files changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 695f88864784..3255878d6f30 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -60,7 +60,7 @@ extern void __flush_icache_range(unsigned long start, unsigned long end);
>  extern void invalidate_icache_range(unsigned long start, unsigned long end);
>  extern void __flush_dcache_area(unsigned long start, unsigned long end);
>  extern void __inval_dcache_area(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_poc(void *addr, size_t len);
> +extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_pop(void *addr, size_t len);
>  extern void __clean_dcache_area_pou(void *addr, size_t len);
>  extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
> index 0073b24b5d25..72e6a580290a 100644
> --- a/arch/arm64/kernel/efi-entry.S
> +++ b/arch/arm64/kernel/efi-entry.S
> @@ -28,6 +28,7 @@ SYM_CODE_START(efi_enter_kernel)
>  	 * stale icache entries from before relocation.
>  	 */
>  	ldr	w1, =kernel_size
> +	add	x1, x0, x1
>  	bl	__clean_dcache_area_poc
>  	ic	ialluis
>  
> @@ -36,7 +37,7 @@ SYM_CODE_START(efi_enter_kernel)
>  	 * so that we can safely disable the MMU and caches.
>  	 */
>  	adr	x0, 0f
> -	ldr	w1, 3f
> +	adr	x1, 3f
>  	bl	__clean_dcache_area_poc
>  0:
>  	/* Turn off Dcache and MMU */
> @@ -65,4 +66,4 @@ SYM_CODE_START(efi_enter_kernel)
>  	mov	x3, xzr
>  	br	x19
>  SYM_CODE_END(efi_enter_kernel)
> -3:	.long	. - 0b
> +3:

Now that we're using this label for code rather than data, could we
please move this before the SYM_CODE_END()? It looks a bit out-of-place
sitting after the function now that it has no associated data, and it
shouldn't have a functional impact either way.
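
I.e. (sketch):

| 	mov	x3, xzr
| 	br	x19
| 3:
| SYM_CODE_END(efi_enter_kernel)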

Mark.

> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 35abc8d77c4e..9a9c44bb26d2 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -178,24 +178,23 @@ SYM_FUNC_END_PI(__inval_dcache_area)
>  SYM_FUNC_END(__dma_inv_area)
>  
>  /*
> - *	__clean_dcache_area_poc(kaddr, size)
> + *	__clean_dcache_area_poc(start, end)
>   *
> - * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> + * 	Ensure that any D-cache lines for the interval [start, end)
>   * 	are cleaned to the PoC.
>   *
> - *	- kaddr   - kernel address
> - *	- size    - size in question
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
>   */
>  SYM_FUNC_START_LOCAL(__dma_clean_area)
>  SYM_FUNC_START_PI(__clean_dcache_area_poc)
>  	/* FALLTHROUGH */
>  
>  /*
> - *	__dma_clean_area(start, size)
> + *	__dma_clean_area(start, end)
>   *	- start   - virtual start address of region
> - *	- size    - size in question
> + *	- end     - virtual end address of region
>   */
> -	add	x1, x0, x1
>  	dcache_by_line_op cvac, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__clean_dcache_area_poc)
> @@ -211,10 +210,10 @@ SYM_FUNC_END(__dma_clean_area)
>   *	- size    - size in question
>   */
>  SYM_FUNC_START_PI(__clean_dcache_area_pop)
> +	add	x1, x0, x1
>  	alternative_if_not ARM64_HAS_DCPOP
>  	b	__clean_dcache_area_poc
>  	alternative_else_nop_endif
> -	add	x1, x0, x1
>  	dcache_by_line_op cvap, sy, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END_PI(__clean_dcache_area_pop)
> @@ -243,7 +242,6 @@ SYM_FUNC_START_PI(__dma_map_area)
>  	add	x1, x0, x1
>  	cmp	w2, #DMA_FROM_DEVICE
>  	b.eq	__dma_inv_area
> -	sub	x1, x1, x0
>  	b	__dma_clean_area
>  SYM_FUNC_END_PI(__dma_map_area)
>  
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 14/18] arm64: __clean_dcache_area_pop to take end parameter instead of size
  2021-05-20 12:44 ` [PATCH v3 14/18] arm64: __clean_dcache_area_pop " Fuad Tabba
@ 2021-05-20 16:19   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 16:19 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:44:02PM +0100, Fuad Tabba wrote:
> To be consistent with other functions with similar names and
> functionality in cacheflush.h, cache.S, and cachetlb.rst, change
> __clean_dcache_area_pop to specify the range in terms of start
> and end, as opposed to start and size.
> 
> No functional change intended.
> 
> Reported-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/cacheflush.h | 2 +-
>  arch/arm64/lib/uaccess_flushcache.c | 4 ++--
>  arch/arm64/mm/cache.S               | 9 ++++-----
>  arch/arm64/mm/flush.c               | 2 +-
>  4 files changed, 8 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 3255878d6f30..fa5641868d65 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -61,7 +61,7 @@ extern void invalidate_icache_range(unsigned long start, unsigned long end);
>  extern void __flush_dcache_area(unsigned long start, unsigned long end);
>  extern void __inval_dcache_area(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_pop(void *addr, size_t len);
> +extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_pou(void *addr, size_t len);
>  extern long __flush_cache_user_range(unsigned long start, unsigned long end);
>  extern void sync_icache_aliases(void *kaddr, unsigned long len);
> diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
> index c83bb5a4aad2..62ea989effe8 100644
> --- a/arch/arm64/lib/uaccess_flushcache.c
> +++ b/arch/arm64/lib/uaccess_flushcache.c
> @@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
>  	 * barrier to order the cache maintenance against the memcpy.
>  	 */
>  	memcpy(dst, src, cnt);
> -	__clean_dcache_area_pop(dst, cnt);
> +	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
>  }
>  EXPORT_SYMBOL_GPL(memcpy_flushcache);
>  
> @@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
>  	rc = raw_copy_from_user(to, from, n);
>  
>  	/* See above */
> -	__clean_dcache_area_pop(to, n - rc);
> +	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
>  	return rc;
>  }
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 9a9c44bb26d2..b72fbae4b8e9 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -201,16 +201,15 @@ SYM_FUNC_END_PI(__clean_dcache_area_poc)
>  SYM_FUNC_END(__dma_clean_area)
>  
>  /*
> - *	__clean_dcache_area_pop(kaddr, size)
> + *	__clean_dcache_area_pop(start, end)
>   *
> - * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> + * 	Ensure that any D-cache lines for the interval [start, end)
>   * 	are cleaned to the PoP.
>   *
> - *	- kaddr   - kernel address
> - *	- size    - size in question
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
>   */
>  SYM_FUNC_START_PI(__clean_dcache_area_pop)
> -	add	x1, x0, x1
>  	alternative_if_not ARM64_HAS_DCPOP
>  	b	__clean_dcache_area_poc
>  	alternative_else_nop_endif
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 4e3505c2bea6..5aba7fe42d4b 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -82,7 +82,7 @@ void arch_wb_cache_pmem(void *addr, size_t size)
>  {
>  	/* Ensure order against any prior non-cacheable writes */
>  	dmb(osh);
> -	__clean_dcache_area_pop(addr, size);
> +	__clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
>  }
>  EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
>  
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 15/18] arm64: __clean_dcache_area_pou to take end parameter instead of size
  2021-05-20 12:44 ` [PATCH v3 15/18] arm64: __clean_dcache_area_pou " Fuad Tabba
@ 2021-05-20 16:24   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 16:24 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:44:03PM +0100, Fuad Tabba wrote:
> To be consistent with other functions with similar names and
> functionality in cacheflush.h, cache.S, and cachetlb.rst, change
> __clean_dcache_area_pou to specify the range in terms of start
> and end, as opposed to start and size.
> 
> No functional change intended.
> 
> Reported-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/cacheflush.h | 2 +-
>  arch/arm64/mm/cache.S               | 9 ++++-----
>  arch/arm64/mm/flush.c               | 2 +-
>  3 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index fa5641868d65..f86723047315 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -62,7 +62,7 @@ extern void __flush_dcache_area(unsigned long start, unsigned long end);
>  extern void __inval_dcache_area(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_pou(void *addr, size_t len);
> +extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
>  extern long __flush_cache_user_range(unsigned long start, unsigned long end);
>  extern void sync_icache_aliases(void *kaddr, unsigned long len);
>  
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index b72fbae4b8e9..b70a6699c02b 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -120,20 +120,19 @@ SYM_FUNC_START_PI(__flush_dcache_area)
>  SYM_FUNC_END_PI(__flush_dcache_area)
>  
>  /*
> - *	__clean_dcache_area_pou(kaddr, size)
> + *	__clean_dcache_area_pou(start, end)
>   *
> - * 	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> + * 	Ensure that any D-cache lines for the interval [start, end)
>   * 	are cleaned to the PoU.
>   *
> - *	- kaddr   - kernel address
> - *	- size    - size in question
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
>   */
>  SYM_FUNC_START(__clean_dcache_area_pou)
>  alternative_if ARM64_HAS_CACHE_IDC
>  	dsb	ishst
>  	ret
>  alternative_else_nop_endif
> -	add	x1, x0, x1
>  	dcache_by_line_op cvau, ish, x0, x1, x2, x3
>  	ret
>  SYM_FUNC_END(__clean_dcache_area_pou)
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 5aba7fe42d4b..a69d745fb1dc 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -19,7 +19,7 @@ void sync_icache_aliases(void *kaddr, unsigned long len)
>  	unsigned long addr = (unsigned long)kaddr;
>  
>  	if (icache_is_aliasing()) {
> -		__clean_dcache_area_pou(kaddr, len);
> +		__clean_dcache_area_pou((unsigned long)kaddr, (unsigned long)kaddr + len);
>  		__flush_icache_all();
>  	} else {
>  		/*
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 16/18] arm64: sync_icache_aliases to take end parameter instead of size
  2021-05-20 12:44 ` [PATCH v3 16/18] arm64: sync_icache_aliases " Fuad Tabba
@ 2021-05-20 16:34   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 16:34 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:44:04PM +0100, Fuad Tabba wrote:
> To be consistent with other functions with similar names and
> functionality in cacheflush.h, cache.S, and cachetlb.rst, change
> sync_icache_aliases to specify the range in terms of start and
> end, as opposed to start and size.
> 
> No functional change intended.
> 
> Reported-by: Will Deacon <will@kernel.org>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/cacheflush.h |  2 +-
>  arch/arm64/kernel/probes/uprobes.c  |  2 +-
>  arch/arm64/mm/flush.c               | 21 +++++++++++----------
>  3 files changed, 13 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index f86723047315..70b389a8dea5 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -64,7 +64,7 @@ extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
>  extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
>  extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> -extern void sync_icache_aliases(void *kaddr, unsigned long len);
> +extern void sync_icache_aliases(unsigned long start, unsigned long end);
>  
>  static inline void flush_icache_range(unsigned long start, unsigned long end)
>  {
> diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
> index 2c247634552b..9be668f3f034 100644
> --- a/arch/arm64/kernel/probes/uprobes.c
> +++ b/arch/arm64/kernel/probes/uprobes.c
> @@ -21,7 +21,7 @@ void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
>  	memcpy(dst, src, len);
>  
>  	/* flush caches (dcache/icache) */
> -	sync_icache_aliases(dst, len);
> +	sync_icache_aliases((unsigned long)dst, (unsigned long)dst + len);
>  
>  	kunmap_atomic(xol_page_kaddr);
>  }
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index a69d745fb1dc..143f625e7727 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -14,28 +14,26 @@
>  #include <asm/cache.h>
>  #include <asm/tlbflush.h>
>  
> -void sync_icache_aliases(void *kaddr, unsigned long len)
> +void sync_icache_aliases(unsigned long start, unsigned long end)
>  {
> -	unsigned long addr = (unsigned long)kaddr;
> -
>  	if (icache_is_aliasing()) {
> -		__clean_dcache_area_pou((unsigned long)kaddr, (unsigned long)kaddr + len);
> +		__clean_dcache_area_pou(start, end);
>  		__flush_icache_all();
>  	} else {
>  		/*
>  		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
>  		 * for user mappings.
>  		 */
> -		__flush_icache_range(addr, addr + len);
> +		__flush_icache_range(start, end);
>  	}
>  }
>  
>  static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
> -				unsigned long uaddr, void *kaddr,
> -				unsigned long len)
> +				unsigned long uaddr, unsigned long start,
> +				unsigned long end)

Can we please drop the `uaddr` argument here?

Generally, for functions which take both a `uaddr` and a `kaddr`, it's
best to pass a length argument, since that can be applied to either
base. Since we don't use the `uaddr` here, it's simpler to remove it.
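
Concretely, I'm thinking of something like the below (an untested
sketch; the body just mirrors the patch above with `uaddr` gone):

| static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
| 				unsigned long start, unsigned long end)
| {
| 	/* only executable mappings need the I-cache brought in sync */
| 	if (vma->vm_flags & VM_EXEC)
| 		sync_icache_aliases(start, end);
| }

with copy_to_user_page() then passing (unsigned long)dst and
(unsigned long)dst + len as it already does.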

With that gone:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

>  {
>  	if (vma->vm_flags & VM_EXEC)
> -		sync_icache_aliases(kaddr, len);
> +		sync_icache_aliases(start, end);
>  }
>  
>  /*
> @@ -48,7 +46,8 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
>  		       unsigned long len)
>  {
>  	memcpy(dst, src, len);
> -	flush_ptrace_access(vma, page, uaddr, dst, len);
> +	flush_ptrace_access(vma, page, uaddr, (unsigned long)dst,
> +			    (unsigned long)dst + len);
>  }
>  
>  void __sync_icache_dcache(pte_t pte)
> @@ -56,7 +55,9 @@ void __sync_icache_dcache(pte_t pte)
>  	struct page *page = pte_page(pte);
>  
>  	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
> -		sync_icache_aliases(page_address(page), page_size(page));
> +		sync_icache_aliases((unsigned long)page_address(page),
> +				    (unsigned long)page_address(page) +
> +					    page_size(page));
>  }
>  EXPORT_SYMBOL_GPL(__sync_icache_dcache);
>  
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 17/18] arm64: Fix cache maintenance function comments
  2021-05-20 12:44 ` [PATCH v3 17/18] arm64: Fix cache maintenance function comments Fuad Tabba
@ 2021-05-20 16:48   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 16:48 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:44:05PM +0100, Fuad Tabba wrote:
> Fix and expand the comments for the cache maintenance functions in
> cacheflush.h. Add comments to functions that weren't described
> before, and explain what the functions do using Arm Architecture
> Reference Manual terminology.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/cacheflush.h | 43 +++++++++++++++++++----------
>  1 file changed, 28 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 70b389a8dea5..4b91d3530013 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -30,31 +30,44 @@
>   *	the implementation assumes non-aliasing VIPT D-cache and (aliasing)
>   *	VIPT I-cache.
>   *
> - *	flush_icache_range(start, end)
> - *
> - *		Ensure coherency between the I-cache and the D-cache in the
> - *		region described by start, end.
> + *	All functions below apply to the region described by [start, end)
>   *		- start  - virtual start address
>   *		- end    - virtual end address

Could we please say:

| *	All functions below apply to the interval [start, end)
| *		- start  - virtual start address (inclusive)
| *		- end    - virtual end address (exclusive)

The "interval" wording makes it slightly clearer that we're using
interval notation for '[' and ')', and being explicit when describing
start/end makes that clear for those not familiar with interval
notation.
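
For instance (a made-up call, purely to illustrate the convention),
cleaning and invalidating a `len`-byte buffer at `buf` would be:

| 	/* affects [buf, buf + len); the byte at buf + len is untouched */
| 	__flush_dcache_area((unsigned long)buf, (unsigned long)buf + len);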

>   *
> - *	invalidate_icache_range(start, end)
> + *	__flush_icache_range(start, end)
>   *
> - *		Invalidate the I-cache in the region described by start, end.
> - *		- start  - virtual start address
> - *		- end    - virtual end address
> + *		Ensure coherency between the I-cache and the D-cache region to
> + *		the Point of Unification.
>   *
>   *	__flush_cache_user_range(start, end)
>   *
> - *		Ensure coherency between the I-cache and the D-cache in the
> - *		region described by start, end.
> - *		- start  - virtual start address
> - *		- end    - virtual end address
> + *		Ensure coherency between the I-cache and the D-cache region to
> + *		the Point of Unification.
> + *		Use only if the region might access user memory.
> + *
> + *	invalidate_icache_range(start, end)
> + *
> + *		Invalidate I-cache region to the Point of Unification.
>   *
>   *	__flush_dcache_area(start, end)
>   *
> - *		Ensure that the data held in page is written back.
> - *		- start  - virtual start address
> - *		- end    - virtual end address
> + *		Clean and invalidate D-cache region to the Point of Coherence.

For better or worse, the architecture calls this "Point of Coherency"
rather than "Point of Coherence", so we should fix this to match, along
with the two instances below.

With those nits addressed:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> + *
> + *	__inval_dcache_area(start, end)
> + *
> + *		Invalidate D-cache region to the Point of Coherence.
> + *
> + *	__clean_dcache_area_poc(start, end)
> + *
> + *		Clean D-cache region to the Point of Coherence.
> + *
> + *	__clean_dcache_area_pop(start, end)
> + *
> + *		Clean D-cache region to the Point of Persistence.
> + *
> + *	__clean_dcache_area_pou(start, end)
> + *
> + *		Clean D-cache region to the Point of Unification.
>   */
>  extern void __flush_icache_range(unsigned long start, unsigned long end);
>  extern void invalidate_icache_range(unsigned long start, unsigned long end);
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 18/18] arm64: Rename arm64-internal cache maintenance functions
  2021-05-20 12:44 ` [PATCH v3 18/18] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
@ 2021-05-20 17:01   ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-20 17:01 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:44:06PM +0100, Fuad Tabba wrote:
> Although naming across the codebase isn't that consistent, it
> tends to follow certain patterns. Moreover, the term "flush"
> isn't defined in the Arm Architecture Reference Manual, and might
> be interpreted to mean clean, invalidate, or both for a cache.
> 
> Rename the arm64-internal functions to make the naming internally
> consistent, as well as consistent with the Arm ARM, by specifying
> whether each function applies to the instruction cache, the data
> cache, or both, and whether the operation is a clean, an
> invalidate, or both.
> Also specify which point the operation applies to, i.e., to the
> point of unification (PoU), coherence (PoC), or persistence
> (PoP).

Minor nit: s/coherence/coherency/

> This commit applies the following sed transformation to all files
> under arch/arm64:
> 
> "s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\
> "s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
> "s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
> "s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
> "s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
> "s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\
> "s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\
> "s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\
> "s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
> "s/\b__flush_icache_all\b/icache_inval_all_pou/g;"
> 
> Note that __clean_dcache_area_poc is deliberately missing a word
> boundary check at the beginning in order to match the efistub
> symbols in image-vars.h.
> 
> Also note that, despite its name, __flush_icache_range operates
> on both instruction and data caches. The name change here
> reflects that.
> 
> No functional change intended.
> 
> Signed-off-by: Fuad Tabba <tabba@google.com>

This looks great! It's especially nice to see "flush" gone, and it's now
much more apparent whether a function affects the I-caches alone or also
performs D-cache maintenance, so it should be much easier to avoid
redundant maintenance.
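
As a hypothetical before/after, using names from the sed mapping above:

| 	/* old: is "flush" a clean, an invalidate, or both? to which point? */
| 	__flush_dcache_area(start, end);
|
| 	/* new: clean+invalidate the D-cache to the PoC */
| 	dcache_clean_inval_poc(start, end);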

Acked-by: Mark Rutland <mark.rutland@arm.com>

I've built this and given it some light boot+userspace testing, so for
the series:

Tested-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/arch_gicv3.h |  2 +-
>  arch/arm64/include/asm/cacheflush.h | 36 +++++++++---------
>  arch/arm64/include/asm/efi.h        |  2 +-
>  arch/arm64/include/asm/kvm_mmu.h    |  6 +--
>  arch/arm64/kernel/alternative.c     |  2 +-
>  arch/arm64/kernel/efi-entry.S       |  4 +-
>  arch/arm64/kernel/head.S            |  8 ++--
>  arch/arm64/kernel/hibernate-asm.S   |  4 +-
>  arch/arm64/kernel/hibernate.c       | 12 +++---
>  arch/arm64/kernel/idreg-override.c  |  2 +-
>  arch/arm64/kernel/image-vars.h      |  2 +-
>  arch/arm64/kernel/insn.c            |  2 +-
>  arch/arm64/kernel/kaslr.c           |  6 +--
>  arch/arm64/kernel/machine_kexec.c   | 10 ++---
>  arch/arm64/kernel/smp.c             |  4 +-
>  arch/arm64/kernel/smp_spin_table.c  |  4 +-
>  arch/arm64/kernel/sys_compat.c      |  2 +-
>  arch/arm64/kvm/arm.c                |  2 +-
>  arch/arm64/kvm/hyp/nvhe/cache.S     |  4 +-
>  arch/arm64/kvm/hyp/nvhe/setup.c     |  2 +-
>  arch/arm64/kvm/hyp/nvhe/tlb.c       |  2 +-
>  arch/arm64/kvm/hyp/pgtable.c        |  4 +-
>  arch/arm64/lib/uaccess_flushcache.c |  4 +-
>  arch/arm64/mm/cache.S               | 58 ++++++++++++++---------------
>  arch/arm64/mm/flush.c               | 12 +++---
>  25 files changed, 98 insertions(+), 98 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
> index ed1cc9d8e6df..4ad22c3135db 100644
> --- a/arch/arm64/include/asm/arch_gicv3.h
> +++ b/arch/arm64/include/asm/arch_gicv3.h
> @@ -125,7 +125,7 @@ static inline u32 gic_read_rpr(void)
>  #define gic_write_lpir(v, c)		writeq_relaxed(v, c)
>  
>  #define gic_flush_dcache_to_poc(a,l)	\
> -	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> +	dcache_clean_inval_poc((unsigned long)(a), (unsigned long)(a)+(l))
>  
>  #define gits_read_baser(c)		readq_relaxed(c)
>  #define gits_write_baser(v, c)		writeq_relaxed(v, c)
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 4b91d3530013..885bda37b805 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -34,54 +34,54 @@
>   *		- start  - virtual start address
>   *		- end    - virtual end address
>   *
> - *	__flush_icache_range(start, end)
> + *	caches_clean_inval_pou(start, end)
>   *
>   *		Ensure coherency between the I-cache and the D-cache region to
>   *		the Point of Unification.
>   *
> - *	__flush_cache_user_range(start, end)
> + *	caches_clean_inval_user_pou(start, end)
>   *
>   *		Ensure coherency between the I-cache and the D-cache region to
>   *		the Point of Unification.
>   *		Use only if the region might access user memory.
>   *
> - *	invalidate_icache_range(start, end)
> + *	icache_inval_pou(start, end)
>   *
>   *		Invalidate I-cache region to the Point of Unification.
>   *
> - *	__flush_dcache_area(start, end)
> + *	dcache_clean_inval_poc(start, end)
>   *
>   *		Clean and invalidate D-cache region to the Point of Coherence.
>   *
> - *	__inval_dcache_area(start, end)
> + *	dcache_inval_poc(start, end)
>   *
>   *		Invalidate D-cache region to the Point of Coherence.
>   *
> - *	__clean_dcache_area_poc(start, end)
> + *	dcache_clean_poc(start, end)
>   *
>   *		Clean D-cache region to the Point of Coherence.
>   *
> - *	__clean_dcache_area_pop(start, end)
> + *	dcache_clean_pop(start, end)
>   *
>   *		Clean D-cache region to the Point of Persistence.
>   *
> - *	__clean_dcache_area_pou(start, end)
> + *	dcache_clean_pou(start, end)
>   *
>   *		Clean D-cache region to the Point of Unification.
>   */
> -extern void __flush_icache_range(unsigned long start, unsigned long end);
> -extern void invalidate_icache_range(unsigned long start, unsigned long end);
> -extern void __flush_dcache_area(unsigned long start, unsigned long end);
> -extern void __inval_dcache_area(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_poc(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_pop(unsigned long start, unsigned long end);
> -extern void __clean_dcache_area_pou(unsigned long start, unsigned long end);
> -extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> +extern void caches_clean_inval_pou(unsigned long start, unsigned long end);
> +extern void icache_inval_pou(unsigned long start, unsigned long end);
> +extern void dcache_clean_inval_poc(unsigned long start, unsigned long end);
> +extern void dcache_inval_poc(unsigned long start, unsigned long end);
> +extern void dcache_clean_poc(unsigned long start, unsigned long end);
> +extern void dcache_clean_pop(unsigned long start, unsigned long end);
> +extern void dcache_clean_pou(unsigned long start, unsigned long end);
> +extern long caches_clean_inval_user_pou(unsigned long start, unsigned long end);
>  extern void sync_icache_aliases(unsigned long start, unsigned long end);
>  
>  static inline void flush_icache_range(unsigned long start, unsigned long end)
>  {
> -	__flush_icache_range(start, end);
> +	caches_clean_inval_pou(start, end);
>  
>  	/*
>  	 * IPI all online CPUs so that they undergo a context synchronization
> @@ -135,7 +135,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
>  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
>  extern void flush_dcache_page(struct page *);
>  
> -static __always_inline void __flush_icache_all(void)
> +static __always_inline void icache_inval_all_pou(void)
>  {
>  	if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
>  		return;
> diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
> index 0ae2397076fd..1bed37eb013a 100644
> --- a/arch/arm64/include/asm/efi.h
> +++ b/arch/arm64/include/asm/efi.h
> @@ -137,7 +137,7 @@ void efi_virtmap_unload(void);
>  
>  static inline void efi_capsule_flush_cache_range(void *addr, int size)
>  {
> -	__flush_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> +	dcache_clean_inval_poc((unsigned long)addr, (unsigned long)addr + size);
>  }
>  
>  #endif /* _ASM_EFI_H */
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 33293d5855af..f4cbfa9025a8 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -181,7 +181,7 @@ static inline void *__kvm_vector_slot2addr(void *base,
>  struct kvm;
>  
>  #define kvm_flush_dcache_to_poc(a,l)	\
> -	__flush_dcache_area((unsigned long)(a), (unsigned long)(a)+(l))
> +	dcache_clean_inval_poc((unsigned long)(a), (unsigned long)(a)+(l))
>  
>  static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
>  {
> @@ -209,12 +209,12 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn,
>  {
>  	if (icache_is_aliasing()) {
>  		/* any kind of VIPT cache */
> -		__flush_icache_all();
> +		icache_inval_all_pou();
>  	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
>  		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
>  		void *va = page_address(pfn_to_page(pfn));
>  
> -		invalidate_icache_range((unsigned long)va,
> +		icache_inval_pou((unsigned long)va,
>  					(unsigned long)va + size);
>  	}
>  }
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index c906d20c7b52..3fb79b76e9d9 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -181,7 +181,7 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
>  	 */
>  	if (!is_module) {
>  		dsb(ish);
> -		__flush_icache_all();
> +		icache_inval_all_pou();
>  		isb();
>  
>  		/* Ignore ARM64_CB bit from feature mask */
> diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
> index 72e6a580290a..6668bad21f86 100644
> --- a/arch/arm64/kernel/efi-entry.S
> +++ b/arch/arm64/kernel/efi-entry.S
> @@ -29,7 +29,7 @@ SYM_CODE_START(efi_enter_kernel)
>  	 */
>  	ldr	w1, =kernel_size
>  	add	x1, x0, x1
> -	bl	__clean_dcache_area_poc
> +	bl	dcache_clean_poc
>  	ic	ialluis
>  
>  	/*
> @@ -38,7 +38,7 @@ SYM_CODE_START(efi_enter_kernel)
>  	 */
>  	adr	x0, 0f
>  	adr	x1, 3f
> -	bl	__clean_dcache_area_poc
> +	bl	dcache_clean_poc
>  0:
>  	/* Turn off Dcache and MMU */
>  	mrs	x0, CurrentEL
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 8df0ac8d9123..6928cb67d3a0 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -118,7 +118,7 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
>  						// MMU off
>  
>  	add	x1, x0, #0x20			// 4 x 8 bytes
> -	b	__inval_dcache_area		// tail call
> +	b	dcache_inval_poc		// tail call
>  SYM_CODE_END(preserve_boot_args)
>  
>  /*
> @@ -268,7 +268,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
>  	 */
>  	adrp	x0, init_pg_dir
>  	adrp	x1, init_pg_end
> -	bl	__inval_dcache_area
> +	bl	dcache_inval_poc
>  
>  	/*
>  	 * Clear the init page tables.
> @@ -381,11 +381,11 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
>  
>  	adrp	x0, idmap_pg_dir
>  	adrp	x1, idmap_pg_end
> -	bl	__inval_dcache_area
> +	bl	dcache_inval_poc
>  
>  	adrp	x0, init_pg_dir
>  	adrp	x1, init_pg_end
> -	bl	__inval_dcache_area
> +	bl	dcache_inval_poc
>  
>  	ret	x28
>  SYM_FUNC_END(__create_page_tables)
> diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
> index ef2ab7caf815..81c0186a5e32 100644
> --- a/arch/arm64/kernel/hibernate-asm.S
> +++ b/arch/arm64/kernel/hibernate-asm.S
> @@ -45,7 +45,7 @@
>   * Because this code has to be copied to a 'safe' page, it can't call out to
>   * other functions by PC-relative address. Also remember that it may be
>   * mid-way through over-writing other functions. For this reason it contains
> - * code from __flush_icache_range() and uses the copy_page() macro.
> + * code from caches_clean_inval_pou() and uses the copy_page() macro.
>   *
>   * This 'safe' page is mapped via ttbr0, and executed from there. This function
>   * switches to a copy of the linear map in ttbr1, performs the restore, then
> @@ -87,7 +87,7 @@ SYM_CODE_START(swsusp_arch_suspend_exit)
>  	copy_page	x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
>  
>  	add	x1, x10, #PAGE_SIZE
> -	/* Clean the copied page to PoU - based on __flush_icache_range() */
> +	/* Clean the copied page to PoU - based on caches_clean_inval_pou() */
>  	raw_dcache_line_size x2, x3
>  	sub	x3, x2, #1
>  	bic	x4, x10, x3
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index b40ddce71507..46a0b4d6e251 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -210,7 +210,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
>  		return -ENOMEM;
>  
>  	memcpy(page, src_start, length);
> -	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
> +	caches_clean_inval_pou((unsigned long)page, (unsigned long)page + length);
>  	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
>  	if (rc)
>  		return rc;
> @@ -381,17 +381,17 @@ int swsusp_arch_suspend(void)
>  		ret = swsusp_save();
>  	} else {
>  		/* Clean kernel core startup/idle code to PoC*/
> -		__flush_dcache_area((unsigned long)__mmuoff_data_start,
> +		dcache_clean_inval_poc((unsigned long)__mmuoff_data_start,
>  				    (unsigned long)__mmuoff_data_end);
> -		__flush_dcache_area((unsigned long)__idmap_text_start,
> +		dcache_clean_inval_poc((unsigned long)__idmap_text_start,
>  				    (unsigned long)__idmap_text_end);
>  
>  		/* Clean kvm setup code to PoC? */
>  		if (el2_reset_needed()) {
> -			__flush_dcache_area(
> +			dcache_clean_inval_poc(
>  				(unsigned long)__hyp_idmap_text_start,
>  				(unsigned long)__hyp_idmap_text_end);
> -			__flush_dcache_area((unsigned long)__hyp_text_start,
> +			dcache_clean_inval_poc((unsigned long)__hyp_text_start,
>  					    (unsigned long)__hyp_text_end);
>  		}
>  
> @@ -477,7 +477,7 @@ int swsusp_arch_resume(void)
>  	 * The hibernate exit text contains a set of el2 vectors, that will
>  	 * be executed at el2 with the mmu off in order to reload hyp-stub.
>  	 */
> -	__flush_dcache_area((unsigned long)hibernate_exit,
> +	dcache_clean_inval_poc((unsigned long)hibernate_exit,
>  			    (unsigned long)hibernate_exit + exit_size);
>  
>  	/*
> diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
> index 3dd515baf526..53a381a7f65d 100644
> --- a/arch/arm64/kernel/idreg-override.c
> +++ b/arch/arm64/kernel/idreg-override.c
> @@ -237,7 +237,7 @@ asmlinkage void __init init_feature_override(void)
>  
>  	for (i = 0; i < ARRAY_SIZE(regs); i++) {
>  		if (regs[i]->override)
> -			__flush_dcache_area((unsigned long)regs[i]->override,
> +			dcache_clean_inval_poc((unsigned long)regs[i]->override,
>  					    (unsigned long)regs[i]->override +
>  					    sizeof(*regs[i]->override));
>  	}
> diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> index bcf3c2755370..c96a9a0043bf 100644
> --- a/arch/arm64/kernel/image-vars.h
> +++ b/arch/arm64/kernel/image-vars.h
> @@ -35,7 +35,7 @@ __efistub_strnlen		= __pi_strnlen;
>  __efistub_strcmp		= __pi_strcmp;
>  __efistub_strncmp		= __pi_strncmp;
>  __efistub_strrchr		= __pi_strrchr;
> -__efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc;
> +__efistub_dcache_clean_poc = __pi_dcache_clean_poc;
>  
>  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>  __efistub___memcpy		= __pi_memcpy;
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 6c0de2f60ea9..51cb8dc98d00 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -198,7 +198,7 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
>  
>  	ret = aarch64_insn_write(tp, insn);
>  	if (ret == 0)
> -		__flush_icache_range((uintptr_t)tp,
> +		caches_clean_inval_pou((uintptr_t)tp,
>  				     (uintptr_t)tp + AARCH64_INSN_SIZE);
>  
>  	return ret;
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 49cccd03cb37..cfa2cfde3019 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -72,7 +72,7 @@ u64 __init kaslr_early_init(void)
>  	 * we end up running with module randomization disabled.
>  	 */
>  	module_alloc_base = (u64)_etext - MODULES_VSIZE;
> -	__flush_dcache_area((unsigned long)&module_alloc_base,
> +	dcache_clean_inval_poc((unsigned long)&module_alloc_base,
>  			    (unsigned long)&module_alloc_base +
>  				    sizeof(module_alloc_base));
>  
> @@ -172,10 +172,10 @@ u64 __init kaslr_early_init(void)
>  	module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;
>  	module_alloc_base &= PAGE_MASK;
>  
> -	__flush_dcache_area((unsigned long)&module_alloc_base,
> +	dcache_clean_inval_poc((unsigned long)&module_alloc_base,
>  			    (unsigned long)&module_alloc_base +
>  				    sizeof(module_alloc_base));
> -	__flush_dcache_area((unsigned long)&memstart_offset_seed,
> +	dcache_clean_inval_poc((unsigned long)&memstart_offset_seed,
>  			    (unsigned long)&memstart_offset_seed +
>  				    sizeof(memstart_offset_seed));
>  
> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> index 3e79110c8f3a..03ceabe4d912 100644
> --- a/arch/arm64/kernel/machine_kexec.c
> +++ b/arch/arm64/kernel/machine_kexec.c
> @@ -72,10 +72,10 @@ int machine_kexec_post_load(struct kimage *kimage)
>  	 * For execution with the MMU off, reloc_code needs to be cleaned to the
>  	 * PoC and invalidated from the I-cache.
>  	 */
> -	__flush_dcache_area((unsigned long)reloc_code,
> +	dcache_clean_inval_poc((unsigned long)reloc_code,
>  			    (unsigned long)reloc_code +
>  				    arm64_relocate_new_kernel_size);
> -	invalidate_icache_range((uintptr_t)reloc_code,
> +	icache_inval_pou((uintptr_t)reloc_code,
>  				(uintptr_t)reloc_code +
>  					arm64_relocate_new_kernel_size);
>  
> @@ -111,7 +111,7 @@ static void kexec_list_flush(struct kimage *kimage)
>  		unsigned long addr;
>  
>  		/* flush the list entries. */
> -		__flush_dcache_area((unsigned long)entry,
> +		dcache_clean_inval_poc((unsigned long)entry,
>  				    (unsigned long)entry +
>  					    sizeof(kimage_entry_t));
>  
> @@ -128,7 +128,7 @@ static void kexec_list_flush(struct kimage *kimage)
>  			break;
>  		case IND_SOURCE:
>  			/* flush the source pages. */
> -			__flush_dcache_area(addr, addr + PAGE_SIZE);
> +			dcache_clean_inval_poc(addr, addr + PAGE_SIZE);
>  			break;
>  		case IND_DESTINATION:
>  			break;
> @@ -155,7 +155,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
>  			kimage->segment[i].memsz,
>  			kimage->segment[i].memsz /  PAGE_SIZE);
>  
> -		__flush_dcache_area(
> +		dcache_clean_inval_poc(
>  			(unsigned long)phys_to_virt(kimage->segment[i].mem),
>  			(unsigned long)phys_to_virt(kimage->segment[i].mem) +
>  				kimage->segment[i].memsz);
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 5fcdee331087..9b4c1118194d 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -122,7 +122,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
>  	secondary_data.task = idle;
>  	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
>  	update_cpu_boot_status(CPU_MMU_OFF);
> -	__flush_dcache_area((unsigned long)&secondary_data,
> +	dcache_clean_inval_poc((unsigned long)&secondary_data,
>  			    (unsigned long)&secondary_data +
>  				    sizeof(secondary_data));
>  
> @@ -145,7 +145,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
>  	pr_crit("CPU%u: failed to come online\n", cpu);
>  	secondary_data.task = NULL;
>  	secondary_data.stack = NULL;
> -	__flush_dcache_area((unsigned long)&secondary_data,
> +	dcache_clean_inval_poc((unsigned long)&secondary_data,
>  			    (unsigned long)&secondary_data +
>  				    sizeof(secondary_data));
>  	status = READ_ONCE(secondary_data.status);
> diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> index 58d804582a35..7e1624ecab3c 100644
> --- a/arch/arm64/kernel/smp_spin_table.c
> +++ b/arch/arm64/kernel/smp_spin_table.c
> @@ -36,7 +36,7 @@ static void write_pen_release(u64 val)
>  	unsigned long size = sizeof(secondary_holding_pen_release);
>  
>  	secondary_holding_pen_release = val;
> -	__flush_dcache_area((unsigned long)start, (unsigned long)start + size);
> +	dcache_clean_inval_poc((unsigned long)start, (unsigned long)start + size);
>  }
>  
>  
> @@ -90,7 +90,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
>  	 * the boot protocol.
>  	 */
>  	writeq_relaxed(pa_holding_pen, release_addr);
> -	__flush_dcache_area((__force unsigned long)release_addr,
> +	dcache_clean_inval_poc((__force unsigned long)release_addr,
>  			    (__force unsigned long)release_addr +
>  				    sizeof(*release_addr));
>  
> diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
> index 265fe3eb1069..db5159a3055f 100644
> --- a/arch/arm64/kernel/sys_compat.c
> +++ b/arch/arm64/kernel/sys_compat.c
> @@ -41,7 +41,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
>  			dsb(ish);
>  		}
>  
> -		ret = __flush_cache_user_range(start, start + chunk);
> +		ret = caches_clean_inval_user_pou(start, start + chunk);
>  		if (ret)
>  			return ret;
>  
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 1cb39c0803a4..c1953f65ca0e 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1064,7 +1064,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
>  		if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
>  			stage2_unmap_vm(vcpu->kvm);
>  		else
> -			__flush_icache_all();
> +			icache_inval_all_pou();
>  	}
>  
>  	vcpu_reset_hcr(vcpu);
> diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
> index 36cef6915428..958734f4d6b0 100644
> --- a/arch/arm64/kvm/hyp/nvhe/cache.S
> +++ b/arch/arm64/kvm/hyp/nvhe/cache.S
> @@ -7,7 +7,7 @@
>  #include <asm/assembler.h>
>  #include <asm/alternative.h>
>  
> -SYM_FUNC_START_PI(__flush_dcache_area)
> +SYM_FUNC_START_PI(dcache_clean_inval_poc)
>  	dcache_by_line_op civac, sy, x0, x1, x2, x3
>  	ret
> -SYM_FUNC_END_PI(__flush_dcache_area)
> +SYM_FUNC_END_PI(dcache_clean_inval_poc)
> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 5dffe928f256..8143ebd4fb72 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -134,7 +134,7 @@ static void update_nvhe_init_params(void)
>  	for (i = 0; i < hyp_nr_cpus; i++) {
>  		params = per_cpu_ptr(&kvm_init_params, i);
>  		params->pgd_pa = __hyp_pa(pkvm_pgtable.pgd);
> -		__flush_dcache_area((unsigned long)params,
> +		dcache_clean_inval_poc((unsigned long)params,
>  				    (unsigned long)params + sizeof(*params));
>  	}
>  }
> diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
> index 83dc3b271bc5..38ed0f6f2703 100644
> --- a/arch/arm64/kvm/hyp/nvhe/tlb.c
> +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
> @@ -104,7 +104,7 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
>  	 * you should be running with VHE enabled.
>  	 */
>  	if (icache_is_vpipt())
> -		__flush_icache_all();
> +		icache_inval_all_pou();
>  
>  	__tlb_switch_to_host(&cxt);
>  }
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 10d2f04013d4..e9ad7fb28ee3 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -841,7 +841,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>  	if (need_flush) {
>  		kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
>  
> -		__flush_dcache_area((unsigned long)pte_follow,
> +		dcache_clean_inval_poc((unsigned long)pte_follow,
>  				    (unsigned long)pte_follow +
>  					    kvm_granule_size(level));
>  	}
> @@ -997,7 +997,7 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>  		return 0;
>  
>  	pte_follow = kvm_pte_follow(pte, mm_ops);
> -	__flush_dcache_area((unsigned long)pte_follow,
> +	dcache_clean_inval_poc((unsigned long)pte_follow,
>  			    (unsigned long)pte_follow +
>  				    kvm_granule_size(level));
>  	return 0;
> diff --git a/arch/arm64/lib/uaccess_flushcache.c b/arch/arm64/lib/uaccess_flushcache.c
> index 62ea989effe8..baee22961bdb 100644
> --- a/arch/arm64/lib/uaccess_flushcache.c
> +++ b/arch/arm64/lib/uaccess_flushcache.c
> @@ -15,7 +15,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt)
>  	 * barrier to order the cache maintenance against the memcpy.
>  	 */
>  	memcpy(dst, src, cnt);
> -	__clean_dcache_area_pop((unsigned long)dst, (unsigned long)dst + cnt);
> +	dcache_clean_pop((unsigned long)dst, (unsigned long)dst + cnt);
>  }
>  EXPORT_SYMBOL_GPL(memcpy_flushcache);
>  
> @@ -33,6 +33,6 @@ unsigned long __copy_user_flushcache(void *to, const void __user *from,
>  	rc = raw_copy_from_user(to, from, n);
>  
>  	/* See above */
> -	__clean_dcache_area_pop((unsigned long)to, (unsigned long)to + n - rc);
> +	dcache_clean_pop((unsigned long)to, (unsigned long)to + n - rc);
>  	return rc;
>  }
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index b70a6699c02b..e799a4999299 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -15,7 +15,7 @@
>  #include <asm/asm-uaccess.h>
>  
>  /*
> - *	__flush_cache_range(start,end) [fixup]
> + *	caches_clean_inval_pou_macro(start,end) [fixup]
>   *
>   *	Ensure that the I and D caches are coherent within specified region.
>   *	This is typically used when code has been written to a memory region,
> @@ -25,7 +25,7 @@
>   *	- end     - virtual end address of region
>   *	- fixup   - optional label to branch to on user fault
>   */
> -.macro	__flush_cache_range, fixup
> +.macro	caches_clean_inval_pou_macro, fixup
>  alternative_if ARM64_HAS_CACHE_IDC
>  	dsb	ishst
>  	b	.Ldc_skip_\@
> @@ -50,7 +50,7 @@ alternative_else_nop_endif
>  .endm
>  
>  /*
> - *	__flush_icache_range(start,end)
> + *	caches_clean_inval_pou(start,end)
>   *
>   *	Ensure that the I and D caches are coherent within specified region.
>   *	This is typically used when code has been written to a memory region,
> @@ -59,13 +59,13 @@ alternative_else_nop_endif
>   *	- start   - virtual start address of region
>   *	- end     - virtual end address of region
>   */
> -SYM_FUNC_START(__flush_icache_range)
> -	__flush_cache_range
> +SYM_FUNC_START(caches_clean_inval_pou)
> +	caches_clean_inval_pou_macro
>  	ret
> -SYM_FUNC_END(__flush_icache_range)
> +SYM_FUNC_END(caches_clean_inval_pou)
>  
>  /*
> - *	__flush_cache_user_range(start,end)
> + *	caches_clean_inval_user_pou(start,end)
>   *
>   *	Ensure that the I and D caches are coherent within specified region.
>   *	This is typically used when code has been written to a memory region,
> @@ -74,10 +74,10 @@ SYM_FUNC_END(__flush_icache_range)
>   *	- start   - virtual start address of region
>   *	- end     - virtual end address of region
>   */
> -SYM_FUNC_START(__flush_cache_user_range)
> +SYM_FUNC_START(caches_clean_inval_user_pou)
>  	uaccess_ttbr0_enable x2, x3, x4
>  
> -	__flush_cache_range 2f
> +	caches_clean_inval_pou_macro 2f
>  	mov	x0, xzr
>  1:
>  	uaccess_ttbr0_disable x1, x2
> @@ -85,17 +85,17 @@ SYM_FUNC_START(__flush_cache_user_range)
>  2:
>  	mov	x0, #-EFAULT
>  	b	1b
> -SYM_FUNC_END(__flush_cache_user_range)
> +SYM_FUNC_END(caches_clean_inval_user_pou)
>  
>  /*
> - *	invalidate_icache_range(start,end)
> + *	icache_inval_pou(start,end)
>   *
>   *	Ensure that the I cache is invalid within specified region.
>   *
>   *	- start   - virtual start address of region
>   *	- end     - virtual end address of region
>   */
> -SYM_FUNC_START(invalidate_icache_range)
> +SYM_FUNC_START(icache_inval_pou)
>  alternative_if ARM64_HAS_CACHE_DIC
>  	isb
>  	ret
> @@ -103,10 +103,10 @@ alternative_else_nop_endif
>  
>  	invalidate_icache_by_line x0, x1, x2, x3
>  	ret
> -SYM_FUNC_END(invalidate_icache_range)
> +SYM_FUNC_END(icache_inval_pou)
>  
>  /*
> - *	__flush_dcache_area(start, end)
> + *	dcache_clean_inval_poc(start, end)
>   *
>   *	Ensure that any D-cache lines for the interval [start, end)
>   *	are cleaned and invalidated to the PoC.
> @@ -114,13 +114,13 @@ SYM_FUNC_END(invalidate_icache_range)
>   *	- start   - virtual start address of region
>   *	- end     - virtual end address of region
>   */
> -SYM_FUNC_START_PI(__flush_dcache_area)
> +SYM_FUNC_START_PI(dcache_clean_inval_poc)
>  	dcache_by_line_op civac, sy, x0, x1, x2, x3
>  	ret
> -SYM_FUNC_END_PI(__flush_dcache_area)
> +SYM_FUNC_END_PI(dcache_clean_inval_poc)
>  
>  /*
> - *	__clean_dcache_area_pou(start, end)
> + *	dcache_clean_pou(start, end)
>   *
>   * 	Ensure that any D-cache lines for the interval [start, end)
>   * 	are cleaned to the PoU.
> @@ -128,17 +128,17 @@ SYM_FUNC_END_PI(__flush_dcache_area)
>   *	- start   - virtual start address of region
>   *	- end     - virtual end address of region
>   */
> -SYM_FUNC_START(__clean_dcache_area_pou)
> +SYM_FUNC_START(dcache_clean_pou)
>  alternative_if ARM64_HAS_CACHE_IDC
>  	dsb	ishst
>  	ret
>  alternative_else_nop_endif
>  	dcache_by_line_op cvau, ish, x0, x1, x2, x3
>  	ret
> -SYM_FUNC_END(__clean_dcache_area_pou)
> +SYM_FUNC_END(dcache_clean_pou)
>  
>  /*
> - *	__inval_dcache_area(start, end)
> + *	dcache_inval_poc(start, end)
>   *
>   * 	Ensure that any D-cache lines for the interval [start, end)
>   * 	are invalidated. Any partial lines at the ends of the interval are
> @@ -148,7 +148,7 @@ SYM_FUNC_END(__clean_dcache_area_pou)
>   *	- end     - kernel end address of region
>   */
>  SYM_FUNC_START_LOCAL(__dma_inv_area)
> -SYM_FUNC_START_PI(__inval_dcache_area)
> +SYM_FUNC_START_PI(dcache_inval_poc)
>  	/* FALLTHROUGH */
>  
>  /*
> @@ -173,11 +173,11 @@ SYM_FUNC_START_PI(__inval_dcache_area)
>  	b.lo	2b
>  	dsb	sy
>  	ret
> -SYM_FUNC_END_PI(__inval_dcache_area)
> +SYM_FUNC_END_PI(dcache_inval_poc)
>  SYM_FUNC_END(__dma_inv_area)
>  
>  /*
> - *	__clean_dcache_area_poc(start, end)
> + *	dcache_clean_poc(start, end)
>   *
>   * 	Ensure that any D-cache lines for the interval [start, end)
>   * 	are cleaned to the PoC.
> @@ -186,7 +186,7 @@ SYM_FUNC_END(__dma_inv_area)
>   *	- end     - virtual end address of region
>   */
>  SYM_FUNC_START_LOCAL(__dma_clean_area)
> -SYM_FUNC_START_PI(__clean_dcache_area_poc)
> +SYM_FUNC_START_PI(dcache_clean_poc)
>  	/* FALLTHROUGH */
>  
>  /*
> @@ -196,11 +196,11 @@ SYM_FUNC_START_PI(__clean_dcache_area_poc)
>   */
>  	dcache_by_line_op cvac, sy, x0, x1, x2, x3
>  	ret
> -SYM_FUNC_END_PI(__clean_dcache_area_poc)
> +SYM_FUNC_END_PI(dcache_clean_poc)
>  SYM_FUNC_END(__dma_clean_area)
>  
>  /*
> - *	__clean_dcache_area_pop(start, end)
> + *	dcache_clean_pop(start, end)
>   *
>   * 	Ensure that any D-cache lines for the interval [start, end)
>   * 	are cleaned to the PoP.
> @@ -208,13 +208,13 @@ SYM_FUNC_END(__dma_clean_area)
>   *	- start   - virtual start address of region
>   *	- end     - virtual end address of region
>   */
> -SYM_FUNC_START_PI(__clean_dcache_area_pop)
> +SYM_FUNC_START_PI(dcache_clean_pop)
>  	alternative_if_not ARM64_HAS_DCPOP
> -	b	__clean_dcache_area_poc
> +	b	dcache_clean_poc
>  	alternative_else_nop_endif
>  	dcache_by_line_op cvap, sy, x0, x1, x2, x3
>  	ret
> -SYM_FUNC_END_PI(__clean_dcache_area_pop)
> +SYM_FUNC_END_PI(dcache_clean_pop)
>  
>  /*
>   *	__dma_flush_area(start, size)
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 143f625e7727..5fea9a3f6663 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -17,14 +17,14 @@
>  void sync_icache_aliases(unsigned long start, unsigned long end)
>  {
>  	if (icache_is_aliasing()) {
> -		__clean_dcache_area_pou(start, end);
> -		__flush_icache_all();
> +		dcache_clean_pou(start, end);
> +		icache_inval_all_pou();
>  	} else {
>  		/*
>  		 * Don't issue kick_all_cpus_sync() after I-cache invalidation
>  		 * for user mappings.
>  		 */
> -		__flush_icache_range(start, end);
> +		caches_clean_inval_pou(start, end);
>  	}
>  }
>  
> @@ -76,20 +76,20 @@ EXPORT_SYMBOL(flush_dcache_page);
>  /*
>   * Additional functions defined in assembly.
>   */
> -EXPORT_SYMBOL(__flush_icache_range);
> +EXPORT_SYMBOL(caches_clean_inval_pou);
>  
>  #ifdef CONFIG_ARCH_HAS_PMEM_API
>  void arch_wb_cache_pmem(void *addr, size_t size)
>  {
>  	/* Ensure order against any prior non-cacheable writes */
>  	dmb(osh);
> -	__clean_dcache_area_pop((unsigned long)addr, (unsigned long)addr + size);
> +	dcache_clean_pop((unsigned long)addr, (unsigned long)addr + size);
>  }
>  EXPORT_SYMBOL_GPL(arch_wb_cache_pmem);
>  
>  void arch_invalidate_pmem(void *addr, size_t size)
>  {
> -	__inval_dcache_area((unsigned long)addr, (unsigned long)addr + size);
> +	dcache_inval_poc((unsigned long)addr, (unsigned long)addr + size);
>  }
>  EXPORT_SYMBOL_GPL(arch_invalidate_pmem);
>  #endif
> -- 
> 2.31.1.751.gd2f1c929bd-goog
> 


* Re: [PATCH v3 04/18] arm64: assembler: user_alt label optional
  2021-05-20 12:57   ` Mark Rutland
@ 2021-05-21 11:46     ` Fuad Tabba
  2021-05-21 13:05       ` Mark Rutland
  0 siblings, 1 reply; 42+ messages in thread
From: Fuad Tabba @ 2021-05-21 11:46 UTC (permalink / raw)
  To: Mark Rutland
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, Ard Biesheuvel,
	James Morse, Alexandru Elisei, Suzuki K Poulose, Robin Murphy

Hi Mark,

On Thu, May 20, 2021 at 1:57 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Thu, May 20, 2021 at 01:43:52PM +0100, Fuad Tabba wrote:
> > Make the label for the extable entry in user_alt optional, only
> > generating an extable entry if provided.
> >
> > This is needed later in the series, to avoid instruction
> > duplication in the assembly code.
> >
> > While at it, clean up the label so that it is globally unique,
> > using \@ as in other macros.
>
> Nice; thanks for cleaning up the labels too!
>
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/alternative-macros.h | 9 ++++++---
> >  arch/arm64/mm/cache.S                       | 2 +-
> >  2 files changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h
> > index 8a078fc662ac..01ef954c9b2d 100644
> > --- a/arch/arm64/include/asm/alternative-macros.h
> > +++ b/arch/arm64/include/asm/alternative-macros.h
> > @@ -197,9 +197,12 @@ alternative_endif
> >  #define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)        \
> >       alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
> >
> > -.macro user_alt, label, oldinstr, newinstr, cond
> > -9999:        alternative_insn "\oldinstr", "\newinstr", \cond
> > -     _asm_extable 9999b, \label
> > +.macro user_alt, oldinstr, newinstr, cond, label
> > +.Lextable_\@:
> > +     alternative_insn "\oldinstr", "\newinstr", \cond
> > +     .ifnc \label,
> > +     _asm_extable .Lextable_\@, \label
> > +     .endif
> >  .endm
>
> We can use _cond_extable here to simplify this to:
>
> | .macro user_alt, oldinstr, newinstr, cond, label
> | .Lextable_\@:
> |       alternative_insn "\oldinstr", "\newinstr", \cond
> |       _cond_extable .Lextable_\@, \label
> | .endm
>
> However, since we only use user_alt in __flush_icache_range /
> __flush_cache_user_range, I reckon it would be simpler overall to have
> those use alternative_insn and _cond_extable directly. Then that would
> align with the style of the *_by_line macros, and we could delete
> user_alt.

Thanks for this, and for the comments on the other patches in this
series. I'll rebase this series on rc3 when it comes out, apply your
suggestions, and send it out.

Cheers,
/fuad


>
> Either way, this looks good, so:
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> >
> >  #endif  /*  __ASSEMBLY__  */
> > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > index 2d881f34dd9d..5ff8dfa86975 100644
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -47,7 +47,7 @@ alternative_else_nop_endif
> >       sub     x3, x2, #1
> >       bic     x4, x0, x3
> >  1:
> > -user_alt 9f, "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE
> > +user_alt "dc cvau, x4",  "dc civac, x4",  ARM64_WORKAROUND_CLEAN_CACHE, 9f
> >       add     x4, x4, x2
> >       cmp     x4, x1
> >       b.lo    1b
> > --
> > 2.31.1.751.gd2f1c929bd-goog
> >


* Re: [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range
  2021-05-20 15:37     ` Mark Rutland
@ 2021-05-21 12:18       ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-21 12:18 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, catalin.marinas, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 04:37:35PM +0100, Mark Rutland wrote:
> On Thu, May 20, 2021 at 03:02:16PM +0100, Mark Rutland wrote:
> Having thought about this a bit more, it's simple enough to do that now:
> 
> | alternative_if ARM64_HAS_CACHE_IDC
> | 	dsb	ishst
> | 	b	.Ldc_skip_\@
> | alternative_else_nop_endif
> | 	mov	x2, x0
> | 	mov	x3, x1
> | 	dcache_by_line_op cvau, ishst, x2, x3, x4, x5, \fixup
> | .Ldc_skip_\@:

Looking at this again, that "ishst" should be "ish", but otherwise this
stands.

Mark.


* Re: [PATCH v3 04/18] arm64: assembler: user_alt label optional
  2021-05-21 11:46     ` Fuad Tabba
@ 2021-05-21 13:05       ` Mark Rutland
  0 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2021-05-21 13:05 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE),
	Will Deacon, Catalin Marinas, Marc Zyngier, Ard Biesheuvel,
	James Morse, Alexandru Elisei, Suzuki K Poulose, Robin Murphy

On Fri, May 21, 2021 at 12:46:14PM +0100, Fuad Tabba wrote:
> I'll rebase this series on rc3 when it comes out, apply your
> suggestions, and send it out.

Great!

When that's out I'd be happy to throw my arm64 Syzkaller instance at it.

Mark.


* Re: [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range
  2021-05-20 12:43 ` [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
  2021-05-20 14:02   ` Mark Rutland
@ 2021-05-25 11:18   ` Catalin Marinas
  1 sibling, 0 replies; 42+ messages in thread
From: Catalin Marinas @ 2021-05-25 11:18 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:53PM +0100, Fuad Tabba wrote:
> __flush_icache_range works on the kernel linear map, and doesn't
> need uaccess. The existing uaccess handling is a side effect of
> the current implementation, which falls through into
> __flush_cache_user_range.
> 
> Instead of falling through to share the code, use a common macro
> for the two, where the caller specifies an optional fixup label
> if user access is needed. If provided, the label is used to
> generate an extable entry.
> 
> No functional change intended. There may be a minor performance
> gain from the reduced number of instructions.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
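
The shared-macro shape being described might look something like this
(a sketch under assumptions: the macro name, register assignments, and
the post-series start/end argument forms are illustrative, and the
IDC/DIC fast paths are omitted for brevity):

| 	.macro	flush_cache_range_sketch, fixup
| 	mov	x2, x0			// keep x0/x1 intact for the I-side
| 	mov	x3, x1
| 	dcache_by_line_op cvau, ish, x2, x3, x4, x5, \fixup
| 	invalidate_icache_by_line x0, x1, x2, x3, \fixup
| 	.endm
| 
| SYM_FUNC_START(__flush_icache_range)
| 	// kernel linear map: no \fixup, so no extable entries are emitted
| 	flush_cache_range_sketch
| 	ret
| SYM_FUNC_END(__flush_icache_range)
| 
| SYM_FUNC_START(__flush_cache_user_range)
| 	uaccess_ttbr0_enable x2, x3, x4
| 	flush_cache_range_sketch 2f	// user access: faults land at 2f
| 	mov	x0, xzr
| 1:
| 	uaccess_ttbr0_disable x1, x2
| 	ret
| 2:
| 	mov	x0, #-EFAULT
| 	b	1b
| SYM_FUNC_END(__flush_cache_user_range)

With this shape, __flush_icache_range drops the uaccess toggling and
fixup handling entirely, which is where the reduction in instruction
count comes from.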

Just a few acks on the patches that have my Reported-by, but I'm happy
with the series overall. Nice clean-up.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH v3 06/18] arm64: Do not enable uaccess for invalidate_icache_range
  2021-05-20 12:43 ` [PATCH v3 06/18] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
  2021-05-20 14:13   ` Mark Rutland
@ 2021-05-25 11:18   ` Catalin Marinas
  1 sibling, 0 replies; 42+ messages in thread
From: Catalin Marinas @ 2021-05-25 11:18 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:54PM +0100, Fuad Tabba wrote:
> invalidate_icache_range() works on the kernel linear map, and
> doesn't need uaccess. Remove the code that toggles uaccess via
> uaccess_ttbr0_enable/uaccess_ttbr0_disable, as well as the code
> that emits an entry into the exception table (via the macro
> invalidate_icache_by_line).
> 
> Change the return type of invalidate_icache_range() from int
> (which used to indicate a fault) to void, since it no longer
> enables uaccess and won't fault. Note that the return value was
> never checked by any of the callers.
> 
> No functional change intended. There may be a minor performance
> gain from the reduced number of instructions.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
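
In header terms, the interface change described above amounts to
something like the following (the exact declaration site and parameter
names are assumed, not quoted from the patch):

| -extern int invalidate_icache_range(unsigned long start, unsigned long end);
| +extern void invalidate_icache_range(unsigned long start, unsigned long end);

Since no caller ever checked the return value, the callers need no
corresponding change.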

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH v3 07/18] arm64: Downgrade flush_icache_range to invalidate
  2021-05-20 12:43 ` [PATCH v3 07/18] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
  2021-05-20 14:15   ` Mark Rutland
@ 2021-05-25 11:18   ` Catalin Marinas
  1 sibling, 0 replies; 42+ messages in thread
From: Catalin Marinas @ 2021-05-25 11:18 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: linux-arm-kernel, will, mark.rutland, maz, ardb, james.morse,
	alexandru.elisei, suzuki.poulose, robin.murphy

On Thu, May 20, 2021 at 01:43:55PM +0100, Fuad Tabba wrote:
> Since __flush_dcache_area() is called immediately beforehand,
> invalidate_icache_range() is sufficient in this case.
> 
> Rewrite the comment to better explain the rationale behind the
> cache maintenance operations used here.
> 
> No functional change intended. There may be a minor performance
> gain, since only the icache is invalidated rather than cleaning
> and invalidating both caches.
> 
> Reported-by: Catalin Marinas <catalin.marinas@arm.com>
> Reported-by: Will Deacon <will@kernel.org>
> Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
> Signed-off-by: Fuad Tabba <tabba@google.com>
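
For illustration, the call-site pattern the commit text describes is
roughly the following (identifiers assumed from the kexec code under
discussion, not quoted from the patch):

| 	/* D side: clean+invalidate, making the copied code visible to
| 	 * instruction fetch. */
| 	__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
| 	/* I side: stale lines then only need invalidating, not a full
| 	 * clean+invalidate flush. */
| 	invalidate_icache_range((uintptr_t)reloc_code,
| 				(uintptr_t)reloc_code +
| 				arm64_relocate_new_kernel_size);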

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


end of thread, other threads:[~2021-05-25 11:20 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-20 12:43 [PATCH v3 00/18] Tidy up cache.S Fuad Tabba
2021-05-20 12:43 ` [PATCH v3 01/18] arm64: assembler: replace `kaddr` with `addr` Fuad Tabba
2021-05-20 12:43 ` [PATCH v3 02/18] arm64: assembler: add conditional cache fixups Fuad Tabba
2021-05-20 12:43 ` [PATCH v3 03/18] arm64: Apply errata to swsusp_arch_suspend_exit Fuad Tabba
2021-05-20 12:46   ` Mark Rutland
2021-05-20 12:43 ` [PATCH v3 04/18] arm64: assembler: user_alt label optional Fuad Tabba
2021-05-20 12:57   ` Mark Rutland
2021-05-21 11:46     ` Fuad Tabba
2021-05-21 13:05       ` Mark Rutland
2021-05-20 12:43 ` [PATCH v3 05/18] arm64: Do not enable uaccess for flush_icache_range Fuad Tabba
2021-05-20 14:02   ` Mark Rutland
2021-05-20 15:37     ` Mark Rutland
2021-05-21 12:18       ` Mark Rutland
2021-05-25 11:18   ` Catalin Marinas
2021-05-20 12:43 ` [PATCH v3 06/18] arm64: Do not enable uaccess for invalidate_icache_range Fuad Tabba
2021-05-20 14:13   ` Mark Rutland
2021-05-25 11:18   ` Catalin Marinas
2021-05-20 12:43 ` [PATCH v3 07/18] arm64: Downgrade flush_icache_range to invalidate Fuad Tabba
2021-05-20 14:15   ` Mark Rutland
2021-05-25 11:18   ` Catalin Marinas
2021-05-20 12:43 ` [PATCH v3 08/18] arm64: Move documentation of dcache_by_line_op Fuad Tabba
2021-05-20 14:17   ` Mark Rutland
2021-05-20 12:43 ` [PATCH v3 09/18] arm64: Fix comments to refer to correct function __flush_icache_range Fuad Tabba
2021-05-20 14:18   ` Mark Rutland
2021-05-20 12:43 ` [PATCH v3 10/18] arm64: __inval_dcache_area to take end parameter instead of size Fuad Tabba
2021-05-20 15:46   ` Mark Rutland
2021-05-20 12:43 ` [PATCH v3 11/18] arm64: dcache_by_line_op " Fuad Tabba
2021-05-20 15:48   ` Mark Rutland
2021-05-20 12:44 ` [PATCH v3 12/18] arm64: __flush_dcache_area " Fuad Tabba
2021-05-20 16:06   ` Mark Rutland
2021-05-20 12:44 ` [PATCH v3 13/18] arm64: __clean_dcache_area_poc " Fuad Tabba
2021-05-20 16:16   ` Mark Rutland
2021-05-20 12:44 ` [PATCH v3 14/18] arm64: __clean_dcache_area_pop " Fuad Tabba
2021-05-20 16:19   ` Mark Rutland
2021-05-20 12:44 ` [PATCH v3 15/18] arm64: __clean_dcache_area_pou " Fuad Tabba
2021-05-20 16:24   ` Mark Rutland
2021-05-20 12:44 ` [PATCH v3 16/18] arm64: sync_icache_aliases " Fuad Tabba
2021-05-20 16:34   ` Mark Rutland
2021-05-20 12:44 ` [PATCH v3 17/18] arm64: Fix cache maintenance function comments Fuad Tabba
2021-05-20 16:48   ` Mark Rutland
2021-05-20 12:44 ` [PATCH v3 18/18] arm64: Rename arm64-internal cache maintenance functions Fuad Tabba
2021-05-20 17:01   ` Mark Rutland
