* [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores
@ 2020-02-26 16:57 Ard Biesheuvel
2020-02-26 16:57 ` [PATCH v4 1/5] efi/arm: Work around missing cache maintenance in decompressor handover Ard Biesheuvel
` (6 more replies)
0 siblings, 7 replies; 11+ messages in thread
From: Ard Biesheuvel @ 2020-02-26 16:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: linux-efi, Ard Biesheuvel, Russell King, Marc Zyngier,
Nicolas Pitre, Catalin Marinas, Tony Lindgren, Linus Walleij
While making changes to the EFI stub startup code, I noticed that we are
still doing set/way maintenance on the caches when booting on v7 cores.
This works today on VMs by virtue of the fact that KVM traps set/way ops
and cleans the whole address space by VA on behalf of the guest, and on
most v7 hardware, the set/way ops are in fact sufficient when only one
core is running, as there usually is no system cache. But on systems
like SynQuacer, for which 32-bit firmware is available, the current cache
maintenance only pushes the data out to the L3 system cache, where it
is not visible to the CPU once it turns the MMU and caches off.
So instead, switch to the by-VA cache maintenance that the architecture
requires for v7 and later (and ARM1176, as a side effect).
Changes since v3:
- ensure that the region that is cleaned after self-relocation of the zImage
covers the appended DTB, if present
Apologies to Linus, but due to this change, I decided not to take your
Tested-by into account, and I would appreciate it if you could retest
this version of the series. Thanks.
Changes since v2:
- add a patch to factor out the code sequence that obtains the inflated image
size by doing an unaligned LE32 load from the end of the compressed data
- use new macro to load the inflated image size instead of doing a potentially
unaligned load
- omit the stack for getting the base and size of the self-relocated zImage
Changes since v1:
- include the EFI patch that was sent out separately before (#1)
- split the preparatory work to pass the region to clean in r0/r1 into an
EFI-specific patch and one for the decompressor - this way, the first two
patches can go on a stable branch that is shared between the ARM tree and
the EFI tree
- document the meaning of the values in r0/r1 upon entry to cache_clean_flush
- take care to treat the region end address as exclusive
- switch to clean+invalidate to align with the other implementations
- drop some code that manages the stack pointer value before calling
cache_clean_flush(), which is no longer necessary
- take care to clean the entire region that is covered by the relocated zImage
if it needs to relocate itself before decompressing
https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=arm32-efi-cache-ops
[ Several people asked me offline why on earth I am running SynQuacer on 32 bit:
the answer is that this is simply to prove that it is currently broken, and
this implies that for 32-bit VMs running under KVM, we are relying on the
special, non-architectural cache management done by the hypervisor on behalf
of the guest to be able to run this code. ]
Cc: Russell King <linux@armlinux.org.uk>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Nicolas Pitre <nico@fluxnic.net>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Ard Biesheuvel (5):
efi/arm: Work around missing cache maintenance in decompressor
handover
efi/arm: Pass start and end addresses to cache_clean_flush()
ARM: decompressor: factor out routine to obtain the inflated image
size
ARM: decompressor: prepare cache_clean_flush for doing by-VA
maintenance
ARM: decompressor: switch to by-VA cache maintenance for v7 cores
arch/arm/boot/compressed/head.S | 162 +++++++++++---------
1 file changed, 86 insertions(+), 76 deletions(-)
--
2.17.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v4 1/5] efi/arm: Work around missing cache maintenance in decompressor handover
2020-02-26 16:57 [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores Ard Biesheuvel
@ 2020-02-26 16:57 ` Ard Biesheuvel
2020-02-26 16:57 ` [PATCH v4 2/5] efi/arm: Pass start and end addresses to cache_clean_flush() Ard Biesheuvel
` (5 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Ard Biesheuvel @ 2020-02-26 16:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: linux-efi, Ard Biesheuvel, Russell King, Marc Zyngier,
Nicolas Pitre, Catalin Marinas, Tony Lindgren, Linus Walleij
The EFI stub executes within the context of the zImage as it was
loaded by the firmware, which means it is treated as an ordinary
PE/COFF executable, which is loaded into memory, and cleaned to
the PoU to ensure that it can be executed safely while the MMU
and caches are on.
When the EFI stub hands over to the decompressor, we clean the caches
by set/way and disable the MMU and D-cache, to comply with the Linux
boot protocol for ARM. However, cache maintenance by set/way is not
sufficient to ensure that subsequent instruction fetches and data
accesses done with the MMU off see the correct data. This means that
proceeding as we do currently is not safe, especially since we also
perform data accesses with the MMU off, from a literal pool as well as
the stack.
So let's kick this can down the road a bit, and jump into the relocated
zImage before disabling the caches. This removes the requirement to
perform any by-VA cache maintenance on the original PE/COFF executable,
but it does require that the relocated zImage is cleaned to the PoC,
which is currently not the case. This will be addressed in a subsequent
patch.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm/boot/compressed/head.S | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 088b0a060876..39f7071d47c7 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -1461,6 +1461,17 @@ ENTRY(efi_stub_entry)
@ Preserve return value of efi_entry() in r4
mov r4, r0
bl cache_clean_flush
+
+ @ The PE/COFF loader might not have cleaned the code we are
+ @ running beyond the PoU, and so calling cache_off below from
+ @ inside the PE/COFF loader allocated region is unsafe. Let's
+ @ assume our own zImage relocation code did a better job, and
+ @ jump into its version of this routine before proceeding.
+ ldr r0, [sp] @ relocated zImage
+ ldr r1, .Ljmp
+ sub r1, r0, r1
+ mov pc, r1 @ no mode switch
+0:
bl cache_off
@ Set parameters for booting zImage according to boot protocol
@@ -1469,18 +1480,15 @@ ENTRY(efi_stub_entry)
mov r0, #0
mov r1, #0xFFFFFFFF
mov r2, r4
-
- @ Branch to (possibly) relocated zImage that is in [sp]
- ldr lr, [sp]
- ldr ip, =start_offset
- add lr, lr, ip
- mov pc, lr @ no mode switch
+ b __efi_start
efi_load_fail:
@ Return EFI_LOAD_ERROR to EFI firmware on error.
ldr r0, =0x80000001
ldmfd sp!, {ip, pc}
ENDPROC(efi_stub_entry)
+ .align 2
+.Ljmp: .long start - 0b
#endif
.align
--
2.17.1
* [PATCH v4 2/5] efi/arm: Pass start and end addresses to cache_clean_flush()
2020-02-26 16:57 [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores Ard Biesheuvel
2020-02-26 16:57 ` [PATCH v4 1/5] efi/arm: Work around missing cache maintenance in decompressor handover Ard Biesheuvel
@ 2020-02-26 16:57 ` Ard Biesheuvel
2020-02-26 16:57 ` [PATCH v4 3/5] ARM: decompressor: factor out routine to obtain the inflated image size Ard Biesheuvel
` (4 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Ard Biesheuvel @ 2020-02-26 16:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: linux-efi, Ard Biesheuvel, Russell King, Marc Zyngier,
Nicolas Pitre, Catalin Marinas, Tony Lindgren, Linus Walleij
In preparation for turning the decompressor's cache clean/flush
operations into proper by-VA maintenance for v7 cores, pass the
start and end addresses of the regions that need cache maintenance
into cache_clean_flush in registers r0 and r1.
Currently, all implementations of cache_clean_flush ignore these
values, so no functional change is expected as a result of this
patch.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm/boot/compressed/head.S | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 39f7071d47c7..8487221bedb0 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -1460,6 +1460,12 @@ ENTRY(efi_stub_entry)
@ Preserve return value of efi_entry() in r4
mov r4, r0
+ add r1, r4, #SZ_2M @ DT end
+ bl cache_clean_flush
+
+ ldr r0, [sp] @ relocated zImage
+ ldr r1, =_edata @ size of zImage
+ add r1, r1, r0 @ end of zImage
bl cache_clean_flush
@ The PE/COFF loader might not have cleaned the code we are
--
2.17.1
* [PATCH v4 3/5] ARM: decompressor: factor out routine to obtain the inflated image size
2020-02-26 16:57 [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores Ard Biesheuvel
2020-02-26 16:57 ` [PATCH v4 1/5] efi/arm: Work around missing cache maintenance in decompressor handover Ard Biesheuvel
2020-02-26 16:57 ` [PATCH v4 2/5] efi/arm: Pass start and end addresses to cache_clean_flush() Ard Biesheuvel
@ 2020-02-26 16:57 ` Ard Biesheuvel
2020-02-26 16:57 ` [PATCH v4 4/5] ARM: decompressor: prepare cache_clean_flush for doing by-VA maintenance Ard Biesheuvel
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Ard Biesheuvel @ 2020-02-26 16:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: linux-efi, Ard Biesheuvel, Russell King, Marc Zyngier,
Nicolas Pitre, Catalin Marinas, Tony Lindgren, Linus Walleij
Before adding another reference to the inflated image size, factor
out the slightly complicated way of loading the unaligned little-endian
constant from the end of the compressed data.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm/boot/compressed/head.S | 43 ++++++++++++--------
1 file changed, 26 insertions(+), 17 deletions(-)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 8487221bedb0..d45952aae2b5 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -151,6 +151,25 @@
.L_\@:
.endm
+ /*
+ * The kernel build system appends the size of the
+ * decompressed kernel at the end of the compressed data
+ * in little-endian form.
+ */
+ .macro get_inflated_image_size, res:req, tmp1:req, tmp2:req
+ adr \res, .Linflated_image_size_offset
+ ldr \tmp1, [\res]
+ add \tmp1, \tmp1, \res @ address of inflated image size
+
+ ldrb \res, [\tmp1] @ get_unaligned_le32
+ ldrb \tmp2, [\tmp1, #1]
+ orr \res, \res, \tmp2, lsl #8
+ ldrb \tmp2, [\tmp1, #2]
+ ldrb \tmp1, [\tmp1, #3]
+ orr \res, \res, \tmp2, lsl #16
+ orr \res, \res, \tmp1, lsl #24
+ .endm
+
.section ".start", "ax"
/*
* sort out different calling conventions
@@ -268,15 +287,15 @@ not_angel:
*/
mov r0, pc
cmp r0, r4
- ldrcc r0, LC0+32
+ ldrcc r0, LC0+28
addcc r0, r0, pc
cmpcc r4, r0
orrcc r4, r4, #1 @ remember we skipped cache_on
blcs cache_on
restart: adr r0, LC0
- ldmia r0, {r1, r2, r3, r6, r10, r11, r12}
- ldr sp, [r0, #28]
+ ldmia r0, {r1, r2, r3, r6, r11, r12}
+ ldr sp, [r0, #24]
/*
* We might be running at a different address. We need
@@ -284,20 +303,8 @@ restart: adr r0, LC0
*/
sub r0, r0, r1 @ calculate the delta offset
add r6, r6, r0 @ _edata
- add r10, r10, r0 @ inflated kernel size location
- /*
- * The kernel build system appends the size of the
- * decompressed kernel at the end of the compressed data
- * in little-endian form.
- */
- ldrb r9, [r10, #0]
- ldrb lr, [r10, #1]
- orr r9, r9, lr, lsl #8
- ldrb lr, [r10, #2]
- ldrb r10, [r10, #3]
- orr r9, r9, lr, lsl #16
- orr r9, r9, r10, lsl #24
+ get_inflated_image_size r9, r10, lr
#ifndef CONFIG_ZBOOT_ROM
/* malloc space is above the relocated stack (64k max) */
@@ -652,13 +659,15 @@ LC0: .word LC0 @ r1
.word __bss_start @ r2
.word _end @ r3
.word _edata @ r6
- .word input_data_end - 4 @ r10 (inflated size location)
.word _got_start @ r11
.word _got_end @ ip
.word .L_user_stack_end @ sp
.word _end - restart + 16384 + 1024*1024
.size LC0, . - LC0
+.Linflated_image_size_offset:
+ .long (input_data_end - 4) - .
+
#ifdef CONFIG_ARCH_RPC
.globl params
params: ldr r0, =0x10000100 @ params_phys for RPC
--
2.17.1
* [PATCH v4 4/5] ARM: decompressor: prepare cache_clean_flush for doing by-VA maintenance
2020-02-26 16:57 [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores Ard Biesheuvel
` (2 preceding siblings ...)
2020-02-26 16:57 ` [PATCH v4 3/5] ARM: decompressor: factor out routine to obtain the inflated image size Ard Biesheuvel
@ 2020-02-26 16:57 ` Ard Biesheuvel
2020-02-26 16:57 ` [PATCH v4 5/5] ARM: decompressor: switch to by-VA cache maintenance for v7 cores Ard Biesheuvel
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Ard Biesheuvel @ 2020-02-26 16:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: linux-efi, Ard Biesheuvel, Russell King, Marc Zyngier,
Nicolas Pitre, Catalin Marinas, Tony Lindgren, Linus Walleij
In preparation for turning the decompressor's cache clean/flush
operations into proper by-VA maintenance for v7 cores, pass the
start and end addresses of the regions that need cache maintenance
into cache_clean_flush in registers r0 and r1.
Currently, all implementations of cache_clean_flush ignore these
values, so no functional change is expected as a result of this
patch.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm/boot/compressed/head.S | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index d45952aae2b5..f90034151aef 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -528,6 +528,8 @@ dtb_check_done:
/* Preserve offset to relocated code. */
sub r6, r9, r6
+ mov r0, r9 @ start of relocated zImage
+ add r1, sp, r6 @ end of relocated zImage
#ifndef CONFIG_ZBOOT_ROM
/* cache_clean_flush may use the stack, so relocate it */
add sp, sp, r6
@@ -629,6 +631,11 @@ not_relocated: mov r0, #0
add r2, sp, #0x10000 @ 64k max
mov r3, r7
bl decompress_kernel
+
+ get_inflated_image_size r1, r2, r3
+
+ mov r0, r4 @ start of inflated image
+ add r1, r1, r0 @ end of inflated image
bl cache_clean_flush
bl cache_off
@@ -1182,6 +1189,9 @@ __armv7_mmu_cache_off:
/*
* Clean and flush the cache to maintain consistency.
*
+ * On entry,
+ * r0 = start address
+ * r1 = end address (exclusive)
* On exit,
* r1, r2, r3, r9, r10, r11, r12 corrupted
* This routine must preserve:
--
2.17.1
* [PATCH v4 5/5] ARM: decompressor: switch to by-VA cache maintenance for v7 cores
2020-02-26 16:57 [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores Ard Biesheuvel
` (3 preceding siblings ...)
2020-02-26 16:57 ` [PATCH v4 4/5] ARM: decompressor: prepare cache_clean_flush for doing by-VA maintenance Ard Biesheuvel
@ 2020-02-26 16:57 ` Ard Biesheuvel
2020-02-26 19:14 ` [PATCH v4 0/5] ARM: decompressor: use " Tony Lindgren
2020-02-27 10:11 ` Linus Walleij
6 siblings, 0 replies; 11+ messages in thread
From: Ard Biesheuvel @ 2020-02-26 16:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: linux-efi, Ard Biesheuvel, Russell King, Marc Zyngier,
Nicolas Pitre, Catalin Marinas, Tony Lindgren, Linus Walleij
Update the v7 cache_clean_flush routine to take into account the
memory range passed in r0/r1, and perform cache maintenance by
virtual address on this range instead of set/way maintenance, which
is inappropriate for the purpose of maintaining the cached state of
memory contents.
Since this removes any use of the stack in the implementation of
cache_clean_flush(), we can also drop some code that manages the
value of the stack pointer before calling it.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm/boot/compressed/head.S | 83 +++++++-------------
1 file changed, 30 insertions(+), 53 deletions(-)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index f90034151aef..4f7c6145e31f 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -530,11 +530,6 @@ dtb_check_done:
mov r0, r9 @ start of relocated zImage
add r1, sp, r6 @ end of relocated zImage
-#ifndef CONFIG_ZBOOT_ROM
- /* cache_clean_flush may use the stack, so relocate it */
- add sp, sp, r6
-#endif
-
bl cache_clean_flush
badr r0, restart
@@ -683,6 +678,24 @@ params: ldr r0, =0x10000100 @ params_phys for RPC
.align
#endif
+/*
+ * dcache_line_size - get the minimum D-cache line size from the CTR register
+ * on ARMv7.
+ */
+ .macro dcache_line_size, reg, tmp
+#ifdef CONFIG_CPU_V7M
+ movw \tmp, #:lower16:BASEADDR_V7M_SCB + V7M_SCB_CTR
+ movt \tmp, #:upper16:BASEADDR_V7M_SCB + V7M_SCB_CTR
+ ldr \tmp, [\tmp]
+#else
+ mrc p15, 0, \tmp, c0, c0, 1 @ read ctr
+#endif
+ lsr \tmp, \tmp, #16
+ and \tmp, \tmp, #0xf @ cache line size encoding
+ mov \reg, #4 @ bytes per word
+ mov \reg, \reg, lsl \tmp @ actual cache line size
+ .endm
+
/*
* Turn on the cache. We need to setup some page tables so that we
* can have both the I and D caches on.
@@ -1175,8 +1188,6 @@ __armv7_mmu_cache_off:
bic r0, r0, #0x000c
#endif
mcr p15, 0, r0, c1, c0 @ turn MMU and cache off
- mov r12, lr
- bl __armv7_mmu_cache_flush
mov r0, #0
#ifdef CONFIG_MMU
mcr p15, 0, r0, c8, c7, 0 @ invalidate whole TLB
@@ -1184,7 +1195,7 @@ __armv7_mmu_cache_off:
mcr p15, 0, r0, c7, c5, 6 @ invalidate BTC
mcr p15, 0, r0, c7, c10, 4 @ DSB
mcr p15, 0, r0, c7, c5, 4 @ ISB
- mov pc, r12
+ mov pc, lr
/*
* Clean and flush the cache to maintain consistency.
@@ -1200,6 +1211,7 @@ __armv7_mmu_cache_off:
.align 5
cache_clean_flush:
mov r3, #16
+ mov r11, r1
b call_cache_fn
__armv4_mpu_cache_flush:
@@ -1250,51 +1262,16 @@ __armv7_mmu_cache_flush:
mcr p15, 0, r10, c7, c14, 0 @ clean+invalidate D
b iflush
hierarchical:
- mcr p15, 0, r10, c7, c10, 5 @ DMB
- stmfd sp!, {r0-r7, r9-r11}
- mrc p15, 1, r0, c0, c0, 1 @ read clidr
- ands r3, r0, #0x7000000 @ extract loc from clidr
- mov r3, r3, lsr #23 @ left align loc bit field
- beq finished @ if loc is 0, then no need to clean
- mov r10, #0 @ start clean at cache level 0
-loop1:
- add r2, r10, r10, lsr #1 @ work out 3x current cache level
- mov r1, r0, lsr r2 @ extract cache type bits from clidr
- and r1, r1, #7 @ mask of the bits for current cache only
- cmp r1, #2 @ see what cache we have at this level
- blt skip @ skip if no cache, or just i-cache
- mcr p15, 2, r10, c0, c0, 0 @ select current cache level in cssr
- mcr p15, 0, r10, c7, c5, 4 @ isb to sych the new cssr&csidr
- mrc p15, 1, r1, c0, c0, 0 @ read the new csidr
- and r2, r1, #7 @ extract the length of the cache lines
- add r2, r2, #4 @ add 4 (line length offset)
- ldr r4, =0x3ff
- ands r4, r4, r1, lsr #3 @ find maximum number on the way size
- clz r5, r4 @ find bit position of way size increment
- ldr r7, =0x7fff
- ands r7, r7, r1, lsr #13 @ extract max number of the index size
-loop2:
- mov r9, r4 @ create working copy of max way size
-loop3:
- ARM( orr r11, r10, r9, lsl r5 ) @ factor way and cache number into r11
- ARM( orr r11, r11, r7, lsl r2 ) @ factor index number into r11
- THUMB( lsl r6, r9, r5 )
- THUMB( orr r11, r10, r6 ) @ factor way and cache number into r11
- THUMB( lsl r6, r7, r2 )
- THUMB( orr r11, r11, r6 ) @ factor index number into r11
- mcr p15, 0, r11, c7, c14, 2 @ clean & invalidate by set/way
- subs r9, r9, #1 @ decrement the way
- bge loop3
- subs r7, r7, #1 @ decrement the index
- bge loop2
-skip:
- add r10, r10, #2 @ increment cache number
- cmp r3, r10
- bgt loop1
-finished:
- ldmfd sp!, {r0-r7, r9-r11}
- mov r10, #0 @ switch back to cache level 0
- mcr p15, 2, r10, c0, c0, 0 @ select current cache level in cssr
+ dcache_line_size r1, r2 @ r1 := dcache min line size
+ sub r2, r1, #1 @ r2 := line size mask
+ bic r0, r0, r2 @ round down start to line size
+ sub r11, r11, #1 @ end address is exclusive
+ bic r11, r11, r2 @ round down end to line size
+0: cmp r0, r11 @ finished?
+ bgt iflush
+ mcr p15, 0, r0, c7, c14, 1 @ Dcache clean/invalidate by VA
+ add r0, r0, r1
+ b 0b
iflush:
mcr p15, 0, r10, c7, c10, 4 @ DSB
mcr p15, 0, r10, c7, c5, 0 @ invalidate I+BTB
--
2.17.1
* Re: [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores
2020-02-26 16:57 [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores Ard Biesheuvel
` (4 preceding siblings ...)
2020-02-26 16:57 ` [PATCH v4 5/5] ARM: decompressor: switch to by-VA cache maintenance for v7 cores Ard Biesheuvel
@ 2020-02-26 19:14 ` Tony Lindgren
2020-02-27 10:11 ` Linus Walleij
6 siblings, 0 replies; 11+ messages in thread
From: Tony Lindgren @ 2020-02-26 19:14 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-arm-kernel, linux-efi, Russell King, Marc Zyngier,
Nicolas Pitre, Catalin Marinas, Linus Walleij
* Ard Biesheuvel <ardb@kernel.org> [200226 16:58]:
> While making changes to the EFI stub startup code, I noticed that we are
> still doing set/way maintenance on the caches when booting on v7 cores.
> This works today on VMs by virtue of the fact that KVM traps set/way ops
> and cleans the whole address space by VA on behalf of the guest, and on
> most v7 hardware, the set/way ops are in fact sufficient when only one
> core is running, as there usually is no system cache. But on systems
> like SynQuacer, for which 32-bit firmware is available, the current cache
> maintenance only pushes the data out to the L3 system cache, where it
> is not visible to the CPU once it turns the MMU and caches off.
>
> So instead, switch to the by-VA cache maintenance that the architecture
> requires for v7 and later (and ARM1176, as a side effect).
>
> Changes since v3:
> - ensure that the region that is cleaned after self-relocation of the zImage
> covers the appended DTB, if present
I gave these a try on top of the earlier "arm: fix Kbuild issue caused
by per-task stack protector GCC plugin" and booting still works for
me on armv7 including appended dtb:
Tested-by: Tony Lindgren <tony@atomide.com>
* Re: [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores
2020-02-26 16:57 [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores Ard Biesheuvel
` (5 preceding siblings ...)
2020-02-26 19:14 ` [PATCH v4 0/5] ARM: decompressor: use " Tony Lindgren
@ 2020-02-27 10:11 ` Linus Walleij
2020-02-27 16:01 ` Marc Zyngier
6 siblings, 1 reply; 11+ messages in thread
From: Linus Walleij @ 2020-02-27 10:11 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Linux ARM, linux-efi, Russell King, Marc Zyngier, Nicolas Pitre,
Catalin Marinas, Tony Lindgren
On Wed, Feb 26, 2020 at 5:57 PM Ard Biesheuvel <ardb@kernel.org> wrote:
> So instead, switch to the by-VA cache maintenance that the architecture
> requires for v7 and later (and ARM1176, as a side effect).
>
> Changes since v3:
> - ensure that the region that is cleaned after self-relocation of the zImage
> covers the appended DTB, if present
>
> Apologies to Linus, but due to this change, I decided not to take your
> Tested-by into account, and I would appreciate it if you could retest
> this version of the series? Thanks.
No problem, I have tested it on the following:
- ARMv7 Cortex A9 x 2 Qualcomm APQ8060 DragonBoard
- ARM PB11MPCore (4 x 1176)
- ARMv7 Ux500 Cortex A9 x 2
The PB11MPCore is again the crucial board, if it works on that
board it works on anything, most of the time :D
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Yours,
Linus Walleij
* Re: [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores
2020-02-27 10:11 ` Linus Walleij
@ 2020-02-27 16:01 ` Marc Zyngier
2020-02-27 16:47 ` Ard Biesheuvel
0 siblings, 1 reply; 11+ messages in thread
From: Marc Zyngier @ 2020-02-27 16:01 UTC (permalink / raw)
To: Linus Walleij
Cc: Ard Biesheuvel, Linux ARM, linux-efi, Russell King,
Nicolas Pitre, Catalin Marinas, Tony Lindgren
On 2020-02-27 10:11, Linus Walleij wrote:
> On Wed, Feb 26, 2020 at 5:57 PM Ard Biesheuvel <ardb@kernel.org> wrote:
>
>> So instead, switch to the by-VA cache maintenance that the
>> architecture
>> requires for v7 and later (and ARM1176, as a side effect).
>>
>> Changes since v3:
>> - ensure that the region that is cleaned after self-relocation of the
>> zImage
>> covers the appended DTB, if present
>>
>> Apologies to Linus, but due to this change, I decided not to take your
>> Tested-by into account, and I would appreciate it if you could retest
>> this version of the series? Thanks.
>
> No problem, I have tested it on the following:
>
> - ARMv7 Cortex A9 x 2 Qualcomm APQ8060 DragonBoard
> - ARM PB11MPCore (4 x 1176)
<pedant>
The ARM11MPCore isn't a bunch of 1176s glued together. It is actually a
very different CPU, designed by a different team.
</pedant>
> - ARMv7 Ux500 Cortex A9 x 2
>
> The PB11MPCore is again the crucial board, if it work on that
> board it works on anything, most of the time :D
That I can only agree with! ;-)
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores
2020-02-27 16:01 ` Marc Zyngier
@ 2020-02-27 16:47 ` Ard Biesheuvel
2020-02-27 16:53 ` Marc Zyngier
0 siblings, 1 reply; 11+ messages in thread
From: Ard Biesheuvel @ 2020-02-27 16:47 UTC (permalink / raw)
To: Marc Zyngier
Cc: Linus Walleij, Linux ARM, linux-efi, Russell King, Nicolas Pitre,
Catalin Marinas, Tony Lindgren
On Thu, 27 Feb 2020 at 17:01, Marc Zyngier <maz@kernel.org> wrote:
>
> On 2020-02-27 10:11, Linus Walleij wrote:
> > On Wed, Feb 26, 2020 at 5:57 PM Ard Biesheuvel <ardb@kernel.org> wrote:
> >
> >> So instead, switch to the by-VA cache maintenance that the
> >> architecture
> >> requires for v7 and later (and ARM1176, as a side effect).
> >>
> >> Changes since v3:
> >> - ensure that the region that is cleaned after self-relocation of the
> >> zImage
> >> covers the appended DTB, if present
> >>
> >> Apologies to Linus, but due to this change, I decided not to take your
> >> Tested-by into account, and I would appreciate it if you could retest
> >> this version of the series? Thanks.
> >
> > No problem, I have tested it on the following:
> >
> > - ARMv7 Cortex A9 x 2 Qualcomm APQ8060 DragonBoard
> > - ARM PB11MPCore (4 x 1176)
>
> <pedant>
>
> The ARM11MPCore isn't a bunch of 1176s glued together. It is actually a
> very
> different CPU, designed by a different team.
>
> </pedant>
>
It still takes the same code path in the cache routines, afaict:
- the architecture field in the main id register == 0xf, so it uses
__armv7_mmu_cache_flush
- ID_MMFR1[19:16] == 0x2, so it does not take the 'hierarchical' code
path which is modified by these patches
> > - ARMv7 Ux500 Cortex A9 x 2
> >
> > The PB11MPCore is again the crucial board, if it work on that
> > board it works on anything, most of the time :D
>
> That I can only agree with! ;-)
>
> M.
> --
> Jazz is not dead. It just smells funny...
* Re: [PATCH v4 0/5] ARM: decompressor: use by-VA cache maintenance for v7 cores
2020-02-27 16:47 ` Ard Biesheuvel
@ 2020-02-27 16:53 ` Marc Zyngier
0 siblings, 0 replies; 11+ messages in thread
From: Marc Zyngier @ 2020-02-27 16:53 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Linus Walleij, Linux ARM, linux-efi, Russell King, Nicolas Pitre,
Catalin Marinas, Tony Lindgren
On 2020-02-27 16:47, Ard Biesheuvel wrote:
> On Thu, 27 Feb 2020 at 17:01, Marc Zyngier <maz@kernel.org> wrote:
>>
>> On 2020-02-27 10:11, Linus Walleij wrote:
>> > On Wed, Feb 26, 2020 at 5:57 PM Ard Biesheuvel <ardb@kernel.org> wrote:
>> >
>> >> So instead, switch to the by-VA cache maintenance that the
>> >> architecture
>> >> requires for v7 and later (and ARM1176, as a side effect).
>> >>
>> >> Changes since v3:
>> >> - ensure that the region that is cleaned after self-relocation of the
>> >> zImage
>> >> covers the appended DTB, if present
>> >>
>> >> Apologies to Linus, but due to this change, I decided not to take your
>> >> Tested-by into account, and I would appreciate it if you could retest
>> >> this version of the series? Thanks.
>> >
>> > No problem, I have tested it on the following:
>> >
>> > - ARMv7 Cortex A9 x 2 Qualcomm APQ8060 DragonBoard
>> > - ARM PB11MPCore (4 x 1176)
>>
>> <pedant>
>>
>> The ARM11MPCore isn't a bunch of 1176s glued together. It is actually
>> a
>> very
>> different CPU, designed by a different team.
>>
>> </pedant>
>>
>
> It still takes the same code path in the cache routines, afaict:
> - the architecture field in the main id register == 0xf, so it uses
> __armv7_mmu_cache_flush
> - ID_MMFR1[19:16] == 0x2, so it does not take the 'hierarchical' code
> path which is modified by these patches
Absolutely. From a SW perspective, this is treated in a similar way to
ARM1176. The underlying HW is very different though...
Thanks,
M.
--
Jazz is not dead. It just smells funny...