linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs
@ 2023-03-20 13:18 Ard Biesheuvel
  2023-03-20 13:18 ` [PATCH v4 01/12] ARM: vfp: Pass thread_info pointer to vfp_support_entry Ard Biesheuvel
                   ` (12 more replies)
  0 siblings, 13 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

As it turns out, enabling or disabling softirqs from asm code is
trickier than it looks. Simply bumping the associated bit in
preempt_count does the trick for uninstrumented kernels, but with
lockdep or preempt debugging enabled, we really need to call the C
versions, as fully replicating their behavior in asm is intractable.

So let's rework the existing code a little bit so we can interpose a
little C helper 'vfp_entry()' that disables softirqs before calling the
existing vfpstate handling code in asm. Re-enabling softirqs from asm
code is more straightforward, as we can simply perform a tail call via
a C routine that is guaranteed to be callable as a function.

However, since calling these APIs from asm code is still not ideal,
let's reimplement the whole thing in C (patch #7).
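
For illustration, the interposed helper ends up looking roughly like the
version introduced by patch #3 below (a sketch of the intermediate state,
not the final code; patch #7 subsequently folds the remaining asm logic
into C as well):

  asmlinkage void vfp_entry(u32 trigger, struct thread_info *ti, u32 resume_pc,
                            u32 resume_return_address)
  {
          if (unlikely(!have_vfp))
                  return;

          local_bh_disable();     /* proper C call, with instrumentation */
          vfp_support_entry(trigger, ti, resume_pc, resume_return_address);
  }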


Changes since v3:

I got a bit carried away, as I noticed that the NEON opcode matching
implemented in asm in entry-armv.S basically duplicates how undef hooks
work, and so I have added a patch that drops the associated asm code and
moves it into C code, reusing the existing VFP undef hook we have for
kernel mode exceptions. (patch #9)

With that removed, no Thumb mode undef exceptions remain that are
dispatched by the coprocessor undef handling, so that can be simplified
as well (patch #10)

iWMMXT undef exceptions can be handled using an undef hook as well
(patch #11), which allows us to compile out the coprocessor dispatch
entirely unless we have FPE enabled, which is only the case for OABI or
OABI-compat kernels. This removes the non-standard calling convention
which returns via register R9 on success, and which can only be
implemented in asm (patch #12).

Patch #4 is a fix for an issue that I spotted while working on this,
and patch #5 is an enhancement that makes it easier to keep track of how
many VFP traps were taken for emulation purposes.

The first four patches are fixes; the remaining ones might go into v6.4.

This also fixes the !CONFIG_VFP build error spotted by Guenter.


Changes since v2:
- add Rbs and Tbs from Linus and Guenter (thanks!)
- correct reference to local_bh_enable() vs __local_bh_enable_ip() in
  commit log of patch #3
- add patch that reworks the asm VFP exception handling entirely so the
  bulk of it is implemented in C. This could be taken into v6.4, while
  the preceding patches are fixes for v6.3-rc

Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/all/ZBBYCSZUJOWBg1s8@localhost.localdomain/

Ard Biesheuvel (12):
  ARM: vfp: Pass thread_info pointer to vfp_support_entry
  ARM: vfp: Pass successful return address via register R3
  ARM: vfp: Fix broken softirq handling with instrumentation enabled
  ARM: entry: Fix iWMMXT TIF flag handling
  ARM: vfp: Record VFP bounces as perf emulation faults
  ARM: vfp: Remove workaround for Feroceon CPUs
  ARM: vfp: Reimplement VFP exception entry in C code
  ARM: kernel: Get rid of thread_info::used_cp[] array
  ARM: vfp: Use undef hook for handling VFP exceptions
  ARM: entry: Disregard Thumb undef exception in coproc dispatch
  ARM: iwmmxt: Use undef hook to enable coprocessor for task
  ARM: entry: Make asm coproc dispatch code NWFPE only

 arch/arm/include/asm/assembler.h   |  13 --
 arch/arm/include/asm/thread_info.h |  17 +-
 arch/arm/kernel/asm-offsets.c      |   1 -
 arch/arm/kernel/entry-armv.S       | 138 +++-----------
 arch/arm/kernel/iwmmxt.S           |   9 +
 arch/arm/kernel/pj4-cp0.c          |   1 +
 arch/arm/kernel/process.c          |   1 -
 arch/arm/kernel/ptrace.c           |   2 -
 arch/arm/kernel/xscale-cp0.c       |   1 +
 arch/arm/mm/proc-feroceon.S        |   4 +
 arch/arm/vfp/Makefile              |   2 +-
 arch/arm/vfp/entry.S               |  39 ----
 arch/arm/vfp/vfp.h                 |   1 +
 arch/arm/vfp/vfphw.S               | 200 ++------------------
 arch/arm/vfp/vfpmodule.c           | 195 ++++++++++++++-----
 15 files changed, 219 insertions(+), 405 deletions(-)
 delete mode 100644 arch/arm/vfp/entry.S

-- 
2.39.2



* [PATCH v4 01/12] ARM: vfp: Pass thread_info pointer to vfp_support_entry
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-20 13:18 ` [PATCH v4 02/12] ARM: vfp: Pass successful return address via register R3 Ard Biesheuvel
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

Instead of dereferencing thread_info in do_vfp, pass the thread_info
pointer to vfp_support_entry via R1. That way, we only use a single
caller-saved register, which makes it easier to convert do_vfp to C code
in a subsequent patch.

Note that, unlike the CPU number, which can change due to preemption,
passing the thread_info pointer can safely be done with preemption
enabled.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
---
 arch/arm/vfp/entry.S |  5 +----
 arch/arm/vfp/vfphw.S | 10 +++++++---
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
index 9a89264cdcc0b46e..cfedc2a3dbd68f1c 100644
--- a/arch/arm/vfp/entry.S
+++ b/arch/arm/vfp/entry.S
@@ -22,15 +22,12 @@
 @  IRQs enabled.
 @
 ENTRY(do_vfp)
-	local_bh_disable r10, r4
+	mov	r1, r10
  	ldr	r4, .LCvfp
-	ldr	r11, [r10, #TI_CPU]	@ CPU number
-	add	r10, r10, #TI_VFPSTATE	@ r10 = workspace
 	ldr	pc, [r4]		@ call VFP entry point
 ENDPROC(do_vfp)
 
 ENTRY(vfp_null_entry)
-	local_bh_enable_ti r10, r4
 	ret	lr
 ENDPROC(vfp_null_entry)
 
diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
index 26c4f61ecfa39638..6d056d810e4868c2 100644
--- a/arch/arm/vfp/vfphw.S
+++ b/arch/arm/vfp/vfphw.S
@@ -6,9 +6,9 @@
  *  Written by Deep Blue Solutions Limited.
  *
  * This code is called from the kernel's undefined instruction trap.
+ * r1 holds the thread_info pointer
  * r9 holds the return address for successful handling.
  * lr holds the return address for unrecognised instructions.
- * r10 points at the start of the private FP workspace in the thread structure
  * sp points to a struct pt_regs (as defined in include/asm/proc/ptrace.h)
  */
 #include <linux/init.h>
@@ -69,13 +69,17 @@
 @ VFP hardware support entry point.
 @
 @  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
+@  r1  = thread_info pointer
 @  r2  = PC value to resume execution after successful emulation
 @  r9  = normal "successful" return address
-@  r10 = vfp_state union
-@  r11 = CPU number
 @  lr  = unrecognised instruction return address
 @  IRQs enabled.
 ENTRY(vfp_support_entry)
+	local_bh_disable r1, r4
+
+	ldr	r11, [r1, #TI_CPU]	@ CPU number
+	add	r10, r1, #TI_VFPSTATE	@ r10 = workspace
+
 	DBGSTR3	"instr %08x pc %08x state %p", r0, r2, r10
 
 	.fpu	vfpv2
-- 
2.39.2



* [PATCH v4 02/12] ARM: vfp: Pass successful return address via register R3
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
  2023-03-20 13:18 ` [PATCH v4 01/12] ARM: vfp: Pass thread_info pointer to vfp_support_entry Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-20 13:18 ` [PATCH v4 03/12] ARM: vfp: Fix broken softirq handling with instrumentation enabled Ard Biesheuvel
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

In preparation for reimplementing the do_vfp()->vfp_support_entry()
handover in C code, switch to using R3 to pass the 'success' return
address rather than R9, as R9 cannot be used for parameter passing in a
C function call (only R0-R3 carry arguments).

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
---
 arch/arm/vfp/entry.S |  1 +
 arch/arm/vfp/vfphw.S | 14 +++++++-------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
index cfedc2a3dbd68f1c..6dabb47617781a5f 100644
--- a/arch/arm/vfp/entry.S
+++ b/arch/arm/vfp/entry.S
@@ -23,6 +23,7 @@
 @
 ENTRY(do_vfp)
 	mov	r1, r10
+	mov	r3, r9
  	ldr	r4, .LCvfp
 	ldr	pc, [r4]		@ call VFP entry point
 ENDPROC(do_vfp)
diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
index 6d056d810e4868c2..60acd42e05786e95 100644
--- a/arch/arm/vfp/vfphw.S
+++ b/arch/arm/vfp/vfphw.S
@@ -7,7 +7,7 @@
  *
  * This code is called from the kernel's undefined instruction trap.
  * r1 holds the thread_info pointer
- * r9 holds the return address for successful handling.
+ * r3 holds the return address for successful handling.
  * lr holds the return address for unrecognised instructions.
  * sp points to a struct pt_regs (as defined in include/asm/proc/ptrace.h)
  */
@@ -71,7 +71,7 @@
 @  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
 @  r1  = thread_info pointer
 @  r2  = PC value to resume execution after successful emulation
-@  r9  = normal "successful" return address
+@  r3  = normal "successful" return address
 @  lr  = unrecognised instruction return address
 @  IRQs enabled.
 ENTRY(vfp_support_entry)
@@ -89,9 +89,9 @@ ENTRY(vfp_support_entry)
 	bne	look_for_VFP_exceptions	@ VFP is already enabled
 
 	DBGSTR1 "enable %x", r10
-	ldr	r3, vfp_current_hw_state_address
+	ldr	r9, vfp_current_hw_state_address
 	orr	r1, r1, #FPEXC_EN	@ user FPEXC has the enable bit set
-	ldr	r4, [r3, r11, lsl #2]	@ vfp_current_hw_state pointer
+	ldr	r4, [r9, r11, lsl #2]	@ vfp_current_hw_state pointer
 	bic	r5, r1, #FPEXC_EX	@ make sure exceptions are disabled
 	cmp	r4, r10			@ this thread owns the hw context?
 #ifndef CONFIG_SMP
@@ -150,7 +150,7 @@ vfp_reload_hw:
 #endif
 
 	DBGSTR1	"load state %p", r10
-	str	r10, [r3, r11, lsl #2]	@ update the vfp_current_hw_state pointer
+	str	r10, [r9, r11, lsl #2]	@ update the vfp_current_hw_state pointer
 					@ Load the saved state back into the VFP
 	VFPFLDMIA r10, r5		@ reload the working registers while
 					@ FPEXC is in a safe state
@@ -180,7 +180,7 @@ vfp_hw_state_valid:
 					@ always subtract 4 from the following
 					@ instruction address.
 	local_bh_enable_ti r10, r4
-	ret	r9			@ we think we have handled things
+	ret	r3			@ we think we have handled things
 
 
 look_for_VFP_exceptions:
@@ -210,7 +210,7 @@ skip:
 process_exception:
 	DBGSTR	"bounce"
 	mov	r2, sp			@ nothing stacked - regdump is at TOS
-	mov	lr, r9			@ setup for a return to the user code.
+	mov	lr, r3			@ setup for a return to the user code.
 
 	@ Now call the C code to package up the bounce to the support code
 	@   r0 holds the trigger instruction
-- 
2.39.2



* [PATCH v4 03/12] ARM: vfp: Fix broken softirq handling with instrumentation enabled
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
  2023-03-20 13:18 ` [PATCH v4 01/12] ARM: vfp: Pass thread_info pointer to vfp_support_entry Ard Biesheuvel
  2023-03-20 13:18 ` [PATCH v4 02/12] ARM: vfp: Pass successful return address via register R3 Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-04-09 14:29   ` Linux regression tracking (Thorsten Leemhuis)
  2023-03-20 13:18 ` [PATCH v4 04/12] ARM: entry: Fix iWMMXT TIF flag handling Ard Biesheuvel
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

Commit 62b95a7b44d1 ("ARM: 9282/1: vfp: Manipulate task VFP state with
softirqs disabled") replaced the en/disable preemption calls inside the
VFP state handling code with en/disabling of soft IRQs, which is
necessary to allow kernel use of the VFP/SIMD unit when handling a soft
IRQ.

Unfortunately, when lockdep is enabled (or other instrumentation that
enables TRACE_IRQFLAGS), the disable path implemented in asm fails to
perform the lockdep and RCU related bookkeeping, resulting in spurious
warnings and other badness.

So let's rework the VFP entry code a little bit so we can make the
local_bh_disable() call from C, with whatever instrumentation happens
to be configured. Calling local_bh_enable() can be done from asm, as it
is a simple wrapper around __local_bh_enable_ip(), which is always a
callable function.
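
For reference, the asm tail call added below is intended to be equivalent
to what local_bh_enable() itself expands to in C (a sketch; vfp_bh_enable()
is a made-up name for illustration, the callees and constants are the
existing kernel definitions):

  #include <linux/bottom_half.h>   /* __local_bh_enable_ip() */
  #include <linux/preempt.h>       /* SOFTIRQ_DISABLE_OFFSET */
  #include <linux/kernel.h>        /* _THIS_IP_ */

  /* what 'b __local_bh_enable_ip' with these arguments amounts to in C */
  static inline void vfp_bh_enable(void)
  {
          /* local_bh_enable() is a thin wrapper around this call */
          __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
  }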

Link: https://lore.kernel.org/all/ZBBYCSZUJOWBg1s8@localhost.localdomain/
Fixes: 62b95a7b44d1 ("ARM: 9282/1: vfp: Manipulate task VFP state with softirqs disabled")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
---
 arch/arm/include/asm/assembler.h | 13 ----------
 arch/arm/vfp/entry.S             | 11 +-------
 arch/arm/vfp/vfphw.S             | 12 ++++-----
 arch/arm/vfp/vfpmodule.c         | 27 ++++++++++++++++----
 4 files changed, 29 insertions(+), 34 deletions(-)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 06b48ce23e1ca245..505a306e0271a9c4 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -244,19 +244,6 @@ THUMB(	fpreg	.req	r7	)
 	.endm
 #endif
 
-	.macro	local_bh_disable, ti, tmp
-	ldr	\tmp, [\ti, #TI_PREEMPT]
-	add	\tmp, \tmp, #SOFTIRQ_DISABLE_OFFSET
-	str	\tmp, [\ti, #TI_PREEMPT]
-	.endm
-
-	.macro	local_bh_enable_ti, ti, tmp
-	get_thread_info \ti
-	ldr	\tmp, [\ti, #TI_PREEMPT]
-	sub	\tmp, \tmp, #SOFTIRQ_DISABLE_OFFSET
-	str	\tmp, [\ti, #TI_PREEMPT]
-	.endm
-
 #define USERL(l, x...)				\
 9999:	x;					\
 	.pushsection __ex_table,"a";		\
diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
index 6dabb47617781a5f..7483ef8bccda394c 100644
--- a/arch/arm/vfp/entry.S
+++ b/arch/arm/vfp/entry.S
@@ -24,14 +24,5 @@
 ENTRY(do_vfp)
 	mov	r1, r10
 	mov	r3, r9
- 	ldr	r4, .LCvfp
-	ldr	pc, [r4]		@ call VFP entry point
+	b	vfp_entry
 ENDPROC(do_vfp)
-
-ENTRY(vfp_null_entry)
-	ret	lr
-ENDPROC(vfp_null_entry)
-
-	.align	2
-.LCvfp:
-	.word	vfp_vector
diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
index 60acd42e05786e95..4d8478264d82b3d2 100644
--- a/arch/arm/vfp/vfphw.S
+++ b/arch/arm/vfp/vfphw.S
@@ -75,8 +75,6 @@
 @  lr  = unrecognised instruction return address
 @  IRQs enabled.
 ENTRY(vfp_support_entry)
-	local_bh_disable r1, r4
-
 	ldr	r11, [r1, #TI_CPU]	@ CPU number
 	add	r10, r1, #TI_VFPSTATE	@ r10 = workspace
 
@@ -179,9 +177,12 @@ vfp_hw_state_valid:
 					@ else it's one 32-bit instruction, so
 					@ always subtract 4 from the following
 					@ instruction address.
-	local_bh_enable_ti r10, r4
-	ret	r3			@ we think we have handled things
 
+	mov	lr, r3			@ we think we have handled things
+local_bh_enable_and_ret:
+	adr	r0, .
+	mov	r1, #SOFTIRQ_DISABLE_OFFSET
+	b	__local_bh_enable_ip	@ tail call
 
 look_for_VFP_exceptions:
 	@ Check for synchronous or asynchronous exception
@@ -204,8 +205,7 @@ skip:
 	@ not recognised by VFP
 
 	DBGSTR	"not VFP"
-	local_bh_enable_ti r10, r4
-	ret	lr
+	b	local_bh_enable_and_ret
 
 process_exception:
 	DBGSTR	"bounce"
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 01bc48d738478142..4c7473d2f89a040f 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -32,10 +32,9 @@
 /*
  * Our undef handlers (in entry.S)
  */
-asmlinkage void vfp_support_entry(void);
-asmlinkage void vfp_null_entry(void);
+asmlinkage void vfp_support_entry(u32, void *, u32, u32);
 
-asmlinkage void (*vfp_vector)(void) = vfp_null_entry;
+static bool have_vfp __ro_after_init;
 
 /*
  * Dual-use variable.
@@ -669,6 +668,25 @@ static int vfp_kmode_exception(struct pt_regs *regs, unsigned int instr)
 	return 1;
 }
 
+/*
+ * Entered with:
+ *
+ *  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
+ *  r1  = thread_info pointer
+ *  r2  = PC value to resume execution after successful emulation
+ *  r3  = normal "successful" return address
+ *  lr  = unrecognised instruction return address
+ */
+asmlinkage void vfp_entry(u32 trigger, struct thread_info *ti, u32 resume_pc,
+			  u32 resume_return_address)
+{
+	if (unlikely(!have_vfp))
+		return;
+
+	local_bh_disable();
+	vfp_support_entry(trigger, ti, resume_pc, resume_return_address);
+}
+
 static struct undef_hook vfp_kmode_exception_hook[] = {{
 	.instr_mask	= 0xfe000000,
 	.instr_val	= 0xf2000000,
@@ -798,7 +816,6 @@ static int __init vfp_init(void)
 	vfpsid = fmrx(FPSID);
 	barrier();
 	unregister_undef_hook(&vfp_detect_hook);
-	vfp_vector = vfp_null_entry;
 
 	pr_info("VFP support v0.3: ");
 	if (VFP_arch) {
@@ -883,7 +900,7 @@ static int __init vfp_init(void)
 				  "arm/vfp:starting", vfp_starting_cpu,
 				  vfp_dying_cpu);
 
-	vfp_vector = vfp_support_entry;
+	have_vfp = true;
 
 	thread_register_notifier(&vfp_notifier_block);
 	vfp_pm_init();
-- 
2.39.2



* [PATCH v4 04/12] ARM: entry: Fix iWMMXT TIF flag handling
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 03/12] ARM: vfp: Fix broken softirq handling with instrumentation enabled Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-21 14:32   ` Linus Walleij
  2023-03-20 13:18 ` [PATCH v4 05/12] ARM: vfp: Record VFP bounces as perf emulation faults Ard Biesheuvel
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann, stable

The conditional MOVS instruction that appears to have been added to test
for the TIF_USING_IWMMXT thread_info flag only sets the N and Z
condition flags and register R7, none of which are referenced in the
subsequent code. This means that the instruction does nothing, and so we
might misidentify faulting FPE instructions as iWMMXT instructions on
kernels that were built to support both.

This seems to have been part of the original submission of the code, so
it has never worked as intended and nobody ever noticed, which means we
might decide to just leave it as-is. However, with the ongoing move
towards multiplatform kernels, the issue becomes more likely to
manifest, and so it is better to fix it.

So check whether we are dealing with an undef exception regarding
coprocessor index #0 or #1, and if so, load the thread_info flag and
only dispatch it as an iWMMXT trap if the flag is set.
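
Expressed in C, the corrected test amounts to something like this (a rough
sketch for illustration only; undef_is_iwmmxt_trap() is a made-up name and
the real change is the asm hunk below):

  /* CP#0/CP#1 undef exceptions are iWMMXT traps only for tasks using iWMMXT */
  static bool undef_is_iwmmxt_trap(struct thread_info *ti, unsigned int cpnum)
  {
          return (cpnum == 0 || cpnum == 1) &&
                 test_ti_thread_flag(ti, TIF_USING_IWMMXT);
  }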

Cc: <stable@vger.kernel.org> # v2.6.9+
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/kernel/entry-armv.S | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index c39303e5c23470e6..c5d2f07994fb0d87 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -606,10 +606,11 @@ call_fpe:
 	strb	r7, [r6, #TI_USED_CP]		@ set appropriate used_cp[]
 #ifdef CONFIG_IWMMXT
 	@ Test if we need to give access to iWMMXt coprocessors
-	ldr	r5, [r10, #TI_FLAGS]
-	rsbs	r7, r8, #(1 << 8)		@ CP 0 or 1 only
-	movscs	r7, r5, lsr #(TIF_USING_IWMMXT + 1)
-	bcs	iwmmxt_task_enable
+	tst	r8, #0xe << 8			@ CP 0 or 1?
+	ldreq	r5, [r10, #TI_FLAGS]		@ if so, load thread_info flags
+	andeq	r5, r5, #1 << TIF_USING_IWMMXT	@ isolate TIF_USING_IWMMXT flag
+	teqeq	r5, #1 << TIF_USING_IWMMXT	@ check whether it is set
+	beq	iwmmxt_task_enable		@ branch if set
 #endif
  ARM(	add	pc, pc, r8, lsr #6	)
  THUMB(	lsr	r8, r8, #6		)
-- 
2.39.2



* [PATCH v4 05/12] ARM: vfp: Record VFP bounces as perf emulation faults
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 04/12] ARM: entry: Fix iWMMXT TIF flag handling Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-21 14:33   ` Linus Walleij
  2023-03-20 13:18 ` [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs Ard Biesheuvel
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

VFP 'bouncing' occurs when the VFP unit cannot complete the execution of
a VFP instruction, either because it is not implemented at all, or
because the values of the arguments are out of range for the hardware
implementation, and the software needs to step in to complete the
operation.

To give some insight into how much certain programs rely on this
bouncing, record the emulation of a VFP instruction in perf's
emulation-faults counter.

This can be used like so

  perf stat -e emulation-faults ./testfloat -all2

and the output will be something like

 Performance counter stats for './testfloat -all2':

           259,277      emulation-faults:u

       6.846432176 seconds time elapsed

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/vfp/vfpmodule.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 4c7473d2f89a040f..039c8dab990699e2 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -18,6 +18,7 @@
 #include <linux/uaccess.h>
 #include <linux/user.h>
 #include <linux/export.h>
+#include <linux/perf_event.h>
 
 #include <asm/cp15.h>
 #include <asm/cputype.h>
@@ -313,6 +314,7 @@ static u32 vfp_emulate_instruction(u32 inst, u32 fpscr, struct pt_regs *regs)
 		 * emulate it.
 		 */
 	}
+	perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, regs->ARM_pc);
 	return exceptions & ~VFP_NAN_FLAG;
 }
 
-- 
2.39.2



* [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 05/12] ARM: vfp: Record VFP bounces as perf emulation faults Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-21 14:44   ` Linus Walleij
  2023-03-20 13:18 ` [PATCH v4 07/12] ARM: vfp: Reimplement VFP exception entry in C code Ard Biesheuvel
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

Feroceon CPUs have a non-standard implementation of VFP which reports
synchronous VFP exceptions using the async VFP flag. This requires a
workaround which is difficult to reconcile with other implementations,
making it tricky to support both versions in a single image.

Since this is a v5 CPU, it is not supported by armhf, and so the
likelihood that anybody is using this with recent distros/kernels while
relying on the VFP at the same time is extremely low. So let's just
disable VFP support on these cores, so we can remove the workaround.

This will help future development to support v5 and v6 CPUs with a
single kernel image.
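
The mechanism used here is an __alias of the VFP_arch variable, so that
the Feroceon proc init code can mark the VFP checks as failed without
requiring a new export; the relevant vfpmodule.c hunk below boils down to:

  /* set to non-zero at startup if the VFP checks fail */
  static unsigned int VFP_arch;

  #ifdef CONFIG_CPU_FEROCEON
  /* alias so proc-feroceon.S can store 1 here and disable VFP handling */
  extern unsigned int VFP_arch_feroceon __alias(VFP_arch);
  #endif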

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/proc-feroceon.S | 4 ++++
 arch/arm/vfp/vfphw.S        | 4 ----
 arch/arm/vfp/vfpmodule.c    | 8 +++++---
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/arm/mm/proc-feroceon.S b/arch/arm/mm/proc-feroceon.S
index 61ce82aca6f0d603..072ff9b451f846bf 100644
--- a/arch/arm/mm/proc-feroceon.S
+++ b/arch/arm/mm/proc-feroceon.S
@@ -56,6 +56,10 @@ ENTRY(cpu_feroceon_proc_init)
 	movne	r2, r2, lsr #2			@ turned into # of sets
 	sub	r2, r2, #(1 << 5)
 	stmia	r1, {r2, r3}
+#ifdef CONFIG_VFP
+	mov	r1, #1				@ disable quirky VFP
+	str_l	r1, VFP_arch_feroceon, r2
+#endif
 	ret	lr
 
 /*
diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
index 4d8478264d82b3d2..8049c6830eeb1380 100644
--- a/arch/arm/vfp/vfphw.S
+++ b/arch/arm/vfp/vfphw.S
@@ -110,7 +110,6 @@ ENTRY(vfp_support_entry)
 	beq	vfp_reload_hw		@ then the hw state needs reloading
 	VFPFSTMIA r4, r5		@ save the working registers
 	VFPFMRX	r5, FPSCR		@ current status
-#ifndef CONFIG_CPU_FEROCEON
 	tst	r1, #FPEXC_EX		@ is there additional state to save?
 	beq	1f
 	VFPFMRX	r6, FPINST		@ FPINST (only if FPEXC.EX is set)
@@ -118,7 +117,6 @@ ENTRY(vfp_support_entry)
 	beq	1f
 	VFPFMRX	r8, FPINST2		@ FPINST2 if needed (and present)
 1:
-#endif
 	stmia	r4, {r1, r5, r6, r8}	@ save FPEXC, FPSCR, FPINST, FPINST2
 vfp_reload_hw:
 
@@ -153,7 +151,6 @@ vfp_reload_hw:
 	VFPFLDMIA r10, r5		@ reload the working registers while
 					@ FPEXC is in a safe state
 	ldmia	r10, {r1, r5, r6, r8}	@ load FPEXC, FPSCR, FPINST, FPINST2
-#ifndef CONFIG_CPU_FEROCEON
 	tst	r1, #FPEXC_EX		@ is there additional state to restore?
 	beq	1f
 	VFPFMXR	FPINST, r6		@ restore FPINST (only if FPEXC.EX is set)
@@ -161,7 +158,6 @@ vfp_reload_hw:
 	beq	1f
 	VFPFMXR	FPINST2, r8		@ FPINST2 if needed (and present)
 1:
-#endif
 	VFPFMXR	FPSCR, r5		@ restore status
 
 @ The context stored in the VFP hardware is up to date with this thread
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 039c8dab990699e2..dd31d13ca1d8fc8a 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -42,7 +42,11 @@ static bool have_vfp __ro_after_init;
  * Used in startup: set to non-zero if VFP checks fail
  * After startup, holds VFP architecture
  */
-static unsigned int __initdata VFP_arch;
+static unsigned int VFP_arch;
+
+#ifdef CONFIG_CPU_FEROCEON
+extern unsigned int VFP_arch_feroceon __alias(VFP_arch);
+#endif
 
 /*
  * The pointer to the vfpstate structure of the thread which currently
@@ -357,14 +361,12 @@ void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 	}
 
 	if (fpexc & FPEXC_EX) {
-#ifndef CONFIG_CPU_FEROCEON
 		/*
 		 * Asynchronous exception. The instruction is read from FPINST
 		 * and the interrupted instruction has to be restarted.
 		 */
 		trigger = fmrx(FPINST);
 		regs->ARM_pc -= 4;
-#endif
 	} else if (!(fpexc & FPEXC_DEX)) {
 		/*
 		 * Illegal combination of bits. It can be caused by an
-- 
2.39.2



* [PATCH v4 07/12] ARM: vfp: Reimplement VFP exception entry in C code
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (5 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-20 13:18 ` [PATCH v4 08/12] ARM: kernel: Get rid of thread_info::used_cp[] array Ard Biesheuvel
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

En/disabling softirqs from asm code turned out to be trickier than
expected, so vfp_support_entry now returns by tail calling
__local_bh_enable_ip() and passing the same arguments that a C call to
local_bh_enable() would pass. However, this is slightly hacky, as we
don't want to carry our own implementation of local_bh_enable().

So let's bite the bullet, and get rid of the asm logic in
vfp_support_entry that reasons about whether or not to save and/or
reload the VFP state, and about whether or not an FP exception is
pending, and only keep the VFP loading logic as a function that is
callable from C.

Replicate the removed logic in vfp_entry(), and use the exact same
reasoning as in the asm code. To emphasize the correspondence, retain
some of the asm comments in the C version as well.
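
After this change, the interface between the C and asm sides boils down
to the following declarations (collected here for reference from the
hunks below):

  /* C entry point, invoked from the do_vfp asm glue with the regs and opcode */
  asmlinkage int vfp_support_entry(struct pt_regs *regs, u32 trigger);

  /* the only remaining asm helpers, callable from C (arch/arm/vfp/vfp.h) */
  asmlinkage void vfp_save_state(void *location, u32 fpexc);
  asmlinkage u32 vfp_load_state(const void *location);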

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
---
 arch/arm/vfp/entry.S     |   9 +-
 arch/arm/vfp/vfp.h       |   1 +
 arch/arm/vfp/vfphw.S     | 202 ++------------------
 arch/arm/vfp/vfpmodule.c | 123 ++++++++++--
 4 files changed, 124 insertions(+), 211 deletions(-)

diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
index 7483ef8bccda394c..547c94c62cd3a66a 100644
--- a/arch/arm/vfp/entry.S
+++ b/arch/arm/vfp/entry.S
@@ -22,7 +22,10 @@
 @  IRQs enabled.
 @
 ENTRY(do_vfp)
-	mov	r1, r10
-	mov	r3, r9
-	b	vfp_entry
+	mov	r1, r0				@ pass trigger opcode via R1
+	mov	r0, sp				@ pass struct pt_regs via R0
+	bl	vfp_support_entry		@ dispatch the VFP exception
+	cmp	r0, #0				@ handled successfully?
+	reteq	r9				@ then use R9 as return address
+	ret	lr				@ pass to undef handler
 ENDPROC(do_vfp)
diff --git a/arch/arm/vfp/vfp.h b/arch/arm/vfp/vfp.h
index 5cd6d5053271760e..e43a630f8a164f9d 100644
--- a/arch/arm/vfp/vfp.h
+++ b/arch/arm/vfp/vfp.h
@@ -375,3 +375,4 @@ struct op {
 };
 
 asmlinkage void vfp_save_state(void *location, u32 fpexc);
+asmlinkage u32 vfp_load_state(const void *location);
diff --git a/arch/arm/vfp/vfphw.S b/arch/arm/vfp/vfphw.S
index 8049c6830eeb1380..d5a03f3c10c500f3 100644
--- a/arch/arm/vfp/vfphw.S
+++ b/arch/arm/vfp/vfphw.S
@@ -4,12 +4,6 @@
  *
  *  Copyright (C) 2004 ARM Limited.
  *  Written by Deep Blue Solutions Limited.
- *
- * This code is called from the kernel's undefined instruction trap.
- * r1 holds the thread_info pointer
- * r3 holds the return address for successful handling.
- * lr holds the return address for unrecognised instructions.
- * sp points to a struct pt_regs (as defined in include/asm/proc/ptrace.h)
  */
 #include <linux/init.h>
 #include <linux/linkage.h>
@@ -19,20 +13,6 @@
 #include <asm/assembler.h>
 #include <asm/asm-offsets.h>
 
-	.macro	DBGSTR, str
-#ifdef DEBUG
-	stmfd	sp!, {r0-r3, ip, lr}
-	ldr	r0, =1f
-	bl	_printk
-	ldmfd	sp!, {r0-r3, ip, lr}
-
-	.pushsection .rodata, "a"
-1:	.ascii	KERN_DEBUG "VFP: \str\n"
-	.byte	0
-	.previous
-#endif
-	.endm
-
 	.macro  DBGSTR1, str, arg
 #ifdef DEBUG
 	stmfd	sp!, {r0-r3, ip, lr}
@@ -48,175 +28,25 @@
 #endif
 	.endm
 
-	.macro  DBGSTR3, str, arg1, arg2, arg3
-#ifdef DEBUG
-	stmfd	sp!, {r0-r3, ip, lr}
-	mov	r3, \arg3
-	mov	r2, \arg2
-	mov	r1, \arg1
-	ldr	r0, =1f
-	bl	_printk
-	ldmfd	sp!, {r0-r3, ip, lr}
-
-	.pushsection .rodata, "a"
-1:	.ascii	KERN_DEBUG "VFP: \str\n"
-	.byte	0
-	.previous
-#endif
-	.endm
-
-
-@ VFP hardware support entry point.
-@
-@  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
-@  r1  = thread_info pointer
-@  r2  = PC value to resume execution after successful emulation
-@  r3  = normal "successful" return address
-@  lr  = unrecognised instruction return address
-@  IRQs enabled.
-ENTRY(vfp_support_entry)
-	ldr	r11, [r1, #TI_CPU]	@ CPU number
-	add	r10, r1, #TI_VFPSTATE	@ r10 = workspace
-
-	DBGSTR3	"instr %08x pc %08x state %p", r0, r2, r10
-
-	.fpu	vfpv2
-	VFPFMRX	r1, FPEXC		@ Is the VFP enabled?
-	DBGSTR1	"fpexc %08x", r1
-	tst	r1, #FPEXC_EN
-	bne	look_for_VFP_exceptions	@ VFP is already enabled
-
-	DBGSTR1 "enable %x", r10
-	ldr	r9, vfp_current_hw_state_address
-	orr	r1, r1, #FPEXC_EN	@ user FPEXC has the enable bit set
-	ldr	r4, [r9, r11, lsl #2]	@ vfp_current_hw_state pointer
-	bic	r5, r1, #FPEXC_EX	@ make sure exceptions are disabled
-	cmp	r4, r10			@ this thread owns the hw context?
-#ifndef CONFIG_SMP
-	@ For UP, checking that this thread owns the hw context is
-	@ sufficient to determine that the hardware state is valid.
-	beq	vfp_hw_state_valid
-
-	@ On UP, we lazily save the VFP context.  As a different
-	@ thread wants ownership of the VFP hardware, save the old
-	@ state if there was a previous (valid) owner.
-
-	VFPFMXR	FPEXC, r5		@ enable VFP, disable any pending
-					@ exceptions, so we can get at the
-					@ rest of it
-
-	DBGSTR1	"save old state %p", r4
-	cmp	r4, #0			@ if the vfp_current_hw_state is NULL
-	beq	vfp_reload_hw		@ then the hw state needs reloading
-	VFPFSTMIA r4, r5		@ save the working registers
-	VFPFMRX	r5, FPSCR		@ current status
-	tst	r1, #FPEXC_EX		@ is there additional state to save?
-	beq	1f
-	VFPFMRX	r6, FPINST		@ FPINST (only if FPEXC.EX is set)
-	tst	r1, #FPEXC_FP2V		@ is there an FPINST2 to read?
-	beq	1f
-	VFPFMRX	r8, FPINST2		@ FPINST2 if needed (and present)
-1:
-	stmia	r4, {r1, r5, r6, r8}	@ save FPEXC, FPSCR, FPINST, FPINST2
-vfp_reload_hw:
-
-#else
-	@ For SMP, if this thread does not own the hw context, then we
-	@ need to reload it.  No need to save the old state as on SMP,
-	@ we always save the state when we switch away from a thread.
-	bne	vfp_reload_hw
-
-	@ This thread has ownership of the current hardware context.
-	@ However, it may have been migrated to another CPU, in which
-	@ case the saved state is newer than the hardware context.
-	@ Check this by looking at the CPU number which the state was
-	@ last loaded onto.
-	ldr	ip, [r10, #VFP_CPU]
-	teq	ip, r11
-	beq	vfp_hw_state_valid
-
-vfp_reload_hw:
-	@ We're loading this threads state into the VFP hardware. Update
-	@ the CPU number which contains the most up to date VFP context.
-	str	r11, [r10, #VFP_CPU]
-
-	VFPFMXR	FPEXC, r5		@ enable VFP, disable any pending
-					@ exceptions, so we can get at the
-					@ rest of it
-#endif
-
-	DBGSTR1	"load state %p", r10
-	str	r10, [r9, r11, lsl #2]	@ update the vfp_current_hw_state pointer
+ENTRY(vfp_load_state)
+	@ Load the current VFP state
+	@ r0 - load location
+	@ returns FPEXC
+	DBGSTR1	"load VFP state %p", r0
 					@ Load the saved state back into the VFP
-	VFPFLDMIA r10, r5		@ reload the working registers while
+	VFPFLDMIA r0, r1		@ reload the working registers while
 					@ FPEXC is in a safe state
-	ldmia	r10, {r1, r5, r6, r8}	@ load FPEXC, FPSCR, FPINST, FPINST2
-	tst	r1, #FPEXC_EX		@ is there additional state to restore?
+	ldmia	r0, {r0-r3}		@ load FPEXC, FPSCR, FPINST, FPINST2
+	tst	r0, #FPEXC_EX		@ is there additional state to restore?
 	beq	1f
-	VFPFMXR	FPINST, r6		@ restore FPINST (only if FPEXC.EX is set)
-	tst	r1, #FPEXC_FP2V		@ is there an FPINST2 to write?
+	VFPFMXR	FPINST, r2		@ restore FPINST (only if FPEXC.EX is set)
+	tst	r0, #FPEXC_FP2V		@ is there an FPINST2 to write?
 	beq	1f
-	VFPFMXR	FPINST2, r8		@ FPINST2 if needed (and present)
+	VFPFMXR	FPINST2, r3		@ FPINST2 if needed (and present)
 1:
-	VFPFMXR	FPSCR, r5		@ restore status
-
-@ The context stored in the VFP hardware is up to date with this thread
-vfp_hw_state_valid:
-	tst	r1, #FPEXC_EX
-	bne	process_exception	@ might as well handle the pending
-					@ exception before retrying branch
-					@ out before setting an FPEXC that
-					@ stops us reading stuff
-	VFPFMXR	FPEXC, r1		@ Restore FPEXC last
-	sub	r2, r2, #4		@ Retry current instruction - if Thumb
-	str	r2, [sp, #S_PC]		@ mode it's two 16-bit instructions,
-					@ else it's one 32-bit instruction, so
-					@ always subtract 4 from the following
-					@ instruction address.
-
-	mov	lr, r3			@ we think we have handled things
-local_bh_enable_and_ret:
-	adr	r0, .
-	mov	r1, #SOFTIRQ_DISABLE_OFFSET
-	b	__local_bh_enable_ip	@ tail call
-
-look_for_VFP_exceptions:
-	@ Check for synchronous or asynchronous exception
-	tst	r1, #FPEXC_EX | FPEXC_DEX
-	bne	process_exception
-	@ On some implementations of the VFP subarch 1, setting FPSCR.IXE
-	@ causes all the CDP instructions to be bounced synchronously without
-	@ setting the FPEXC.EX bit
-	VFPFMRX	r5, FPSCR
-	tst	r5, #FPSCR_IXE
-	bne	process_exception
-
-	tst	r5, #FPSCR_LENGTH_MASK
-	beq	skip
-	orr	r1, r1, #FPEXC_DEX
-	b	process_exception
-skip:
-
-	@ Fall into hand on to next handler - appropriate coproc instr
-	@ not recognised by VFP
-
-	DBGSTR	"not VFP"
-	b	local_bh_enable_and_ret
-
-process_exception:
-	DBGSTR	"bounce"
-	mov	r2, sp			@ nothing stacked - regdump is at TOS
-	mov	lr, r3			@ setup for a return to the user code.
-
-	@ Now call the C code to package up the bounce to the support code
-	@   r0 holds the trigger instruction
-	@   r1 holds the FPEXC value
-	@   r2 pointer to register dump
-	b	VFP_bounce		@ we have handled this - the support
-					@ code will raise an exception if
-					@ required. If not, the user code will
-					@ retry the faulted instruction
-ENDPROC(vfp_support_entry)
+	VFPFMXR	FPSCR, r1		@ restore status
+	ret	lr
+ENDPROC(vfp_load_state)
 
 ENTRY(vfp_save_state)
 	@ Save the current VFP state
@@ -236,10 +66,6 @@ ENTRY(vfp_save_state)
 	ret	lr
 ENDPROC(vfp_save_state)
 
-	.align
-vfp_current_hw_state_address:
-	.word	vfp_current_hw_state
-
 	.macro	tbl_branch, base, tmp, shift
 #ifdef CONFIG_THUMB2_KERNEL
 	adr	\tmp, 1f
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index dd31d13ca1d8fc8a..6db7d20c467ff843 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -30,11 +30,6 @@
 #include "vfpinstr.h"
 #include "vfp.h"
 
-/*
- * Our undef handlers (in entry.S)
- */
-asmlinkage void vfp_support_entry(u32, void *, u32, u32);
-
 static bool have_vfp __ro_after_init;
 
 /*
@@ -325,7 +320,7 @@ static u32 vfp_emulate_instruction(u32 inst, u32 fpscr, struct pt_regs *regs)
 /*
  * Package up a bounce condition.
  */
-void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
+static void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 {
 	u32 fpscr, orig_fpscr, fpsid, exceptions;
 
@@ -374,7 +369,7 @@ void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 		 * on VFP subarch 1.
 		 */
 		 vfp_raise_exceptions(VFP_EXCEPTION_ERROR, trigger, fpscr, regs);
-		goto exit;
+		return;
 	}
 
 	/*
@@ -405,7 +400,7 @@ void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 	 * the FPEXC.FP2V bit is valid only if FPEXC.EX is 1.
 	 */
 	if ((fpexc & (FPEXC_EX | FPEXC_FP2V)) != (FPEXC_EX | FPEXC_FP2V))
-		goto exit;
+		return;
 
 	/*
 	 * The barrier() here prevents fpinst2 being read
@@ -418,8 +413,6 @@ void VFP_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
 	exceptions = vfp_emulate_instruction(trigger, orig_fpscr, regs);
 	if (exceptions)
 		vfp_raise_exceptions(exceptions, trigger, orig_fpscr, regs);
- exit:
-	local_bh_enable();
 }
 
 static void vfp_enable(void *unused)
@@ -673,22 +666,112 @@ static int vfp_kmode_exception(struct pt_regs *regs, unsigned int instr)
 }
 
 /*
- * Entered with:
+ * vfp_support_entry - Handle VFP exception from user mode
  *
- *  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
- *  r1  = thread_info pointer
- *  r2  = PC value to resume execution after successful emulation
- *  r3  = normal "successful" return address
- *  lr  = unrecognised instruction return address
+ * @regs:	pt_regs structure holding the register state at exception entry
+ * @trigger:	The opcode of the instruction that triggered the exception
+ *
+ * Returns 0 if the exception was handled, or an error code otherwise.
  */
-asmlinkage void vfp_entry(u32 trigger, struct thread_info *ti, u32 resume_pc,
-			  u32 resume_return_address)
+asmlinkage int vfp_support_entry(struct pt_regs *regs, u32 trigger)
 {
+	struct thread_info *ti = current_thread_info();
+	u32 fpexc;
+
 	if (unlikely(!have_vfp))
-		return;
+		return -ENODEV;
 
 	local_bh_disable();
-	vfp_support_entry(trigger, ti, resume_pc, resume_return_address);
+	fpexc = fmrx(FPEXC);
+
+	/*
+	 * If the VFP unit was not enabled yet, we have to check whether the
+	 * VFP state in the CPU's registers is the most recent VFP state
+	 * associated with the process. On UP systems, we don't save the VFP
+	 * state eagerly on a context switch, so we may need to save the
+	 * VFP state to memory first, as it may belong to another process.
+	 */
+	if (!(fpexc & FPEXC_EN)) {
+		/*
+		 * Enable the VFP unit but mask the FP exception flag for the
+		 * time being, so we can access all the registers.
+		 */
+		fpexc |= FPEXC_EN;
+		fmxr(FPEXC, fpexc & ~FPEXC_EX);
+
+		/*
+		 * Check whether or not the VFP state in the CPU's registers is
+		 * the most recent VFP state associated with this task. On SMP,
+		 * migration may result in multiple CPUs holding VFP states
+		 * that belong to the same task, but only the most recent one
+		 * is valid.
+		 */
+		if (!vfp_state_in_hw(ti->cpu, ti)) {
+			if (!IS_ENABLED(CONFIG_SMP) &&
+			    vfp_current_hw_state[ti->cpu] != NULL) {
+				/*
+				 * This CPU is currently holding the most
+				 * recent VFP state associated with another
+				 * task, and we must save that to memory first.
+				 */
+				vfp_save_state(vfp_current_hw_state[ti->cpu],
+					       fpexc);
+			}
+
+			/*
+			 * We can now proceed with loading the task's VFP state
+			 * from memory into the CPU registers.
+			 */
+			fpexc = vfp_load_state(&ti->vfpstate);
+			vfp_current_hw_state[ti->cpu] = &ti->vfpstate;
+#ifdef CONFIG_SMP
+			/*
+			 * Record that this CPU is now the one holding the most
+			 * recent VFP state of the task.
+			 */
+			ti->vfpstate.hard.cpu = ti->cpu;
+#endif
+		}
+
+		if (fpexc & FPEXC_EX)
+			/*
+			 * Might as well handle the pending exception before
+			 * retrying branch out before setting an FPEXC that
+			 * stops us reading stuff.
+			 */
+			goto bounce;
+
+		/*
+		 * No FP exception is pending: just enable the VFP and
+		 * replay the instruction that trapped.
+		 */
+		fmxr(FPEXC, fpexc);
+		regs->ARM_pc -= 4;
+	} else {
+		/* Check for synchronous or asynchronous exceptions */
+		if (!(fpexc & (FPEXC_EX | FPEXC_DEX))) {
+			u32 fpscr = fmrx(FPSCR);
+
+			/*
+			 * On some implementations of the VFP subarch 1,
+			 * setting FPSCR.IXE causes all the CDP instructions to
+			 * be bounced synchronously without setting the
+			 * FPEXC.EX bit
+			 */
+			if (!(fpscr & FPSCR_IXE)) {
+				if (!(fpscr & FPSCR_LENGTH_MASK)) {
+					pr_debug("not VFP\n");
+					local_bh_enable();
+					return -ENOEXEC;
+				}
+				fpexc |= FPEXC_DEX;
+			}
+		}
+bounce:		VFP_bounce(trigger, fpexc, regs);
+	}
+
+	local_bh_enable();
+	return 0;
 }
 
 static struct undef_hook vfp_kmode_exception_hook[] = {{
-- 
2.39.2



* [PATCH v4 08/12] ARM: kernel: Get rid of thread_info::used_cp[] array
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (6 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 07/12] ARM: vfp: Reimplement VFP exception entry in C code Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-21 14:58   ` Linus Walleij
  2023-03-20 13:18 ` [PATCH v4 09/12] ARM: vfp: Use undef hook for handling VFP exceptions Ard Biesheuvel
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

We keep track of which coprocessor triggered a fault in the used_cp[]
array in thread_info, but this data is never used anywhere. So let's
remove it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/include/asm/thread_info.h | 1 -
 arch/arm/kernel/asm-offsets.c      | 1 -
 arch/arm/kernel/entry-armv.S       | 6 ------
 arch/arm/kernel/process.c          | 1 -
 arch/arm/kernel/ptrace.c           | 2 --
 5 files changed, 11 deletions(-)

diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 7f092cb55a417154..85c5f1e02ebf83ca 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -66,7 +66,6 @@ struct thread_info {
 	__u32			cpu_domain;	/* cpu domain */
 	struct cpu_context_save	cpu_context;	/* cpu context */
 	__u32			abi_syscall;	/* ABI type and syscall nr */
-	__u8			used_cp[16];	/* thread used copro */
 	unsigned long		tp_value[2];	/* TLS registers */
 	union fp_state		fpstate __attribute__((aligned(8)));
 	union vfp_state		vfpstate;
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 38121c59cbc26cdd..f9c7111c1d65ffda 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -47,7 +47,6 @@ int main(void)
   DEFINE(TI_CPU_DOMAIN,		offsetof(struct thread_info, cpu_domain));
   DEFINE(TI_CPU_SAVE,		offsetof(struct thread_info, cpu_context));
   DEFINE(TI_ABI_SYSCALL,	offsetof(struct thread_info, abi_syscall));
-  DEFINE(TI_USED_CP,		offsetof(struct thread_info, used_cp));
   DEFINE(TI_TP_VALUE,		offsetof(struct thread_info, tp_value));
   DEFINE(TI_FPSTATE,		offsetof(struct thread_info, fpstate));
 #ifdef CONFIG_VFP
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index c5d2f07994fb0d87..cae8e31ce2cbde41 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -591,9 +591,6 @@ call_fpe:
 	and	r8, r0, r5
 	cmp	r8, r7				@ NEON instruction?
 	bne	2b
-	mov	r7, #1
-	strb	r7, [r10, #TI_USED_CP + 10]	@ mark CP#10 as used
-	strb	r7, [r10, #TI_USED_CP + 11]	@ mark CP#11 as used
 	b	do_vfp				@ let VFP handler handle this
 1:
 #endif
@@ -601,9 +598,6 @@ call_fpe:
 	tstne	r0, #0x04000000			@ bit 26 set on both ARM and Thumb-2
 	reteq	lr
 	and	r8, r0, #0x00000f00		@ mask out CP number
-	mov	r7, #1
-	add	r6, r10, r8, lsr #8		@ add used_cp[] array offset first
-	strb	r7, [r6, #TI_USED_CP]		@ set appropriate used_cp[]
 #ifdef CONFIG_IWMMXT
 	@ Test if we need to give access to iWMMXt coprocessors
 	tst	r8, #0xe << 8			@ CP 0 or 1?
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index f811733a8fc574b4..343451cab8c4f5df 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -223,7 +223,6 @@ void flush_thread(void)
 
 	flush_ptrace_hw_breakpoint(tsk);
 
-	memset(thread->used_cp, 0, sizeof(thread->used_cp));
 	memset(&tsk->thread.debug, 0, sizeof(struct debug_info));
 	memset(&thread->fpstate, 0, sizeof(union fp_state));
 
diff --git a/arch/arm/kernel/ptrace.c b/arch/arm/kernel/ptrace.c
index 2d8e2516906b6b4a..2b945b9bd36624a9 100644
--- a/arch/arm/kernel/ptrace.c
+++ b/arch/arm/kernel/ptrace.c
@@ -584,8 +584,6 @@ static int fpa_set(struct task_struct *target,
 {
 	struct thread_info *thread = task_thread_info(target);
 
-	thread->used_cp[1] = thread->used_cp[2] = 1;
-
 	return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
 		&thread->fpstate,
 		0, sizeof(struct user_fp));
-- 
2.39.2



* [PATCH v4 09/12] ARM: vfp: Use undef hook for handling VFP exceptions
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (7 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 08/12] ARM: kernel: Get rid of thread_info::used_cp[] array Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-21 14:59   ` Linus Walleij
  2023-03-20 13:18 ` [PATCH v4 10/12] ARM: entry: Disregard Thumb undef exception in coproc dispatch Ard Biesheuvel
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

Now that the VFP support code has been reimplemented as a C function
that takes a struct pt_regs pointer and an opcode, we can use the
existing undef_hook framework to deal with undef exceptions triggered by
VFP instructions instead of having special handling in assembler.
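
The resulting registration follows the normal undef_hook pattern; for
instance, the VFP coprocessor hook added below boils down to the
following (a sketch mirroring the hunk that follows):

  static struct undef_hook vfp_support_hook = {
          .instr_mask     = 0x0c000e00,   /* coprocessor insns targeting ... */
          .instr_val      = 0x0c000a00,   /* ... CP10/CP11, i.e. VFP */
          .fn             = vfp_support_entry,
  };

  /* vfp_init() calls register_undef_hook(&vfp_support_hook) once VFP is found */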

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/kernel/entry-armv.S | 53 ----------------
 arch/arm/vfp/Makefile        |  2 +-
 arch/arm/vfp/entry.S         | 31 ----------
 arch/arm/vfp/vfpmodule.c     | 65 ++++++++++----------
 4 files changed, 32 insertions(+), 119 deletions(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index cae8e31ce2cbde41..b4586a3447822774 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -557,13 +557,6 @@ ENDPROC(__und_usr)
  * co-processor instructions.  However, we have to watch out
  * for the ARM6/ARM7 SWI bug.
  *
- * NEON is a special case that has to be handled here. Not all
- * NEON instructions are co-processor instructions, so we have
- * to make a special case of checking for them. Plus, there's
- * five groups of them, so we have a table of mask/opcode pairs
- * to check against, and if any match then we branch off into the
- * NEON handler code.
- *
  * Emulators may wish to make use of the following registers:
  *  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
  *  r2  = PC value to resume execution after successful emulation
@@ -575,25 +568,8 @@ ENDPROC(__und_usr)
 	@
 	@ Fall-through from Thumb-2 __und_usr
 	@
-#ifdef CONFIG_NEON
-	get_thread_info r10			@ get current thread
-	adr	r6, .LCneon_thumb_opcodes
-	b	2f
-#endif
 call_fpe:
 	get_thread_info r10			@ get current thread
-#ifdef CONFIG_NEON
-	adr	r6, .LCneon_arm_opcodes
-2:	ldr	r5, [r6], #4			@ mask value
-	ldr	r7, [r6], #4			@ opcode bits matching in mask
-	cmp	r5, #0				@ end mask?
-	beq	1f
-	and	r8, r0, r5
-	cmp	r8, r7				@ NEON instruction?
-	bne	2b
-	b	do_vfp				@ let VFP handler handle this
-1:
-#endif
 	tst	r0, #0x08000000			@ only CDP/CPRT/LDC/STC have bit 27
 	tstne	r0, #0x04000000			@ bit 26 set on both ARM and Thumb-2
 	reteq	lr
@@ -621,42 +597,13 @@ call_fpe:
 	ret.w	lr				@ CP#7
 	ret.w	lr				@ CP#8
 	ret.w	lr				@ CP#9
-#ifdef CONFIG_VFP
-	W(b)	do_vfp				@ CP#10 (VFP)
-	W(b)	do_vfp				@ CP#11 (VFP)
-#else
 	ret.w	lr				@ CP#10 (VFP)
 	ret.w	lr				@ CP#11 (VFP)
-#endif
 	ret.w	lr				@ CP#12
 	ret.w	lr				@ CP#13
 	ret.w	lr				@ CP#14 (Debug)
 	ret.w	lr				@ CP#15 (Control)
 
-#ifdef CONFIG_NEON
-	.align	6
-
-.LCneon_arm_opcodes:
-	.word	0xfe000000			@ mask
-	.word	0xf2000000			@ opcode
-
-	.word	0xff100000			@ mask
-	.word	0xf4000000			@ opcode
-
-	.word	0x00000000			@ mask
-	.word	0x00000000			@ opcode
-
-.LCneon_thumb_opcodes:
-	.word	0xef000000			@ mask
-	.word	0xef000000			@ opcode
-
-	.word	0xff100000			@ mask
-	.word	0xf9000000			@ opcode
-
-	.word	0x00000000			@ mask
-	.word	0x00000000			@ opcode
-#endif
-
 do_fpe:
 	add	r10, r10, #TI_FPSTATE		@ r10 = workspace
 	ldr_va	pc, fp_enter, tmp=r4		@ Call FP module USR entry point
diff --git a/arch/arm/vfp/Makefile b/arch/arm/vfp/Makefile
index 749901a72d6dc6c4..dfd64bc2b2fbdd06 100644
--- a/arch/arm/vfp/Makefile
+++ b/arch/arm/vfp/Makefile
@@ -8,4 +8,4 @@
 # ccflags-y := -DDEBUG
 # asflags-y := -DDEBUG
 
-obj-y		+= vfpmodule.o entry.o vfphw.o vfpsingle.o vfpdouble.o
+obj-y		+= vfpmodule.o vfphw.o vfpsingle.o vfpdouble.o
diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
deleted file mode 100644
index 547c94c62cd3a66a..0000000000000000
--- a/arch/arm/vfp/entry.S
+++ /dev/null
@@ -1,31 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- *  linux/arch/arm/vfp/entry.S
- *
- *  Copyright (C) 2004 ARM Limited.
- *  Written by Deep Blue Solutions Limited.
- */
-#include <linux/init.h>
-#include <linux/linkage.h>
-#include <asm/thread_info.h>
-#include <asm/vfpmacros.h>
-#include <asm/assembler.h>
-#include <asm/asm-offsets.h>
-
-@ VFP entry point.
-@
-@  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
-@  r2  = PC value to resume execution after successful emulation
-@  r9  = normal "successful" return address
-@  r10 = this threads thread_info structure
-@  lr  = unrecognised instruction return address
-@  IRQs enabled.
-@
-ENTRY(do_vfp)
-	mov	r1, r0				@ pass trigger opcode via R1
-	mov	r0, sp				@ pass struct pt_regs via R0
-	bl	vfp_support_entry		@ dispatch the VFP exception
-	cmp	r0, #0				@ handled successfully?
-	reteq	r9				@ then use R9 as return address
-	ret	lr				@ pass to undef handler
-ENDPROC(do_vfp)
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 6db7d20c467ff843..5f65cf8375134af3 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -641,8 +641,6 @@ static int vfp_starting_cpu(unsigned int unused)
 	return 0;
 }
 
-#ifdef CONFIG_KERNEL_MODE_NEON
-
 static int vfp_kmode_exception(struct pt_regs *regs, unsigned int instr)
 {
 	/*
@@ -666,14 +664,14 @@ static int vfp_kmode_exception(struct pt_regs *regs, unsigned int instr)
 }
 
 /*
- * vfp_support_entry - Handle VFP exception from user mode
+ * vfp_support_entry - Handle VFP exception
  *
  * @regs:	pt_regs structure holding the register state at exception entry
  * @trigger:	The opcode of the instruction that triggered the exception
  *
  * Returns 0 if the exception was handled, or an error code otherwise.
  */
-asmlinkage int vfp_support_entry(struct pt_regs *regs, u32 trigger)
+static int vfp_support_entry(struct pt_regs *regs, u32 trigger)
 {
 	struct thread_info *ti = current_thread_info();
 	u32 fpexc;
@@ -681,6 +679,9 @@ asmlinkage int vfp_support_entry(struct pt_regs *regs, u32 trigger)
 	if (unlikely(!have_vfp))
 		return -ENODEV;
 
+	if (!user_mode(regs))
+		return vfp_kmode_exception(regs, trigger);
+
 	local_bh_disable();
 	fpexc = fmrx(FPEXC);
 
@@ -746,7 +747,6 @@ asmlinkage int vfp_support_entry(struct pt_regs *regs, u32 trigger)
 		 * replay the instruction that trapped.
 		 */
 		fmxr(FPEXC, fpexc);
-		regs->ARM_pc -= 4;
 	} else {
 		/* Check for synchronous or asynchronous exceptions */
 		if (!(fpexc & (FPEXC_EX | FPEXC_DEX))) {
@@ -767,54 +767,47 @@ asmlinkage int vfp_support_entry(struct pt_regs *regs, u32 trigger)
 				fpexc |= FPEXC_DEX;
 			}
 		}
-bounce:		VFP_bounce(trigger, fpexc, regs);
+bounce:		regs->ARM_pc += 4;
+		VFP_bounce(trigger, fpexc, regs);
 	}
 
 	local_bh_enable();
 	return 0;
 }
 
-static struct undef_hook vfp_kmode_exception_hook[] = {{
+static struct undef_hook neon_support_hook[] = {{
 	.instr_mask	= 0xfe000000,
 	.instr_val	= 0xf2000000,
-	.cpsr_mask	= MODE_MASK | PSR_T_BIT,
-	.cpsr_val	= SVC_MODE,
-	.fn		= vfp_kmode_exception,
+	.cpsr_mask	= PSR_T_BIT,
+	.cpsr_val	= 0,
+	.fn		= vfp_support_entry,
 }, {
 	.instr_mask	= 0xff100000,
 	.instr_val	= 0xf4000000,
-	.cpsr_mask	= MODE_MASK | PSR_T_BIT,
-	.cpsr_val	= SVC_MODE,
-	.fn		= vfp_kmode_exception,
+	.cpsr_mask	= PSR_T_BIT,
+	.cpsr_val	= 0,
+	.fn		= vfp_support_entry,
 }, {
 	.instr_mask	= 0xef000000,
 	.instr_val	= 0xef000000,
-	.cpsr_mask	= MODE_MASK | PSR_T_BIT,
-	.cpsr_val	= SVC_MODE | PSR_T_BIT,
-	.fn		= vfp_kmode_exception,
+	.cpsr_mask	= PSR_T_BIT,
+	.cpsr_val	= PSR_T_BIT,
+	.fn		= vfp_support_entry,
 }, {
 	.instr_mask	= 0xff100000,
 	.instr_val	= 0xf9000000,
-	.cpsr_mask	= MODE_MASK | PSR_T_BIT,
-	.cpsr_val	= SVC_MODE | PSR_T_BIT,
-	.fn		= vfp_kmode_exception,
-}, {
-	.instr_mask	= 0x0c000e00,
-	.instr_val	= 0x0c000a00,
-	.cpsr_mask	= MODE_MASK,
-	.cpsr_val	= SVC_MODE,
-	.fn		= vfp_kmode_exception,
+	.cpsr_mask	= PSR_T_BIT,
+	.cpsr_val	= PSR_T_BIT,
+	.fn		= vfp_support_entry,
 }};
 
-static int __init vfp_kmode_exception_hook_init(void)
-{
-	int i;
+static struct undef_hook vfp_support_hook = {
+	.instr_mask	= 0x0c000e00,
+	.instr_val	= 0x0c000a00,
+	.fn		= vfp_support_entry,
+};
 
-	for (i = 0; i < ARRAY_SIZE(vfp_kmode_exception_hook); i++)
-		register_undef_hook(&vfp_kmode_exception_hook[i]);
-	return 0;
-}
-subsys_initcall(vfp_kmode_exception_hook_init);
+#ifdef CONFIG_KERNEL_MODE_NEON
 
 /*
  * Kernel-side NEON support functions
@@ -919,8 +912,11 @@ static int __init vfp_init(void)
 		 * for NEON if the hardware has the MVFR registers.
 		 */
 		if (IS_ENABLED(CONFIG_NEON) &&
-		   (fmrx(MVFR1) & 0x000fff00) == 0x00011100)
+		   (fmrx(MVFR1) & 0x000fff00) == 0x00011100) {
 			elf_hwcap |= HWCAP_NEON;
+			for (int i = 0; i < ARRAY_SIZE(neon_support_hook); i++)
+				register_undef_hook(&neon_support_hook[i]);
+		}
 
 		if (IS_ENABLED(CONFIG_VFPv3)) {
 			u32 mvfr0 = fmrx(MVFR0);
@@ -989,6 +985,7 @@ static int __init vfp_init(void)
 
 	have_vfp = true;
 
+	register_undef_hook(&vfp_support_hook);
 	thread_register_notifier(&vfp_notifier_block);
 	vfp_pm_init();
 
-- 
2.39.2
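
For readers who are not familiar with the undef hook machinery this series switches to: a registered hook is matched against the trapping opcode and the saved CPSR using the mask/value pairs seen in neon_support_hook and vfp_support_hook above. The sketch below is only a simplified model of that dispatch (undef_hook_matches() is an invented name, not the actual arch/arm/kernel/traps.c code):

#include <linux/list.h>
#include <linux/types.h>

struct pt_regs;

/* Roughly the shape used by the hooks registered in vfpmodule.c */
struct undef_hook {
	struct list_head node;
	u32 instr_mask;
	u32 instr_val;
	u32 cpsr_mask;
	u32 cpsr_val;
	int (*fn)(struct pt_regs *regs, unsigned int instr);
};

/* A hook applies when both the opcode and the CPSR match its mask/value
 * pairs. A ->fn() that returns 0, as vfp_support_entry() does on success,
 * tells the undef machinery the exception was handled; a nonzero return
 * falls through to the normal undefined-instruction signal path. */
static bool undef_hook_matches(const struct undef_hook *hook,
			       u32 instr, u32 cpsr)
{
	return (instr & hook->instr_mask) == hook->instr_val &&
	       (cpsr & hook->cpsr_mask) == hook->cpsr_val;
}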



* [PATCH v4 10/12] ARM: entry: Disregard Thumb undef exception in coproc dispatch
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (8 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 09/12] ARM: vfp: Use undef hook for handling VFP exceptions Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-21 15:05   ` Linus Walleij
  2023-03-20 13:18 ` [PATCH v4 11/12] ARM: iwmmxt: Use undef hook to enable coprocessor for task Ard Biesheuvel
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

Now that the only remaining coprocessor instructions being handled via
the dispatch in entry-armv.S are ones that only exist in an ARM (A32)
encoding, we can simplify the handling of Thumb undef exceptions, and
send them straight to the undefined instruction handlers in C code.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/kernel/entry-armv.S | 64 ++++++--------------
 1 file changed, 17 insertions(+), 47 deletions(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index b4586a3447822774..0367c9581c1f05a6 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -452,12 +452,6 @@ __und_usr:
 	@ r2 = regs->ARM_pc, which is either 2 or 4 bytes ahead of the
 	@      faulting instruction depending on Thumb mode.
 	@ r3 = regs->ARM_cpsr
-	@
-	@ The emulation code returns using r9 if it has emulated the
-	@ instruction, or the more conventional lr if we are to treat
-	@ this as a real undefined instruction
-	@
-	badr	r9, ret_from_exception
 
 	@ IRQs must be enabled before attempting to read the instruction from
 	@ user space since that could cause a page/translation fault if the
@@ -465,21 +459,8 @@ __und_usr:
 	enable_irq
 
 	tst	r3, #PSR_T_BIT			@ Thumb mode?
-	bne	__und_usr_thumb
-	sub	r4, r2, #4			@ ARM instr at LR - 4
-1:	ldrt	r0, [r4]
- ARM_BE8(rev	r0, r0)				@ little endian instruction
-
-	uaccess_disable ip
-
-	@ r0 = 32-bit ARM instruction which caused the exception
-	@ r2 = PC value for the following instruction (:= regs->ARM_pc)
-	@ r4 = PC value for the faulting instruction
-	@ lr = 32-bit undefined instruction function
-	badr	lr, __und_usr_fault_32
-	b	call_fpe
+	beq	call_fpe
 
-__und_usr_thumb:
 	@ Thumb instruction
 	sub	r4, r2, #2			@ First half of thumb instr at LR - 2
 #if CONFIG_ARM_THUMB && __LINUX_ARM_ARCH__ >= 6 && CONFIG_CPU_V7
@@ -501,22 +482,14 @@ __und_usr_thumb:
  */
 	.arch	armv6t2
 #endif
-2:	ldrht	r5, [r4]
+USERL(	4f,	ldrht r5, [r4])
 ARM_BE8(rev16	r5, r5)				@ little endian instruction
 	cmp	r5, #0xe800			@ 32bit instruction if xx != 0
-	blo	__und_usr_fault_16_pan		@ 16bit undefined instruction
-3:	ldrht	r0, [r2]
-ARM_BE8(rev16	r0, r0)				@ little endian instruction
 	uaccess_disable ip
+	blo	__und_usr_fault_16		@ 16bit undefined instruction
 	add	r2, r2, #2			@ r2 is PC + 2, make it PC + 4
 	str	r2, [sp, #S_PC]			@ it's a 2x16bit instr, update
-	orr	r0, r0, r5, lsl #16
-	badr	lr, __und_usr_fault_32
-	@ r0 = the two 16-bit Thumb instructions which caused the exception
-	@ r2 = PC value for the following Thumb instruction (:= regs->ARM_pc)
-	@ r4 = PC value for the first 16-bit Thumb instruction
-	@ lr = 32bit undefined instruction function
-
+	b	__und_usr_fault_32
 #if __LINUX_ARM_ARCH__ < 7
 /* If the target arch was overridden, change it back: */
 #ifdef CONFIG_CPU_32v6K
@@ -537,14 +510,7 @@ ENDPROC(__und_usr)
 	.pushsection .text.fixup, "ax"
 	.align	2
 4:	str     r4, [sp, #S_PC]			@ retry current instruction
-	ret	r9
-	.popsection
-	.pushsection __ex_table,"a"
-	.long	1b, 4b
-#if CONFIG_ARM_THUMB && __LINUX_ARM_ARCH__ >= 6 && CONFIG_CPU_V7
-	.long	2b, 4b
-	.long	3b, 4b
-#endif
+	b	ret_from_exception
 	.popsection
 
 /*
@@ -558,17 +524,23 @@ ENDPROC(__und_usr)
  * for the ARM6/ARM7 SWI bug.
  *
  * Emulators may wish to make use of the following registers:
- *  r0  = instruction opcode (32-bit ARM or two 16-bit Thumb)
+ *  r0  = instruction opcode (32-bit ARM)
  *  r2  = PC value to resume execution after successful emulation
  *  r9  = normal "successful" return address
  *  r10 = this threads thread_info structure
  *  lr  = unrecognised instruction return address
  * IRQs enabled, FIQs enabled.
  */
-	@
-	@ Fall-through from Thumb-2 __und_usr
-	@
 call_fpe:
+	badr	r9, ret_from_exception
+	badr	lr, __und_usr_fault_32
+
+	sub	r4, r2, #4			@ ARM instr at LR - 4
+USERL(	4b,	ldrt r0, [r4])
+ARM_BE8(rev	r0, r0)				@ little endian instruction
+
+	uaccess_disable ip
+
 	get_thread_info r10			@ get current thread
 	tst	r0, #0x08000000			@ only CDP/CPRT/LDC/STC have bit 27
 	tstne	r0, #0x04000000			@ bit 26 set on both ARM and Thumb-2
@@ -630,13 +602,11 @@ ENDPROC(no_fp)
 __und_usr_fault_32:
 	mov	r1, #4
 	b	1f
-__und_usr_fault_16_pan:
-	uaccess_disable ip
 __und_usr_fault_16:
 	mov	r1, #2
 1:	mov	r0, sp
-	badr	lr, ret_from_exception
-	b	__und_fault
+	bl	__und_fault
+	b	ret_from_exception
 ENDPROC(__und_usr_fault_32)
 ENDPROC(__und_usr_fault_16)
 
-- 
2.39.2
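
For context on the Thumb handling that remains: the comparison against #0xe800 above is the architectural test for whether the first halfword introduces a 32-bit Thumb-2 encoding, and the removed 'orr r0, r0, r5, lsl #16' is how the two halfwords used to be merged before dispatch; with this patch that merge is left to the C-side handlers. A rough standalone C equivalent of those two steps (illustration only, not code from the series):

#include <stdbool.h>
#include <stdint.h>

/* First halfwords of 0xe800 and above (0b11101/0b11110/0b11111 prefixes)
 * introduce a 32-bit Thumb-2 encoding; anything lower is a 16-bit
 * instruction. */
static bool thumb_insn_is_32bit(uint16_t first)
{
	return first >= 0xe800;
}

/* The first halfword lands in the top 16 bits, which is why the NEON hook
 * masks in the previous patch (e.g. instr_val 0xef000000) test the upper
 * half of the combined opcode. */
static uint32_t thumb_combine(uint16_t first, uint16_t second)
{
	return ((uint32_t)first << 16) | second;
}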



* [PATCH v4 11/12] ARM: iwmmxt: Use undef hook to enable coprocessor for task
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (9 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 10/12] ARM: entry: Disregard Thumb undef exception in coproc dispatch Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-21 15:06   ` Linus Walleij
  2023-03-20 13:18 ` [PATCH v4 12/12] ARM: entry: Make asm coproc dispatch code NWFPE only Ard Biesheuvel
  2023-03-23  2:44 ` [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Guenter Roeck
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

Define an undef hook to deal with undef exceptions triggered by iwmmxt
instructions that were issued with the coprocessor disabled. This
removes the dependency on the coprocessor dispatch code in entry-armv.S,
which will be made NWFPE-only in a subsequent patch.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/include/asm/thread_info.h | 16 ++++++++++++++++
 arch/arm/kernel/iwmmxt.S           |  9 +++++++++
 arch/arm/kernel/pj4-cp0.c          |  1 +
 arch/arm/kernel/xscale-cp0.c       |  1 +
 4 files changed, 27 insertions(+)

diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 85c5f1e02ebf83ca..943ffcf069d29cf4 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -40,6 +40,7 @@ struct task_struct;
 DECLARE_PER_CPU(struct task_struct *, __entry_task);
 
 #include <asm/types.h>
+#include <asm/traps.h>
 
 struct cpu_context_save {
 	__u32	r4;
@@ -104,6 +105,21 @@ extern void iwmmxt_task_restore(struct thread_info *, void *);
 extern void iwmmxt_task_release(struct thread_info *);
 extern void iwmmxt_task_switch(struct thread_info *);
 
+extern int iwmmxt_undef_handler(struct pt_regs *, u32);
+
+static inline void register_iwmmxt_undef_handler(void)
+{
+	static struct undef_hook iwmmxt_undef_hook = {
+		.instr_mask	= 0x0c000e00,
+		.instr_val	= 0x0c000000,
+		.cpsr_mask	= MODE_MASK | PSR_T_BIT,
+		.cpsr_val	= USR_MODE,
+		.fn		= iwmmxt_undef_handler,
+	};
+
+	register_undef_hook(&iwmmxt_undef_hook);
+}
+
 extern void vfp_sync_hwstate(struct thread_info *);
 extern void vfp_flush_hwstate(struct thread_info *);
 
diff --git a/arch/arm/kernel/iwmmxt.S b/arch/arm/kernel/iwmmxt.S
index d2b4ac06e4ed8c67..f8ea2c0ee19bd3d7 100644
--- a/arch/arm/kernel/iwmmxt.S
+++ b/arch/arm/kernel/iwmmxt.S
@@ -58,6 +58,15 @@
 	.text
 	.arm
 
+ENTRY(iwmmxt_undef_handler)
+	push		{r9, r10, lr}
+	get_thread_info	r10
+	mov		r9, pc
+	b		iwmmxt_task_enable
+	mov		r0, #0
+	pop		{r9, r10, pc}
+ENDPROC(iwmmxt_undef_handler)
+
 /*
  * Lazy switching of Concan coprocessor context
  *
diff --git a/arch/arm/kernel/pj4-cp0.c b/arch/arm/kernel/pj4-cp0.c
index 1d1fb22f44f37e3b..4bca8098c4ff5583 100644
--- a/arch/arm/kernel/pj4-cp0.c
+++ b/arch/arm/kernel/pj4-cp0.c
@@ -126,6 +126,7 @@ static int __init pj4_cp0_init(void)
 	pr_info("PJ4 iWMMXt v%d coprocessor enabled.\n", vers);
 	elf_hwcap |= HWCAP_IWMMXT;
 	thread_register_notifier(&iwmmxt_notifier_block);
+	register_iwmmxt_undef_handler();
 #endif
 
 	return 0;
diff --git a/arch/arm/kernel/xscale-cp0.c b/arch/arm/kernel/xscale-cp0.c
index ed4f6e77616da1c2..00d00d3aae972d1e 100644
--- a/arch/arm/kernel/xscale-cp0.c
+++ b/arch/arm/kernel/xscale-cp0.c
@@ -166,6 +166,7 @@ static int __init xscale_cp0_init(void)
 		pr_info("XScale iWMMXt coprocessor detected.\n");
 		elf_hwcap |= HWCAP_IWMMXT;
 		thread_register_notifier(&iwmmxt_notifier_block);
+		register_iwmmxt_undef_handler();
 #endif
 	} else {
 		pr_info("XScale DSP coprocessor detected.\n");
-- 
2.39.2



* [PATCH v4 12/12] ARM: entry: Make asm coproc dispatch code NWFPE only
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (10 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 11/12] ARM: iwmmxt: Use undef hook to enable coprocessor for task Ard Biesheuvel
@ 2023-03-20 13:18 ` Ard Biesheuvel
  2023-03-21 15:11   ` Linus Walleij
  2023-03-23  2:44 ` [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Guenter Roeck
  12 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-20 13:18 UTC (permalink / raw)
  To: linux-arm-kernel, linux
  Cc: Ard Biesheuvel, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Linus Walleij, Arnd Bergmann

Now that we can dispatch all VFP and iWMMXT related undef exceptions
using undef hooks implemented in C code, we no longer need the asm entry
code that takes care of this unless we are using FPE. As this means it
is ARM only, we can also remove the Thumb2 specific decorations.

It also means the non-standard, asm-only calling convention where
returning via LR means failure and returning via R9 means success is now
only used on legacy platforms that lack any kind of function return
prediction, avoiding the associated performance impact.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/kernel/entry-armv.S | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 0367c9581c1f05a6..5552179faf7a469e 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -532,8 +532,9 @@ ENDPROC(__und_usr)
  * IRQs enabled, FIQs enabled.
  */
 call_fpe:
-	badr	r9, ret_from_exception
-	badr	lr, __und_usr_fault_32
+#ifdef CONFIG_FPE_NWFPE
+	adr	r9, ret_from_exception
+	adr	lr, __und_usr_fault_32
 
 	sub	r4, r2, #4			@ ARM instr at LR - 4
 USERL(	4b,	ldrt r0, [r4])
@@ -554,9 +555,7 @@ ARM_BE8(rev	r0, r0)				@ little endian instruction
 	teqeq	r5, #1 << TIF_USING_IWMMXT	@ check whether it is set
 	beq	iwmmxt_task_enable		@ branch if set
 #endif
- ARM(	add	pc, pc, r8, lsr #6	)
- THUMB(	lsr	r8, r8, #6		)
- THUMB(	add	pc, r8			)
+	add	pc, pc, r8, lsr #6
 	nop
 
 	ret.w	lr				@ CP#0
@@ -598,6 +597,7 @@ ENTRY(fp_enter)
 ENTRY(no_fp)
 	ret	lr
 ENDPROC(no_fp)
+#endif
 
 __und_usr_fault_32:
 	mov	r1, #4
-- 
2.39.2
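
One note on the jump table that stays behind for NWFPE: coprocessor instructions (CDP/CPRT/LDC/STC) carry the coprocessor number in bits [11:8] of the opcode, and 'add pc, pc, r8, lsr #6' turns that field into a 4-byte jump-table offset, assuming r8 holds the opcode masked with 0x00000f00 (the masking itself is outside the quoted hunks). A hedged C rendition of that arithmetic:

#include <stdint.h>

/* Coprocessor number field of a CDP/CPRT/LDC/STC encoding. */
static unsigned int coproc_num(uint32_t instr)
{
	return (instr >> 8) & 0xf;
}

/* What the asm computes: (instr & 0x00000f00) >> 6 equals
 * coproc_num(instr) * 4, i.e. one 4-byte table entry per coprocessor,
 * CP#0 through CP#15. */
static unsigned int coproc_jump_offset(uint32_t instr)
{
	return (instr & 0x00000f00) >> 6;
}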



* Re: [PATCH v4 04/12] ARM: entry: Fix iWMMXT TIF flag handling
  2023-03-20 13:18 ` [PATCH v4 04/12] ARM: entry: Fix iWMMXT TIF flag handling Ard Biesheuvel
@ 2023-03-21 14:32   ` Linus Walleij
  2023-03-21 19:19     ` Nicolas Pitre
  0 siblings, 1 reply; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 14:32 UTC (permalink / raw)
  To: Ard Biesheuvel, Nicolas Pitre, Nicolas Pitre
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann, stable

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> The conditional MOVS instruction that appears to have been added to test
> for the TIF_USING_IWMMXT thread_info flag only sets the N and Z
> condition flags and register R7, none of which are referenced in the
> subsequent code. This means that the instruction does nothing, which
> means that we might misidentify faulting FPE instructions as iWMMXT
> instructions on kernels that were built to support both.
>
> This seems to have been part of the original submission of the code, and
> so this has never worked as intended, and nobody ever noticed, and so we
> might decide to just leave this as-is. However, with the ongoing move
> towards multiplatform kernels, the issue becomes more likely to
> manifest, and so it is better to fix it.
>
> So check whether we are dealing with an undef exception regarding
> coprocessor index #0 or #1, and if so, load the thread_info flag and
> only dispatch it as a iWMMXT trap if the flag is set.
>
> Cc: <stable@vger.kernel.org> # v2.6.9+
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Looks right to me.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

The code dates back before git history, but it was introduced in
Linux v2.6.9 and it looks like Nicolas Pitre wrote it so let's just
ping him and see what he remembers about this!

Yours,
Linus Walleij
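
To make the check described in the quoted commit message a bit more concrete, it amounts to something like the C below. The actual hunk is not part of this excerpt, so fault_is_iwmmxt() is only a model with an invented name; test_ti_thread_flag() and TIF_USING_IWMMXT are the existing kernel identifiers.

#include <linux/thread_info.h>
#include <linux/types.h>

/* Only treat an undef on CP0/CP1 as an iWMMXT trap when the task has
 * actually been using iWMMXT; otherwise it may be an FPE instruction that
 * should reach the emulator instead. */
static bool fault_is_iwmmxt(struct thread_info *ti, u32 instr)
{
	unsigned int cp = (instr >> 8) & 0xf;

	return (cp == 0 || cp == 1) &&
	       test_ti_thread_flag(ti, TIF_USING_IWMMXT);
}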


* Re: [PATCH v4 05/12] ARM: vfp: Record VFP bounces as perf emulation faults
  2023-03-20 13:18 ` [PATCH v4 05/12] ARM: vfp: Record VFP bounces as perf emulation faults Ard Biesheuvel
@ 2023-03-21 14:33   ` Linus Walleij
  0 siblings, 0 replies; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 14:33 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> VFP 'bouncing' occurs when the VFP unit cannot complete the execution of
> a VFP instruction, either because it is not implemented at all, or
> because the values of the arguments are out of range for the hardware
> implementation, and the software needs to step in to complete the
> operation.
>
> To give some insight into how much certain programs rely on this bouncing,
> record the emulation of a VFP instruction in perf's emulation-faults
> counter.
>
> This can be used like so
>
>   perf stat -e emulation-faults ./testfloat -all2
>
> and the output will be something like
>
>  Performance counter stats for './testfloat -all2':
>
>            259,277      emulation-faults:u
>
>        6.846432176 seconds time elapsed
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Oh that's really useful.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij
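
For reference, the usual way such counts are fed to perf is the software event API, so the bounce path presumably does something along these lines (a sketch under that assumption; the hunk itself is not shown in this excerpt, and the helper name is invented):

#include <linux/perf_event.h>
#include <asm/ptrace.h>

/* Charge one emulated VFP instruction to the software "emulation-faults"
 * event that perf stat reads. */
static void vfp_perf_account_bounce(struct pt_regs *regs)
{
	perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs,
		      regs->ARM_pc);
}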


* Re: [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs
  2023-03-20 13:18 ` [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs Ard Biesheuvel
@ 2023-03-21 14:44   ` Linus Walleij
  2023-03-21 15:42     ` Ard Biesheuvel
  2023-03-21 20:00     ` Nicolas Pitre
  0 siblings, 2 replies; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 14:44 UTC (permalink / raw)
  To: Ard Biesheuvel, Nicolas Pitre, Nicolas Pitre
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> Feroceon CPUs have a non-standard implementation of VFP which reports
> synchronous VFP exceptions using the async VFP flag. This requires a
> workaround which is difficult to reconcile with other implementations,
> making it tricky to support both versions in a single image.
>
> Since this is a v5 CPU, it is not supported by armhf and so the
> likelihood that anybody is using this with recent distros/kernels and
> relying on the VFP at the same time is extremely low. So let's just disable
> VFP support on these cores, so we can remove the workaround.
>
> This will help future development to support v5 and v6 CPUs with a
> single kernel image.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

I agree, I have one of those systems as my NAS, currently running
an (unsupported) version of ArchLinuxARM:

$ cat /proc/cpuinfo
processor    : 0
model name    : Feroceon 88FR131 rev 1 (v5l)
BogoMIPS    : 83.33
Features    : swp half thumb fastmult edsp
CPU implementer    : 0x56
CPU architecture: 5TE
CPU variant    : 0x2
CPU part    : 0x131
CPU revision    : 1

Hm doesn't even have any VFP, I don't know what spins of this Marvell
silicon that even does?
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

I think this is again on the level of a tree falling in the forest and no-one
being there to hear it, but let's page Nico again because I am pretty
sure if anyone worked with this it was him.

Yours,
Linus Walleij


* Re: [PATCH v4 08/12] ARM: kernel: Get rid of thread_info::used_cp[] array
  2023-03-20 13:18 ` [PATCH v4 08/12] ARM: kernel: Get rid of thread_info::used_cp[] array Ard Biesheuvel
@ 2023-03-21 14:58   ` Linus Walleij
  0 siblings, 0 replies; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 14:58 UTC (permalink / raw)
  To: Ard Biesheuvel, Al Viro
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> We keep track of which coprocessor triggered a fault in the used_cp[]
> array in thread_info, but this data is never used anywhere. So let's
> remove it.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Introduced in kernel v2.6.2, deleted in kernel v5.9, apparently
only used for dump_fpu() which we don't use anymore?
Cc: Al Viro who deleted dump_fpu().

Fixes: bb1a773d5b6b ("kill unused dump_fpu() instances")

FWIW:
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH v4 09/12] ARM: vfp: Use undef hook for handling VFP exceptions
  2023-03-20 13:18 ` [PATCH v4 09/12] ARM: vfp: Use undef hook for handling VFP exceptions Ard Biesheuvel
@ 2023-03-21 14:59   ` Linus Walleij
  2023-03-21 15:41     ` Ard Biesheuvel
  0 siblings, 1 reply; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 14:59 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> Now that the VFP support code has been reimplemented as a C function
> that takes a struct pt_regs pointer and an opcode, we can use the
> existing undef_hook framework to deal with undef exceptions triggered by
> VFP instructions instead of having special handling in assembler.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Provided we can test that this does the right thing it looks really neat:
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH v4 10/12] ARM: entry: Disregard Thumb undef exception in coproc dispatch
  2023-03-20 13:18 ` [PATCH v4 10/12] ARM: entry: Disregard Thumb undef exception in coproc dispatch Ard Biesheuvel
@ 2023-03-21 15:05   ` Linus Walleij
  0 siblings, 0 replies; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 15:05 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> Now that the only remaining coprocessor instructions being handled via
> the dispatch in entry-armv.S are ones that only exist in an ARM (A32)
> encoding, we can simplify the handling of Thumb undef exceptions, and
> send them straight to the undefined instruction handlers in C code.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

I gave up trying to follow the code because the diff gets really convoluted.

But the mere relief of getting rid of things like this terse thing:

> -       .pushsection __ex_table,"a"
> -       .long   1b, 4b
> -#if CONFIG_ARM_THUMB && __LINUX_ARM_ARCH__ >= 6 && CONFIG_CPU_V7
> -       .long   2b, 4b
> -       .long   3b, 4b
> -#endif

Makes it worth it, so as long as it works I'm game.
Acked-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH v4 11/12] ARM: iwmmxt: Use undef hook to enable coprocessor for task
  2023-03-20 13:18 ` [PATCH v4 11/12] ARM: iwmmxt: Use undef hook to enable coprocessor for task Ard Biesheuvel
@ 2023-03-21 15:06   ` Linus Walleij
  0 siblings, 0 replies; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 15:06 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> Define an undef hook to deal with undef exceptions triggered by iwmmxt
> instructions that were issued with the coprocessor disabled. This
> removes the dependency on the coprocessor dispatch code in entry-armv.S,
> which will be made NWFPE-only in a subsequent patch.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Clearly easier to deal with this.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


* Re: [PATCH v4 12/12] ARM: entry: Make asm coproc dispatch code NWFPE only
  2023-03-20 13:18 ` [PATCH v4 12/12] ARM: entry: Make asm coproc dispatch code NWFPE only Ard Biesheuvel
@ 2023-03-21 15:11   ` Linus Walleij
  0 siblings, 0 replies; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 15:11 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann, Marc Zyngier, Scott Bambrough

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> Now that we can dispatch all VFP and iWMMXT related undef exceptions
> using undef hooks implemented in C code, we no longer need the asm entry
> code that takes care of this unless we are using FPE. As this means it
> is ARM only, we can also remove the Thumb2 specific decorations.
>
> It also means the non-standard, asm-only calling convention where
> returning via LR means failure and returning via R9 means success is now
> only used on legacy platforms that lack any kind of function return
> prediction, avoiding the associated performance impact.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Clearly makes things more compartmentalized.
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>

>  call_fpe:
> -       badr    r9, ret_from_exception
> -       badr    lr, __und_usr_fault_32
> +#ifdef CONFIG_FPE_NWFPE
> +       adr     r9, ret_from_exception
> +       adr     lr, __und_usr_fault_32

I actually have the NetWinder so if we want to refactor further I can test.
Marc Zyngier has this machine too.

Scott Bambrough wrote the NWFPE handling a long time ago I think, so
putting him on CC for fun.

Yours,
Linus Walleij


* Re: [PATCH v4 09/12] ARM: vfp: Use undef hook for handling VFP exceptions
  2023-03-21 14:59   ` Linus Walleij
@ 2023-03-21 15:41     ` Ard Biesheuvel
  0 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-21 15:41 UTC (permalink / raw)
  To: Linus Walleij
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Guenter Roeck,
	Peter Zijlstra, Arnd Bergmann

On Tue, 21 Mar 2023 at 15:59, Linus Walleij <linus.walleij@linaro.org> wrote:
>
> On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:
>
> > Now that the VFP support code has been reimplemented as a C function
> > that takes a struct pt_regs pointer and an opcode, we can use the
> > existing undef_hook framework to deal with undef exceptions triggered by
> > VFP instructions instead of having special handling in assembler.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>
> Provided we can test that this does the right thing it looks really neat:
> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
>

Thanks.

I have tested this on BB white (Cortex-A8) and on RPi1 (1176), using
several instances of testfloat running concurrently, and things work
as expected in all cases. (The BB white is my VPN gateway so it uses
kernel mode NEON as well).

More test coverage is always welcome, of course ..


* Re: [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs
  2023-03-21 14:44   ` Linus Walleij
@ 2023-03-21 15:42     ` Ard Biesheuvel
  2023-03-21 20:40       ` Linus Walleij
  2023-03-22  7:26       ` Arnd Bergmann
  2023-03-21 20:00     ` Nicolas Pitre
  1 sibling, 2 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-21 15:42 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Nicolas Pitre, Nicolas Pitre, linux-arm-kernel, linux,
	Frederic Weisbecker, Guenter Roeck, Peter Zijlstra,
	Arnd Bergmann

On Tue, 21 Mar 2023 at 15:44, Linus Walleij <linus.walleij@linaro.org> wrote:
>
> On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:
>
> > Feroceon CPUs have a non-standard implementation of VFP which reports
> > synchronous VFP exceptions using the async VFP flag. This requires a
> > workaround which is difficult to reconcile with other implementations,
> > making it tricky to support both versions in a single image.
> >
> > Since this is a v5 CPU, it is not supported by armhf and so the
> > likelihood that anybody is using this with recent distros/kernels and
> > relying on the VFP at the same time is extremely low. So let's just disable
> > VFP support on these cores, so we can remove the workaround.
> >
> > This will help future development to support v5 and v6 CPUs with a
> > single kernel image.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>
> I agree, I have one of those systems as my NAS, currently running
> an (unsupported) version of ArchLinuxARM:
>
> $ cat /proc/cpuinfo
> processor    : 0
> model name    : Feroceon 88FR131 rev 1 (v5l)
> BogoMIPS    : 83.33
> Features    : swp half thumb fastmult edsp
> CPU implementer    : 0x56
> CPU architecture: 5TE
> CPU variant    : 0x2
> CPU part    : 0x131
> CPU revision    : 1
>
> Hm doesn't even have any VFP, I don't know what spins of this Marvell
> silicon that even does?

Arnd might have some more insight into that. Does this kernel have
CONFIG_VFP enabled?

> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
>
> I think this is again on the level of a tree falling in the forest and no-one
> being there to hear it, but let's page Nico again because I am pretty
> sure if anyone worked with this it was him.
>
> Yours,
> Linus Walleij


* Re: [PATCH v4 04/12] ARM: entry: Fix iWMMXT TIF flag handling
  2023-03-21 14:32   ` Linus Walleij
@ 2023-03-21 19:19     ` Nicolas Pitre
  2023-03-21 19:32       ` Ard Biesheuvel
  0 siblings, 1 reply; 32+ messages in thread
From: Nicolas Pitre @ 2023-03-21 19:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Linus Walleij
  Cc: linux-arm-kernel, Russell King - ARM Linux, Frederic Weisbecker,
	Guenter Roeck, Peter Zijlstra, Arnd Bergmann, stable

On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> The conditional MOVS instruction that appears to have been added to test
> for the TIF_USING_IWMMXT thread_info flag only sets the N and Z
> condition flags and register R7, none of which are referenced in the
> subsequent code.

Really?

As far as I know, the rsb instruction is a "reversed subtract" and that 
also sets the carry flag.

And so does a move with a shifter argument (the last dropped bit is 
moved to the carry flag).

What am I missing?


Nicolas


* Re: [PATCH v4 04/12] ARM: entry: Fix iWMMXT TIF flag handling
  2023-03-21 19:19     ` Nicolas Pitre
@ 2023-03-21 19:32       ` Ard Biesheuvel
  0 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-21 19:32 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Linus Walleij, linux-arm-kernel, Russell King - ARM Linux,
	Frederic Weisbecker, Guenter Roeck, Peter Zijlstra,
	Arnd Bergmann, stable

On Tue, 21 Mar 2023 at 20:19, Nicolas Pitre <npitre@baylibre.com> wrote:
>
> On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:
>
> > The conditional MOVS instruction that appears to have been added to test
> > for the TIF_USING_IWMMXT thread_info flag only sets the N and Z
> > condition flags and register R7, none of which are referenced in the
> > subsequent code.
>
> Really?
>
> As far as I know, the rsb instruction is a "reversed subtract" and that
> also sets the carry flag.
>
> And so does a move with a shifter argument (the last dropped bit is
> moved to the carry flag).
>
> What am I missing?
>

No, you are absolutely right. I looked up the wrong encoding in the
ARM ARM. MOVS without a shift preserves the carry flag, but the
variant you used here behaves as you describe.

So the code is correct - apologies for the noise.
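
To spell out the architectural rule the thread just settled on: a flag-setting MOV with an immediate LSR shift copies the last bit shifted out into C (and a flag-setting RSB rewrites C from the subtraction), while a plain MOVS with no shift leaves C untouched. A rough C model of the LSR case, limited to shift amounts 1..31 so the C shifts stay well defined:

#include <stdbool.h>
#include <stdint.h>

struct cpsr_flags { bool n, z, c; };

/* Model of "MOVS Rd, Rm, LSR #n" for n in 1..31: N and Z come from the
 * shifted result, C becomes the last bit shifted out, which is bit n-1
 * of Rm. */
static struct cpsr_flags movs_lsr(uint32_t rm, unsigned int n,
				  struct cpsr_flags f)
{
	uint32_t rd = rm >> n;

	f.c = (rm >> (n - 1)) & 1;
	f.n = rd >> 31;
	f.z = (rd == 0);
	return f;
}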


* Re: [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs
  2023-03-21 14:44   ` Linus Walleij
  2023-03-21 15:42     ` Ard Biesheuvel
@ 2023-03-21 20:00     ` Nicolas Pitre
  1 sibling, 0 replies; 32+ messages in thread
From: Nicolas Pitre @ 2023-03-21 20:00 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Ard Biesheuvel, linux-arm-kernel, Russell King - ARM Linux,
	Frederic Weisbecker, Guenter Roeck, Peter Zijlstra,
	Arnd Bergmann

On Tue, 21 Mar 2023, Linus Walleij wrote:

> On Mon, Mar 20, 2023 at 2:19 PM Ard Biesheuvel <ardb@kernel.org> wrote:
> 
> > Feroceon CPUs have a non-standard implementation of VFP which reports
> > synchronous VFP exceptions using the async VFP flag. This requires a
> > workaround which is difficult to reconcile with other implementations,
> > making it tricky to support both versions in a single image.
> >
> > Since this is a v5 CPU, it is not supported by armhf and so the
> > likelihood that anybody is using this with recent distros/kernels and
> > relying on the VFP at the same time is extremely low. So let's just disable
> > VFP support on these cores, so we can remove the workaround.
> >
> > This will help future development to support v5 and v6 CPUs with a
> > single kernel image.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> 
> I agree, I have one of those systems as my NAS, currently running
> an (unsupported) version of ArchLinuxARM:
> 
> $ cat /proc/cpuinfo
> processor    : 0
> model name    : Feroceon 88FR131 rev 1 (v5l)
> BogoMIPS    : 83.33
> Features    : swp half thumb fastmult edsp
> CPU implementer    : 0x56
> CPU architecture: 5TE
> CPU variant    : 0x2
> CPU part    : 0x131
> CPU revision    : 1
> 
> Hm doesn't even have any VFP, I don't know what spins of this Marvell
> silicon that even does?
> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
> 
> I think this is again on the level of a tree falling in the forest and no-one
> being there to hear it, but let's page Nico again because I am pretty
> sure if anyone worked with this it was him.

That was more than 15 years ago it seems. Not getting any younger.

I don't think (or rather I don't remember to be more accurate) that any 
Marvell ARMv5TE had any VFP support. In any case, as stated above, no 
ARMv5 compatible distros have VFP support.

Acked-by: Nicolas Pitre <nico@fluxnic.net>


Nicolas


* Re: [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs
  2023-03-21 15:42     ` Ard Biesheuvel
@ 2023-03-21 20:40       ` Linus Walleij
  2023-03-22  7:26       ` Arnd Bergmann
  1 sibling, 0 replies; 32+ messages in thread
From: Linus Walleij @ 2023-03-21 20:40 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Nicolas Pitre, Nicolas Pitre, linux-arm-kernel, linux,
	Frederic Weisbecker, Guenter Roeck, Peter Zijlstra,
	Arnd Bergmann

On Tue, Mar 21, 2023 at 4:43 PM Ard Biesheuvel <ardb@kernel.org> wrote:
> On Tue, 21 Mar 2023 at 15:44, Linus Walleij <linus.walleij@linaro.org> wrote:

> > Hm doesn't even have any VFP, I don't know what spins of this Marvell
> > silicon that even does?
>
> Arnd might have some more insight in that. Does this kernel have
> CONFIG_VFP enabled?

Ah no, it's the mvebu_v5_defconfig which apparently doesn't even
enable this. So even less chance of running into it.

Yours,
Linus Walleij


* Re: [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs
  2023-03-21 15:42     ` Ard Biesheuvel
  2023-03-21 20:40       ` Linus Walleij
@ 2023-03-22  7:26       ` Arnd Bergmann
  1 sibling, 0 replies; 32+ messages in thread
From: Arnd Bergmann @ 2023-03-22  7:26 UTC (permalink / raw)
  To: Ard Biesheuvel, Linus Walleij
  Cc: Nicolas Pitre, Nicolas Pitre, linux-arm-kernel, Russell King,
	Frederic Weisbecker, Guenter Roeck, Peter Zijlstra

On Tue, Mar 21, 2023, at 16:42, Ard Biesheuvel wrote:
> On Tue, 21 Mar 2023 at 15:44, Linus Walleij <linus.walleij@linaro.org> wrote:

>> I agree, I have one of those systems as my NAS, currently running
>> an (unsupported) version of ArchLinuxARM:
>>
>> $ cat /proc/cpuinfo
>> processor    : 0
>> model name    : Feroceon 88FR131 rev 1 (v5l)
>> BogoMIPS    : 83.33
>> Features    : swp half thumb fastmult edsp
>> CPU implementer    : 0x56
>> CPU architecture: 5TE
>> CPU variant    : 0x2
>> CPU part    : 0x131
>> CPU revision    : 1
>>
>> Hm doesn't even have any VFP, I don't know what spins of this Marvell
>> silicon that even does?
>
> Arnd might have some more insight in that.

I did the research again last week when we discussed this
on IRC. It turns out that the information I had put into
Documentation/arm/marvell.rst is actually correct, and
out of the various Feroceon/Mohawk CPU cores, only the
very last dual-issue "Jolteon" cores have VFP; the
numbers are 88fr571-vd (used in mv78xx0) and
Feroceon-2850 88fr531-vd (used in Orion-2 88F5281,
but not the more common Orion 88F518x and Kirkwood).

On a board level, none of the machines using DT have these
chips; the only remaining board files in 6.3 are
TERASTATION_WXL, TERASTATION_PRO2 and TS409. The first
two of those are being actively supported by the
https://github.com/1000001101000/Debian_on_Buffalo
project, but the maintainer confirmed that they don't
expect any user to run VFP-enabled binaries on the
Debian armel distro.

      Arnd


* Re: [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs
  2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
                   ` (11 preceding siblings ...)
  2023-03-20 13:18 ` [PATCH v4 12/12] ARM: entry: Make asm coproc dispatch code NWFPE only Ard Biesheuvel
@ 2023-03-23  2:44 ` Guenter Roeck
  2023-03-23  8:33   ` Ard Biesheuvel
  12 siblings, 1 reply; 32+ messages in thread
From: Guenter Roeck @ 2023-03-23  2:44 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Peter Zijlstra,
	Linus Walleij, Arnd Bergmann

On Mon, Mar 20, 2023 at 02:18:33PM +0100, Ard Biesheuvel wrote:
> As it turns out, enabling or disabling softirqs from asm code is more
> tricky than it looks. Simply bumping the associated bit in preempt_count
> does the trick for uninstrumented kernels, but with lockdep or preempt
> debug enabled, we really need to call the C versions, as replicating
> their behavior in asm fully is intractible.
> 
> So let's rework the existing code a little bit so we can interpose a
> little C helper 'vfp_entry()' that disables softirqs before calling the
> existing vfpstate handling code in asm. Re-enabling softirqs from asm
> code is more straight-forward, as we can simply perform a tail call via
> a C routine that is guaranteed to be callable as a function.
> 
> However, since calling these APIs from asm code is still not ideal,
> let's reimplement the whole thing in C (patch #4)
> 

FWIW, all my qemu tests passed for this series.

Guenter


* Re: [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs
  2023-03-23  2:44 ` [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Guenter Roeck
@ 2023-03-23  8:33   ` Ard Biesheuvel
  0 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2023-03-23  8:33 UTC (permalink / raw)
  To: Guenter Roeck
  Cc: linux-arm-kernel, linux, Frederic Weisbecker, Peter Zijlstra,
	Linus Walleij, Arnd Bergmann

On Thu, 23 Mar 2023 at 03:44, Guenter Roeck <linux@roeck-us.net> wrote:
>
> On Mon, Mar 20, 2023 at 02:18:33PM +0100, Ard Biesheuvel wrote:
> > As it turns out, enabling or disabling softirqs from asm code is more
> > tricky than it looks. Simply bumping the associated bit in preempt_count
> > does the trick for uninstrumented kernels, but with lockdep or preempt
> > debug enabled, we really need to call the C versions, as replicating
> > their behavior in asm fully is intractible.
> >
> > So let's rework the existing code a little bit so we can interpose a
> > little C helper 'vfp_entry()' that disables softirqs before calling the
> > existing vfpstate handling code in asm. Re-enabling softirqs from asm
> > code is more straight-forward, as we can simply perform a tail call via
> > a C routine that is guaranteed to be callable as a function.
> >
> > However, since calling these APIs from asm code is still not ideal,
> > let's reimplement the whole thing in C (patch #4)
> >
>
> FWIW, all my qemu tests passed for this series.
>

Thanks for testing!


* Re: [PATCH v4 03/12] ARM: vfp: Fix broken softirq handling with instrumentation enabled
  2023-03-20 13:18 ` [PATCH v4 03/12] ARM: vfp: Fix broken softirq handling with instrumentation enabled Ard Biesheuvel
@ 2023-04-09 14:29   ` Linux regression tracking (Thorsten Leemhuis)
  2023-04-10 20:15     ` Guenter Roeck
  0 siblings, 1 reply; 32+ messages in thread
From: Linux regression tracking (Thorsten Leemhuis) @ 2023-04-09 14:29 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel, linux
  Cc: Frederic Weisbecker, Guenter Roeck, Peter Zijlstra,
	Linus Walleij, Arnd Bergmann, Linux kernel regressions list

On 20.03.23 14:18, Ard Biesheuvel wrote:
> Commit 62b95a7b44d1 ("ARM: 9282/1: vfp: Manipulate task VFP state with
> softirqs disabled") replaced the en/disable preemption calls inside the
> VFP state handling code with en/disabling of soft IRQs, which is
> necessary to allow kernel use of the VFP/SIMD unit when handling a soft
> IRQ.
> 
> Unfortunately, when lockdep is enabled (or other instrumentation that
> enables TRACE_IRQFLAGS), the disable path implemented in asm fails to
> perform the lockdep and RCU related bookkeeping, resulting in spurious
> warnings and other badness.
> 
> So let's rework the VFP entry code a little bit so we can make the
> local_bh_disable() call from C, with all the instrumentations that
> happen to have been configured. Calling local_bh_enable() can be done
> from asm, as it is a simple wrapper around __local_bh_enable_ip(), which
> is always a callable function.
> 
> Link: https://lore.kernel.org/all/ZBBYCSZUJOWBg1s8@localhost.localdomain/
> Fixes: 62b95a7b44d1 ("ARM: 9282/1: vfp: Manipulate task VFP state with softirqs disabled")
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
> Tested-by: Guenter Roeck <linux@roeck-us.net>
> [...]

It feels a lot like I'm missing something, but I can't figure out what,
so I have to ask:

What happened to this patch and/or the series? From my very limited
viewpoint it looks like there was no progress for nearly three weeks now.

Reminder: 62b95a7b44d1 is a commit from the 6.3 cycle fixing a problem
Guenter ran into. Hence to me it looks like something that would better
be fixed this cycle. Or is this considered too dangerous for some
reason and thus needs to wait? But even then I wonder a little bit why
it's not in next by now.

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
If I did something stupid, please tell me, as explained on that page
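
Regarding the "simple wrapper" remark in the quoted commit message: local_bh_enable() is an inline wrapper along the lines of the snippet below (paraphrased from memory; see include/linux/bottom_half.h for the authoritative definition), which is why __local_bh_enable_ip() is the symbol that asm code can actually call.

/* Approximate shape of the wrapper referred to above. */
static inline void local_bh_enable(void)
{
	__local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
}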


* Re: [PATCH v4 03/12] ARM: vfp: Fix broken softirq handling with instrumentation enabled
  2023-04-09 14:29   ` Linux regression tracking (Thorsten Leemhuis)
@ 2023-04-10 20:15     ` Guenter Roeck
  0 siblings, 0 replies; 32+ messages in thread
From: Guenter Roeck @ 2023-04-10 20:15 UTC (permalink / raw)
  To: Linux regressions mailing list, Ard Biesheuvel, linux-arm-kernel, linux
  Cc: Frederic Weisbecker, Peter Zijlstra, Linus Walleij, Arnd Bergmann

On 4/9/23 07:29, Linux regression tracking (Thorsten Leemhuis) wrote:
> On 20.03.23 14:18, Ard Biesheuvel wrote:
>> Commit 62b95a7b44d1 ("ARM: 9282/1: vfp: Manipulate task VFP state with
>> softirqs disabled") replaced the en/disable preemption calls inside the
>> VFP state handling code with en/disabling of soft IRQs, which is
>> necessary to allow kernel use of the VFP/SIMD unit when handling a soft
>> IRQ.
>>
>> Unfortunately, when lockdep is enabled (or other instrumentation that
>> enables TRACE_IRQFLAGS), the disable path implemented in asm fails to
>> perform the lockdep and RCU related bookkeeping, resulting in spurious
>> warnings and other badness.
>>
>> So let's rework the VFP entry code a little bit so we can make the
>> local_bh_disable() call from C, with all the instrumentations that
>> happen to have been configured. Calling local_bh_enable() can be done
>> from asm, as it is a simple wrapper around __local_bh_enable_ip(), which
>> is always a callable function.
>>
>> Link: https://lore.kernel.org/all/ZBBYCSZUJOWBg1s8@localhost.localdomain/
>> Fixes: 62b95a7b44d1 ("ARM: 9282/1: vfp: Manipulate task VFP state with softirqs disabled")
>> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>> Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
>> Tested-by: Guenter Roeck <linux@roeck-us.net>
>> [...]
> 
> It feels a lot like I'm missing something, but I can't figure out what,
> so I have to ask:
> 
> What happened to this patch and/or the series? From my very limited
> viewpoint it looks like there was no progress for nearly three weeks now.
> 
> Reminder: 62b95a7b44d1 is a commit from the 6.3 cycle fixing a problem
> Guenter ran into. Hence to me it looks like something that would better
> be fixed this cycle. Or is this considered too dangerous for some
> reason and thus needs to wait? But even then I wonder a little bit why
> it's not in next by now.
> 

Something for sure has changed. I can no longer reproduce the problem
with the latest kernel (v6.3-rc6). I can still reproduce it with
v6.3-rc2. Further testing shows that it is no longer reproducible
for me with v6.3-rc3. That made me curious. I am currently bisecting,
but that will take a while.

Guenter



Thread overview: 32+ messages
2023-03-20 13:18 [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Ard Biesheuvel
2023-03-20 13:18 ` [PATCH v4 01/12] ARM: vfp: Pass thread_info pointer to vfp_support_entry Ard Biesheuvel
2023-03-20 13:18 ` [PATCH v4 02/12] ARM: vfp: Pass successful return address via register R3 Ard Biesheuvel
2023-03-20 13:18 ` [PATCH v4 03/12] ARM: vfp: Fix broken softirq handling with instrumentation enabled Ard Biesheuvel
2023-04-09 14:29   ` Linux regression tracking (Thorsten Leemhuis)
2023-04-10 20:15     ` Guenter Roeck
2023-03-20 13:18 ` [PATCH v4 04/12] ARM: entry: Fix iWMMXT TIF flag handling Ard Biesheuvel
2023-03-21 14:32   ` Linus Walleij
2023-03-21 19:19     ` Nicolas Pitre
2023-03-21 19:32       ` Ard Biesheuvel
2023-03-20 13:18 ` [PATCH v4 05/12] ARM: vfp: Record VFP bounces as perf emulation faults Ard Biesheuvel
2023-03-21 14:33   ` Linus Walleij
2023-03-20 13:18 ` [PATCH v4 06/12] ARM: vfp: Remove workaround for Feroceon CPUs Ard Biesheuvel
2023-03-21 14:44   ` Linus Walleij
2023-03-21 15:42     ` Ard Biesheuvel
2023-03-21 20:40       ` Linus Walleij
2023-03-22  7:26       ` Arnd Bergmann
2023-03-21 20:00     ` Nicolas Pitre
2023-03-20 13:18 ` [PATCH v4 07/12] ARM: vfp: Reimplement VFP exception entry in C code Ard Biesheuvel
2023-03-20 13:18 ` [PATCH v4 08/12] ARM: kernel: Get rid of thread_info::used_cp[] array Ard Biesheuvel
2023-03-21 14:58   ` Linus Walleij
2023-03-20 13:18 ` [PATCH v4 09/12] ARM: vfp: Use undef hook for handling VFP exceptions Ard Biesheuvel
2023-03-21 14:59   ` Linus Walleij
2023-03-21 15:41     ` Ard Biesheuvel
2023-03-20 13:18 ` [PATCH v4 10/12] ARM: entry: Disregard Thumb undef exception in coproc dispatch Ard Biesheuvel
2023-03-21 15:05   ` Linus Walleij
2023-03-20 13:18 ` [PATCH v4 11/12] ARM: iwmmxt: Use undef hook to enable coprocessor for task Ard Biesheuvel
2023-03-21 15:06   ` Linus Walleij
2023-03-20 13:18 ` [PATCH v4 12/12] ARM: entry: Make asm coproc dispatch code NWFPE only Ard Biesheuvel
2023-03-21 15:11   ` Linus Walleij
2023-03-23  2:44 ` [PATCH v4 00/12] ARM: vfp: Switch to C API to en/disable softirqs Guenter Roeck
2023-03-23  8:33   ` Ard Biesheuvel
