* [PATCH resend 0/2] preparatory arm64 asm patches for yielding the NEON
@ 2018-03-28 12:41 Ard Biesheuvel
  2018-03-28 12:41 ` [PATCH resend 1/2] arm64: assembler: add utility macros to push/pop stack frames Ard Biesheuvel
  2018-03-28 12:41 ` [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT Ard Biesheuvel
  0 siblings, 2 replies; 13+ messages in thread
From: Ard Biesheuvel @ 2018-03-28 12:41 UTC (permalink / raw)
  To: linux-arm-kernel

The RT people reported that the arm64 crypto NEON code behaves poorly in RT
context, because it disables preemption (to avoid having to context switch
the NEON registers) and usually processes the entire input in one go. When we
introduced this code, this was not unreasonable given the overhead of eager
preserve/restore, but today, there isn't that much overhead anymore, and so
we can consider approaches that have much better worst case scheduling latency.

Simply refactoring the code to only call into the core NEON transform one
block at a time results in a non-negligible performance impact, especially
on low end cores such as Cortex-A53 where memory accesses are relatively
costly. So instead, let's introduce some infrastructure to allow assembler
routines to do a conditional yield, i.e., check the TIF_NEED_RESCHED flag
after processing each block of input, and yield if it is set, in which case
some context may need to be preserved and restored, and/or constant tables
reloaded.

Patch #1 adds helper macros to create standard AAPCS stack frames. This is
needed because the assembler code will be modified to call into schedule()
[essentially], and so a stack frame is needed to preserve state.
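
For illustration only (a hypothetical routine, not code from this series),
a transform that becomes a non-leaf function would bracket its body roughly
like this, preserving three callee saved registers and no extra locals:

	ENTRY(hypothetical_transform)
	frame_push	3			// saves x29/x30 and x19-x21

	mov		x19, x0			// state pointer
	mov		x20, x1			// input pointer
	mov		x21, x2			// number of blocks

	/* ... body that may now call other functions ... */

	frame_pop				// restores x19-x21 and x29/x30
	ret
	ENDPROC(hypothetical_transform)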

Patch #2 adds helper macros to create the yielding code: check whether a
yield should be done, and preserve/restore the algorithm specific pieces
that will not be preserved across the yield in the NEON registers.
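
Again purely as an illustration (a sketch continuing the hypothetical routine
above, not code from this series), the three macros are meant to wrap the
per-block loop, with the code in between spilling and reloading whatever NEON
state would otherwise be lost across the kernel_neon_end()/kernel_neon_begin()
pair:

0:	/* process one block of input using the NEON */
	subs		w21, w21, #1		// blocks left?
	b.eq		9f

	if_will_cond_yield_neon
	st1		{v8.4s-v11.4s}, [x19]	// pre-yield: save working state
	do_cond_yield_neon
	ld1		{v8.4s-v11.4s}, [x19]	// post-yield: reload it
	endif_yield_neon	0b		// resume with the next block

	b		0b
9:	frame_pop
	ret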

These patches have been broken out from the arm64/crypto series and resent
since they require careful review from the arm64 maintainers, rather than
being pulled silently via the crypto tree (which already happened by accident
and got reverted).

The full tree is here:
https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=arm64-crypto-preempt


Ard Biesheuvel (2):
  arm64: assembler: add utility macros to push/pop stack frames
  arm64: assembler: add macros to conditionally yield the NEON under
    PREEMPT

 arch/arm64/include/asm/assembler.h | 122 ++++++++++++++++++++
 arch/arm64/kernel/asm-offsets.c    |   2 +
 2 files changed, 124 insertions(+)

-- 
2.11.0

* [PATCH resend 1/2] arm64: assembler: add utility macros to push/pop stack frames
  2018-03-28 12:41 [PATCH resend 0/2] preparatory arm64 asm patches for yielding the NEON Ard Biesheuvel
@ 2018-03-28 12:41 ` Ard Biesheuvel
  2018-03-28 16:34   ` Dave Martin
  2018-03-28 12:41 ` [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT Ard Biesheuvel
  1 sibling, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2018-03-28 12:41 UTC (permalink / raw)
  To: linux-arm-kernel

We are going to add code to all the NEON crypto routines that will
turn them into non-leaf functions, so we need to manage the stack
frames. To make this less tedious and error prone, add some macros
that take the number of callee saved registers to preserve and the
extra size to allocate in the stack frame (for locals) and emit
the ldp/stp sequences.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/assembler.h | 58 ++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 053d83e8db6f..d354eb7f2f0c 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -565,4 +565,62 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
 #endif
 	.endm
 
+	/*
+	 * frame_push - Push @regcount callee saved registers to the stack,
+	 *              starting at x19, as well as x29/x30, and set x29 to
+	 *              the new value of sp. Add @extra bytes of stack space
+	 *              for locals.
+	 */
+	.macro		frame_push, regcount:req, extra
+	__frame		st, \regcount, \extra
+	.endm
+
+	/*
+	 * frame_pop  - Pop the callee saved registers from the stack that were
+	 *              pushed in the most recent call to frame_push, as well
+	 *              as x29/x30 and any extra stack space that may have been
+	 *              allocated.
+	 */
+	.macro		frame_pop
+	__frame		ld
+	.endm
+
+	.macro		__frame_regs, reg1, reg2, op, num
+	.if		.Lframe_regcount == \num
+	\op\()r		\reg1, [sp, #(\num + 1) * 8]
+	.elseif		.Lframe_regcount > \num
+	\op\()p		\reg1, \reg2, [sp, #(\num + 1) * 8]
+	.endif
+	.endm
+
+	.macro		__frame, op, regcount, extra=0
+	.ifc		\op, st
+	.if		(\regcount) < 0 || (\regcount) > 10
+	.error		"regcount should be in the range [0 ... 10]"
+	.endif
+	.if		((\extra) % 16) != 0
+	.error		"extra should be a multiple of 16 bytes"
+	.endif
+	.set		.Lframe_regcount, \regcount
+	.set		.Lframe_extra, \extra
+	.set		.Lframe_local_offset, ((\regcount + 3) / 2) * 16
+	stp		x29, x30, [sp, #-.Lframe_local_offset - .Lframe_extra]!
+	mov		x29, sp
+	.endif
+
+	__frame_regs	x19, x20, \op, 1
+	__frame_regs	x21, x22, \op, 3
+	__frame_regs	x23, x24, \op, 5
+	__frame_regs	x25, x26, \op, 7
+	__frame_regs	x27, x28, \op, 9
+
+	.ifc		\op, ld
+	.if		.Lframe_regcount == -1
+	.error		"frame_push/frame_pop may not be nested"
+	.endif
+	ldp		x29, x30, [sp], #.Lframe_local_offset + .Lframe_extra
+	.set		.Lframe_regcount, -1
+	.endif
+	.endm
+
 #endif	/* __ASM_ASSEMBLER_H */
-- 
2.11.0

* [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT
  2018-03-28 12:41 [PATCH resend 0/2] preparatory arm64 asm patches for yielding the NEON Ard Biesheuvel
  2018-03-28 12:41 ` [PATCH resend 1/2] arm64: assembler: add utility macros to push/pop stack frames Ard Biesheuvel
@ 2018-03-28 12:41 ` Ard Biesheuvel
  2018-03-28 17:18   ` Dave Martin
  1 sibling, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2018-03-28 12:41 UTC (permalink / raw)
  To: linux-arm-kernel

Add support macros to conditionally yield the NEON (and thus the CPU)
that may be called from the assembler code.

In some cases, yielding the NEON involves saving and restoring a non
trivial amount of context (especially in the CRC folding algorithms),
and so the macro is split into three, and the code in between is only
executed when the yield path is taken, allowing the context to be preserved.
The third macro takes an optional label argument that marks the resume
path after a yield has been performed.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/assembler.h | 64 ++++++++++++++++++++
 arch/arm64/kernel/asm-offsets.c    |  2 +
 2 files changed, 66 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index d354eb7f2f0c..fb11514273d9 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -623,4 +623,68 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
 	.endif
 	.endm
 
+/*
+ * Check whether to yield to another runnable task from kernel mode NEON code
+ * (which runs with preemption disabled).
+ *
+ * if_will_cond_yield_neon
+ *        // pre-yield patchup code
+ * do_cond_yield_neon
+ *        // post-yield patchup code
+ * endif_yield_neon    <label>
+ *
+ * where <label> is optional, and marks the point where execution will resume
+ * after a yield has been performed. If omitted, execution resumes right after
+ * the endif_yield_neon invocation.
+ *
+ * Note that the patchup code does not support assembler directives that change
+ * the output section, any use of such directives is undefined.
+ *
+ * The yield itself consists of the following:
+ * - Check whether the preempt count is exactly 1, in which case disabling
+ *   preemption once will make the task preemptible. If this is not the case,
+ *   yielding is pointless.
+ * - Check whether TIF_NEED_RESCHED is set, and if so, disable and re-enable
+ *   kernel mode NEON (which will trigger a reschedule), and branch to the
+ *   yield fixup code.
+ *
+ * This macro sequence clobbers x0, x1 and the flags register unconditionally,
+ * and may clobber x2 .. x18 if the yield path is taken.
+ */
+
+	.macro		cond_yield_neon, lbl
+	if_will_cond_yield_neon
+	do_cond_yield_neon
+	endif_yield_neon	\lbl
+	.endm
+
+	.macro		if_will_cond_yield_neon
+#ifdef CONFIG_PREEMPT
+	get_thread_info	x0
+	ldr		w1, [x0, #TSK_TI_PREEMPT]
+	ldr		x0, [x0, #TSK_TI_FLAGS]
+	cmp		w1, #PREEMPT_DISABLE_OFFSET
+	csel		x0, x0, xzr, eq
+	tbnz		x0, #TIF_NEED_RESCHED, .Lyield_\@	// needs rescheduling?
+#endif
+	/* fall through to endif_yield_neon */
+	.subsection	1
+.Lyield_\@ :
+	.endm
+
+	.macro		do_cond_yield_neon
+	bl		kernel_neon_end
+	bl		kernel_neon_begin
+	.endm
+
+	.macro		endif_yield_neon, lbl
+	.ifnb		\lbl
+	b		\lbl
+	.else
+	b		.Lyield_out_\@
+	.endif
+	.previous
+.Lyield_out_\@ :
+	.endm
+
 #endif	/* __ASM_ASSEMBLER_H */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 1303e04110cd..1e2ea2e51acb 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -93,6 +93,8 @@ int main(void)
   DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
   DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
   BLANK();
+  DEFINE(PREEMPT_DISABLE_OFFSET, PREEMPT_DISABLE_OFFSET);
+  BLANK();
   DEFINE(CLOCK_REALTIME,	CLOCK_REALTIME);
   DEFINE(CLOCK_MONOTONIC,	CLOCK_MONOTONIC);
   DEFINE(CLOCK_MONOTONIC_RAW,	CLOCK_MONOTONIC_RAW);
-- 
2.11.0

* [PATCH resend 1/2] arm64: assembler: add utility macros to push/pop stack frames
  2018-03-28 12:41 ` [PATCH resend 1/2] arm64: assembler: add utility macros to push/pop stack frames Ard Biesheuvel
@ 2018-03-28 16:34   ` Dave Martin
  2018-03-29  8:54     ` Ard Biesheuvel
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Martin @ 2018-03-28 16:34 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Mar 28, 2018 at 02:41:28PM +0200, Ard Biesheuvel wrote:
> We are going to add code to all the NEON crypto routines that will
> turn them into non-leaf functions, so we need to manage the stack
> frames. To make this less tedious and error prone, add some macros
> that take the number of callee saved registers to preserve and the

Apologies for the delay in looking at these patches...

Anyway:

Nit: for all instances of "callee saved" in this patch, do you mean "caller saved"?

A few stylistic comments below, but I don't consider them essential to
address unless someone feels like it.

Otherwise,
Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> extra size to allocate in the stack frame (for locals) and emit
> the ldp/stp sequences.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/assembler.h | 58 ++++++++++++++++++++
>  1 file changed, 58 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 053d83e8db6f..d354eb7f2f0c 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -565,4 +565,62 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
>  #endif
>  	.endm
>  
> +	/*
> +	 * frame_push - Push @regcount callee saved registers to the stack,
> +	 *              starting at x19, as well as x29/x30, and set x29 to
> +	 *              the new value of sp. Add @extra bytes of stack space
> +	 *              for locals.
> +	 */
> +	.macro		frame_push, regcount:req, extra
> +	__frame		st, \regcount, \extra
> +	.endm
> +
> +	/*
> +	 * frame_pop  - Pop the callee saved registers from the stack that were
> +	 *              pushed in the most recent call to frame_push, as well
> +	 *              as x29/x30 and any extra stack space that may have been
> +	 *              allocated.
> +	 */
> +	.macro		frame_pop
> +	__frame		ld
> +	.endm
> +
> +	.macro		__frame_regs, reg1, reg2, op, num
> +	.if		.Lframe_regcount == \num
> +	\op\()r		\reg1, [sp, #(\num + 1) * 8]
> +	.elseif		.Lframe_regcount > \num
> +	\op\()p		\reg1, \reg2, [sp, #(\num + 1) * 8]
> +	.endif
> +	.endm
> +
> +	.macro		__frame, op, regcount, extra=0
> +	.ifc		\op, st
> +	.if		(\regcount) < 0 || (\regcount) > 10
> +	.error		"regcount should be in the range [0 ... 10]"
> +	.endif
> +	.if		((\extra) % 16) != 0
> +	.error		"extra should be a multiple of 16 bytes"
> +	.endif
> +	.set		.Lframe_regcount, \regcount
> +	.set		.Lframe_extra, \extra
> +	.set		.Lframe_local_offset, ((\regcount + 3) / 2) * 16
> +	stp		x29, x30, [sp, #-.Lframe_local_offset - .Lframe_extra]!
> +	mov		x29, sp
> +	.endif
> +
> +	__frame_regs	x19, x20, \op, 1
> +	__frame_regs	x21, x22, \op, 3
> +	__frame_regs	x23, x24, \op, 5
> +	__frame_regs	x25, x26, \op, 7
> +	__frame_regs	x27, x28, \op, 9
> +
> +	.ifc		\op, ld
> +	.if		.Lframe_regcount == -1

We could also have

	.ifc		\op, st
	.ifdef		.Lframe_regcount
	.if		.Lframe_regcount != -1
	.error [...]

on the push side, which would trip on the first nested frame_push
rather than waiting until a frame_pop appears.

Your existing code could be retained to guard against a double pop.

> +	.error		"frame_push/frame_pop may not be nested"
> +	.endif
> +	ldp		x29, x30, [sp], #.Lframe_local_offset + .Lframe_extra
> +	.set		.Lframe_regcount, -1
> +	.endif
> +	.endm
> +
>  #endif	/* __ASM_ASSEMBLER_H */
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

* [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT
  2018-03-28 12:41 ` [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT Ard Biesheuvel
@ 2018-03-28 17:18   ` Dave Martin
  2018-03-29  9:02     ` Ard Biesheuvel
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Martin @ 2018-03-28 17:18 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Mar 28, 2018 at 02:41:29PM +0200, Ard Biesheuvel wrote:
> Add support macros to conditionally yield the NEON (and thus the CPU)
> that may be called from the assembler code.
> 
> In some cases, yielding the NEON involves saving and restoring a non
> trivial amount of context (especially in the CRC folding algorithms),
> and so the macro is split into three, and the code in between is only
> executed when the yield path is taken, allowing the context to be preserved.
> The third macro takes an optional label argument that marks the resume
> path after a yield has been performed.

Minor comments below, mostly just suggestions/observations.

With the missing #include in asm-offsets.c fixed (if you think it's
appropriate):

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/assembler.h | 64 ++++++++++++++++++++
>  arch/arm64/kernel/asm-offsets.c    |  2 +
>  2 files changed, 66 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index d354eb7f2f0c..fb11514273d9 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -623,4 +623,68 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
>  	.endif
>  	.endm
>  
> +/*
> + * Check whether to yield to another runnable task from kernel mode NEON code
> + * (which runs with preemption disabled).
> + *
> + * if_will_cond_yield_neon
> + *        // pre-yield patchup code
> + * do_cond_yield_neon
> + *        // post-yield patchup code
> + * endif_yield_neon    <label>
> + *
> + * where <label> is optional, and marks the point where execution will resume
> + * after a yield has been performed. If omitted, execution resumes right after
> + * the endif_yield_neon invocation.

Maybe add a comment describing cond_yield_neon, e.g.:

 *
 * As a convenience, in the case where no patchup code is required
 * the above sequence may be abbreviated to:
 *
 * cond_yield_neon <label>

> + *
> + * Note that the patchup code does not support assembler directives that change
> + * the output section, any use of such directives is undefined.
> + *
> + * The yield itself consists of the following:
> + * - Check whether the preempt count is exactly 1, in which case disabling
> + *   preemption once will make the task preemptible. If this is not the case,
> + *   yielding is pointless.
> + * - Check whether TIF_NEED_RESCHED is set, and if so, disable and re-enable
> + *   kernel mode NEON (which will trigger a reschedule), and branch to the
> + *   yield fixup code.
> + *
> + * This macro sequence clobbers x0, x1 and the flags register unconditionally,
> + * and may clobber x2 .. x18 if the yield path is taken.
> + */

Does this mean that the pre-yield patchup code can safely refer to
x2..x18, but the post-yield patchup code and the code at <label> (or
otherwise immediately following endif_yield_neon) can't?

> +
> +	.macro		cond_yield_neon, lbl
> +	if_will_cond_yield_neon
> +	do_cond_yield_neon
> +	endif_yield_neon	\lbl
> +	.endm
> +
> +	.macro		if_will_cond_yield_neon
> +#ifdef CONFIG_PREEMPT
> +	get_thread_info	x0
> +	ldr		w1, [x0, #TSK_TI_PREEMPT]
> +	ldr		x0, [x0, #TSK_TI_FLAGS]
> +	cmp		w1, #PREEMPT_DISABLE_OFFSET
> +	csel		x0, x0, xzr, eq
> +	tbnz		x0, #TIF_NEED_RESCHED, .Lyield_\@	// needs rescheduling?
> +#endif
> +	/* fall through to endif_yield_neon */
> +	.subsection	1

Can we junk the code in this case rather than including it in the
kernel, like

	.section .discard.cond_yield_neon

(this seems to conform to some notion of a standard discarded section
name, see <asm-generic/vmlinux.lds.h>).  This additionally discards
the do_cond_yield_neon invocation (which I guess is what we'd expect
for a non-preemptible kernel?)

If doing that discard, a note could be added in the comment block
to warn people not to assume that the patchup code and any labels
defined in it will definitely end up in the kernel image.

Since the patchup sequences aren't likely to be many or large, it's
not the end of the world if we don't do this discarding though.

> +.Lyield_\@ :
> +	.endm
> +
> +	.macro		do_cond_yield_neon
> +	bl		kernel_neon_end
> +	bl		kernel_neon_begin
> +	.endm
> +
> +	.macro		endif_yield_neon, lbl
> +	.ifnb		\lbl
> +	b		\lbl
> +	.else
> +	b		.Lyield_out_\@
> +	.endif

Should you include

	.purgem do_cond_yield_neon
	.purgem endif_yield_neon

here?

Otherwise, I think you would get macro redefinition errors if
if_will_cond_yield_neon is used more than once in a given file.

You could maybe protect against nested and misordered macro uses by the
following, though it feels a bit like overkill.  Alternatively you
could use a magic symbol to record the current state, similarly to
frame_{push,pop}.

	.macro __if_will_cond_yield_neon
	.purgem if_will_cond_yield_neon
	//...

	.macro do_cond_yield_neon
	.purgem do_cond_yield_neon
	//...

	.macro endif_yield_neon
	.purgem endif_yield_neon
	//...

	.macro if_will_cond_yield_neon
	__if_will_cond_yield_neon
	.endm // if_will_cond_yield_neon
	.endm // endif_yield_neon
	.endm // do_cond_yield_neon
	.endm // __if_will_cond_yield_neon

	.macro if_will_cond_yield_neon
	__if_will_cond_yield_neon
	.endm

> +	.previous
> +.Lyield_out_\@ :
> +	.endm
> +
>  #endif	/* __ASM_ASSEMBLER_H */
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 1303e04110cd..1e2ea2e51acb 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -93,6 +93,8 @@ int main(void)
>    DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
>    DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
>    BLANK();

#include <linux/preempt.h> ?

> +  DEFINE(PREEMPT_DISABLE_OFFSET, PREEMPT_DISABLE_OFFSET);
> +  BLANK();

[...]

Cheers
---Dave

* [PATCH resend 1/2] arm64: assembler: add utility macros to push/pop stack frames
  2018-03-28 16:34   ` Dave Martin
@ 2018-03-29  8:54     ` Ard Biesheuvel
  2018-03-29  9:28       ` Dave Martin
  0 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2018-03-29  8:54 UTC (permalink / raw)
  To: linux-arm-kernel

On 28 March 2018 at 17:34, Dave Martin <Dave.Martin@arm.com> wrote:
> On Wed, Mar 28, 2018 at 02:41:28PM +0200, Ard Biesheuvel wrote:
>> We are going to add code to all the NEON crypto routines that will
>> turn them into non-leaf functions, so we need to manage the stack
>> frames. To make this less tedious and error prone, add some macros
>> that take the number of callee saved registers to preserve and the
>
> Apologies for the delay in looking at these patches...
>
> Anyway:
>
> Nit: for all instances of "callee saved" in this patch, do you mean "caller saved"?
>

'Caller saved' means that the caller needs to stack/unstack a register
itself if it needs its value to be preserved across a function call.
'Callee saved' means that the caller can rely on the callee to ensure
that the register will retain its value.

So we are dealing with the latter here, afaict. Or am I missing something?
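
To illustrate the distinction under the AAPCS64 (a minimal sketch, with
'some_function' standing in for an arbitrary AAPCS compliant callee, and
assuming the enclosing function has itself saved x19, e.g. via frame_push):

	mov	x19, x0			// x19 is callee saved: any function we
	bl	some_function		// call must preserve it, so it still
					// holds the original value of x0 here

	str	x9, [sp, #-16]!		// x9 is caller saved (a temporary): the
	bl	some_function		// callee may clobber it freely, so we
	ldr	x9, [sp], #16		// must spill/reload it ourselves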

> A few stylistic comments below, but I don't consider them essential to
> address unless someone feels like it.
>
> Otherwise,
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
>

Thanks.

>> extra size to allocate in the stack frame (for locals) and emit
>> the ldp/stp sequences.
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>  arch/arm64/include/asm/assembler.h | 58 ++++++++++++++++++++
>>  1 file changed, 58 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>> index 053d83e8db6f..d354eb7f2f0c 100644
>> --- a/arch/arm64/include/asm/assembler.h
>> +++ b/arch/arm64/include/asm/assembler.h
>> @@ -565,4 +565,62 @@ USER(\label, ic  ivau, \tmp2)                    // invalidate I line PoU
>>  #endif
>>       .endm
>>
>> +     /*
>> +      * frame_push - Push @regcount callee saved registers to the stack,
>> +      *              starting at x19, as well as x29/x30, and set x29 to
>> +      *              the new value of sp. Add @extra bytes of stack space
>> +      *              for locals.
>> +      */
>> +     .macro          frame_push, regcount:req, extra
>> +     __frame         st, \regcount, \extra
>> +     .endm
>> +
>> +     /*
>> +      * frame_pop  - Pop the callee saved registers from the stack that were
>> +      *              pushed in the most recent call to frame_push, as well
>> +      *              as x29/x30 and any extra stack space that may have been
>> +      *              allocated.
>> +      */
>> +     .macro          frame_pop
>> +     __frame         ld
>> +     .endm
>> +
>> +     .macro          __frame_regs, reg1, reg2, op, num
>> +     .if             .Lframe_regcount == \num
>> +     \op\()r         \reg1, [sp, #(\num + 1) * 8]
>> +     .elseif         .Lframe_regcount > \num
>> +     \op\()p         \reg1, \reg2, [sp, #(\num + 1) * 8]
>> +     .endif
>> +     .endm
>> +
>> +     .macro          __frame, op, regcount, extra=0
>> +     .ifc            \op, st
>> +     .if             (\regcount) < 0 || (\regcount) > 10
>> +     .error          "regcount should be in the range [0 ... 10]"
>> +     .endif
>> +     .if             ((\extra) % 16) != 0
>> +     .error          "extra should be a multiple of 16 bytes"
>> +     .endif
>> +     .set            .Lframe_regcount, \regcount
>> +     .set            .Lframe_extra, \extra
>> +     .set            .Lframe_local_offset, ((\regcount + 3) / 2) * 16
>> +     stp             x29, x30, [sp, #-.Lframe_local_offset - .Lframe_extra]!
>> +     mov             x29, sp
>> +     .endif
>> +
>> +     __frame_regs    x19, x20, \op, 1
>> +     __frame_regs    x21, x22, \op, 3
>> +     __frame_regs    x23, x24, \op, 5
>> +     __frame_regs    x25, x26, \op, 7
>> +     __frame_regs    x27, x28, \op, 9
>> +
>> +     .ifc            \op, ld
>> +     .if             .Lframe_regcount == -1
>
> We could also have
>
>         .ifc            \op, st
>         .ifdef          .Lframe_regcount
>         .if             .Lframe_regcount != -1
>         .error [...]
>
> on the push side, which would trip on the first nested frame_push
> rather than waiting until a frame_pop appears.
>
> Your existing code could be retained to guard against a double pop.
>

Nice. I'll try that.

>> +     .error          "frame_push/frame_pop may not be nested"
>> +     .endif
>> +     ldp             x29, x30, [sp], #.Lframe_local_offset + .Lframe_extra
>> +     .set            .Lframe_regcount, -1
>> +     .endif
>> +     .endm
>> +
>>  #endif       /* __ASM_ASSEMBLER_H */
>> --
>> 2.11.0
>>
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel at lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

* [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT
  2018-03-28 17:18   ` Dave Martin
@ 2018-03-29  9:02     ` Ard Biesheuvel
  2018-03-29  9:36       ` Dave Martin
  0 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2018-03-29  9:02 UTC (permalink / raw)
  To: linux-arm-kernel

On 28 March 2018 at 18:18, Dave Martin <Dave.Martin@arm.com> wrote:
> On Wed, Mar 28, 2018 at 02:41:29PM +0200, Ard Biesheuvel wrote:
>> Add support macros to conditionally yield the NEON (and thus the CPU)
>> that may be called from the assembler code.
>>
>> In some cases, yielding the NEON involves saving and restoring a non
>> trivial amount of context (especially in the CRC folding algorithms),
>> and so the macro is split into three, and the code in between is only
>> executed when the yield path is taken, allowing the context to be preserved.
>> The third macro takes an optional label argument that marks the resume
>> path after a yield has been performed.
>
> Minor comments below, mostly just suggestions/observations.
>
> With the missing #include in asm-offsets.c fixed (if you think it's
> appropriate):
>
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
>

Thanks Dave

Replies below

>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>  arch/arm64/include/asm/assembler.h | 64 ++++++++++++++++++++
>>  arch/arm64/kernel/asm-offsets.c    |  2 +
>>  2 files changed, 66 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>> index d354eb7f2f0c..fb11514273d9 100644
>> --- a/arch/arm64/include/asm/assembler.h
>> +++ b/arch/arm64/include/asm/assembler.h
>> @@ -623,4 +623,68 @@ USER(\label, ic  ivau, \tmp2)                    // invalidate I line PoU
>>       .endif
>>       .endm
>>
>> +/*
>> + * Check whether to yield to another runnable task from kernel mode NEON code
>> + * (which runs with preemption disabled).
>> + *
>> + * if_will_cond_yield_neon
>> + *        // pre-yield patchup code
>> + * do_cond_yield_neon
>> + *        // post-yield patchup code
>> + * endif_yield_neon    <label>
>> + *
>> + * where <label> is optional, and marks the point where execution will resume
>> + * after a yield has been performed. If omitted, execution resumes right after
>> + * the endif_yield_neon invocation.
>
> Maybe add a comment describing cond_yield_neon, e.g.:
>
>  *
>  * As a convenience, in the case where no patchup code is required
>  * the above sequence may be abbreviated to:
>  *
>  * cond_yield_neon <label>
>

Makes sense. I will add that.

>> + *
>> + * Note that the patchup code does not support assembler directives that change
>> + * the output section, any use of such directives is undefined.
>> + *
>> + * The yield itself consists of the following:
>> + * - Check whether the preempt count is exactly 1, in which case disabling
>> + *   preemption once will make the task preemptible. If this is not the case,
>> + *   yielding is pointless.
>> + * - Check whether TIF_NEED_RESCHED is set, and if so, disable and re-enable
>> + *   kernel mode NEON (which will trigger a reschedule), and branch to the
>> + *   yield fixup code.
>> + *
>> + * This macro sequence clobbers x0, x1 and the flags register unconditionally,
>> + * and may clobber x2 .. x18 if the yield path is taken.
>> + */
>
> Does this mean that the pre-yield patchup code can safely refer to
> x2..x18, but the post-yield patchup code and the code at <label> (or
> otherwise immediately following endif_yield_neon) can't?
>

In theory, yes, but it doesn't really matter in practice. If you go
down the yield path, you will always run the pre and post sequences,
and the main code will need to keep state in x19 and up anyway if it
wants it to be preserved.

I should probably rephrase this to say that x0 .. x18 may be clobbered.

>> +
>> +     .macro          cond_yield_neon, lbl
>> +     if_will_cond_yield_neon
>> +     do_cond_yield_neon
>> +     endif_yield_neon        \lbl
>> +     .endm
>> +
>> +     .macro          if_will_cond_yield_neon
>> +#ifdef CONFIG_PREEMPT
>> +     get_thread_info x0
>> +     ldr             w1, [x0, #TSK_TI_PREEMPT]
>> +     ldr             x0, [x0, #TSK_TI_FLAGS]
>> +     cmp             w1, #PREEMPT_DISABLE_OFFSET
>> +     csel            x0, x0, xzr, eq
>> +     tbnz            x0, #TIF_NEED_RESCHED, .Lyield_\@       // needs rescheduling?
>> +#endif
>> +     /* fall through to endif_yield_neon */
>> +     .subsection     1
>
> Can we junk the code in this case rather than including it in the
> kernel, like
>
>         .section .discard.cond_yield_neon
>
> (this seems to conform to some notion of a standard discarded section
> name, see <asm-generic/vmlinux.lds.h>).  This additionally discards
> the do_cond_yield_neon invocation (which I guess is what we'd expect
> for a non-preemptible kernel?)
>
> If doing that discard, a note could be added in the comment block
> to warn people not to assume that the patchup code and any labels
> defined in it will definitely end up in the kernel image.
>
> Since the patchup sequences aren't likely to be many or large, it's
> not the end of the world if we don't do this discarding though.
>

I chose not to bother. These are handcrafted assembly files that are
usually kept in modules, which means the .text footprint is a 4k
multiple anyway, and the code is complex enough as it is, so
discarding ~10 instructions that have been moved out of the hot path
already doesn't seem that useful to me.

>> +.Lyield_\@ :
>> +     .endm
>> +
>> +     .macro          do_cond_yield_neon
>> +     bl              kernel_neon_end
>> +     bl              kernel_neon_begin
>> +     .endm
>> +
>> +     .macro          endif_yield_neon, lbl
>> +     .ifnb           \lbl
>> +     b               \lbl
>> +     .else
>> +     b               .Lyield_out_\@
>> +     .endif
>
> Should you include
>
>         .purgem do_cond_yield_neon
>         .purgem endif_yield_neon
>
> here?
>
> Otherwise, I think you would get macro redefinition errors if
> if_will_cond_yield_neon is used more than once in a given file.
>

if_will_cond_yield_neon does not define any macros itself, so this
shouldn't be a problem.

> You could maybe protect against nested and misordered macro uses by the
> following, though it feels a bit like overkill.  Alternatively you
> could use a magic symbol to record the current state, similarly to
> frame_{push,pop}.
>
>         .macro __if_will_cond_yield_neon
>         .purgem if_will_cond_yield_neon
>         //...
>
>         .macro do_cond_yield_neon
>         .purgem do_cond_yield_neon
>         //...
>
>         .macro endif_yield_neon
>         .purgem endif_yield_neon
>         //...
>
>         .macro if_will_cond_yield_neon
>         __if_will_cond_yield_neon
>         .endm // if_will_cond_yield_neon
>         .endm // endif_yield_neon
>         .endm // do_cond_yield_neon
>         .endm // __if_will_cond_yield_neon
>
>         .macro if_will_cond_yield_neon
>         __if_will_cond_yield_neon
>         .endm
>
>> +     .previous
>> +.Lyield_out_\@ :
>> +     .endm
>> +
>>  #endif       /* __ASM_ASSEMBLER_H */
>> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
>> index 1303e04110cd..1e2ea2e51acb 100644
>> --- a/arch/arm64/kernel/asm-offsets.c
>> +++ b/arch/arm64/kernel/asm-offsets.c
>> @@ -93,6 +93,8 @@ int main(void)
>>    DEFINE(DMA_TO_DEVICE,              DMA_TO_DEVICE);
>>    DEFINE(DMA_FROM_DEVICE,    DMA_FROM_DEVICE);
>>    BLANK();
>
> #include <linux/preempt.h> ?
>

Good point, will add.

>> +  DEFINE(PREEMPT_DISABLE_OFFSET, PREEMPT_DISABLE_OFFSET);
>> +  BLANK();
>
> [...]
>
> Cheers
> ---Dave

* [PATCH resend 1/2] arm64: assembler: add utility macros to push/pop stack frames
  2018-03-29  8:54     ` Ard Biesheuvel
@ 2018-03-29  9:28       ` Dave Martin
  0 siblings, 0 replies; 13+ messages in thread
From: Dave Martin @ 2018-03-29  9:28 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 29, 2018 at 09:54:31AM +0100, Ard Biesheuvel wrote:
> On 28 March 2018 at 17:34, Dave Martin <Dave.Martin@arm.com> wrote:
> > On Wed, Mar 28, 2018 at 02:41:28PM +0200, Ard Biesheuvel wrote:
> >> We are going to add code to all the NEON crypto routines that will
> >> turn them into non-leaf functions, so we need to manage the stack
> >> frames. To make this less tedious and error prone, add some macros
> >> that take the number of callee saved registers to preserve and the
> >
> > Apologies for the delay in looking at these patches...
> >
> > Anyway:
> >
> > Nit: for all instances of "callee saved" in this patch, do you mean "caller saved"?
> >
> 
> 'Caller saved' means that the caller needs to stack/unstack a register
> itself if it needs its value to be preserved across a function call.
> 'Callee saved' means that the caller can rely on the callee to ensure
> that the register will retain its value.
> 
> So we are dealing with the latter here, afaict. Or am I missing something?

Yes, I confused myself.  In preparation for calling kernel_neon_begin
etc., we would potentially need to save some caller-save registers.  But
that's not what the macros in this patch are about.

> > A few stylistic comments below, but I don't consider them essential to
> > address unless someone feels like it.
> >
> > Otherwise,
> > Reviewed-by: Dave Martin <Dave.Martin@arm.com>
> >
> 
> Thanks.
> 
> >> extra size to allocate in the stack frame (for locals) and emit
> >> the ldp/stp sequences.
> >>
> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >> ---
> >>  arch/arm64/include/asm/assembler.h | 58 ++++++++++++++++++++
> >>  1 file changed, 58 insertions(+)
> >>
> >> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h

[...]

> >> +     .macro          __frame, op, regcount, extra=0

[...]

> >> +     .ifc            \op, ld
> >> +     .if             .Lframe_regcount == -1
> >
> > We could also have
> >
> >         .ifc            \op, st
> >         .ifdef          .Lframe_regcount
> >         .if             .Lframe_regcount != -1
> >         .error [...]
> >
> > on the push side, which would trip on the first nested frame_push
> > rather than waiting until a frame_pop appears.
> >
> > Your existing code could be retained to guard against a double pop.
> >
> 
> Nice. I'll try that.

OK, cool

[...]

Cheers
---Dave

* [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT
  2018-03-29  9:02     ` Ard Biesheuvel
@ 2018-03-29  9:36       ` Dave Martin
  2018-03-29  9:59         ` Ard Biesheuvel
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Martin @ 2018-03-29  9:36 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 29, 2018 at 10:02:18AM +0100, Ard Biesheuvel wrote:
> On 28 March 2018 at 18:18, Dave Martin <Dave.Martin@arm.com> wrote:
> > On Wed, Mar 28, 2018 at 02:41:29PM +0200, Ard Biesheuvel wrote:
> >> Add support macros to conditionally yield the NEON (and thus the CPU)
> >> that may be called from the assembler code.
> >>
> >> In some cases, yielding the NEON involves saving and restoring a non
> >> trivial amount of context (especially in the CRC folding algorithms),
> >> and so the macro is split into three, and the code in between is only
> >> executed when the yield path is taken, allowing the context to be preserved.
> >> The third macro takes an optional label argument that marks the resume
> >> path after a yield has been performed.
> >
> > Minor comments below, mostly just suggestions/observations.
> >
> > With the missing #include in asm-offsets.c fixed (if you think it's
> > appropriate):
> >
> > Reviewed-by: Dave Martin <Dave.Martin@arm.com>
> >
> 
> Thanks Dave
> 
> Replies below
> 
> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >> ---
> >>  arch/arm64/include/asm/assembler.h | 64 ++++++++++++++++++++
> >>  arch/arm64/kernel/asm-offsets.c    |  2 +
> >>  2 files changed, 66 insertions(+)
> >>
> >> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> >> index d354eb7f2f0c..fb11514273d9 100644
> >> --- a/arch/arm64/include/asm/assembler.h
> >> +++ b/arch/arm64/include/asm/assembler.h
> >> @@ -623,4 +623,68 @@ USER(\label, ic  ivau, \tmp2)                    // invalidate I line PoU
> >>       .endif
> >>       .endm
> >>
> >> +/*
> >> + * Check whether to yield to another runnable task from kernel mode NEON code
> >> + * (which runs with preemption disabled).
> >> + *
> >> + * if_will_cond_yield_neon
> >> + *        // pre-yield patchup code
> >> + * do_cond_yield_neon
> >> + *        // post-yield patchup code
> >> + * endif_yield_neon    <label>
> >> + *
> >> + * where <label> is optional, and marks the point where execution will resume
> >> + * after a yield has been performed. If omitted, execution resumes right after
> >> + * the endif_yield_neon invocation.
> >
> > Maybe add a comment describing cond_yield_neon, e.g.:
> >
> >  *
> >  * As a convenience, in the case where no patchup code is required
> >  * the above sequence may be abbreviated to:
> >  *
> >  * cond_yield_neon <label>
> >
> 
> Makes sense. I will add that.
> 
> >> + *
> >> + * Note that the patchup code does not support assembler directives that change
> >> + * the output section, any use of such directives is undefined.
> >> + *
> >> + * The yield itself consists of the following:
> >> + * - Check whether the preempt count is exactly 1, in which case disabling
> >> + *   preemption once will make the task preemptible. If this is not the case,
> >> + *   yielding is pointless.
> >> + * - Check whether TIF_NEED_RESCHED is set, and if so, disable and re-enable
> >> + *   kernel mode NEON (which will trigger a reschedule), and branch to the
> >> + *   yield fixup code.
> >> + *
> >> + * This macro sequence clobbers x0, x1 and the flags register unconditionally,
> >> + * and may clobber x2 .. x18 if the yield path is taken.
> >> + */
> >
> > Does this mean that the pre-yield patchup code can safely refer to
> > x2..x18, but the post-yield patchup code and the code at <label> (or
> > otherwise immediately following endif_yield_neon) can't?
> >
> 
> In theory, yes, but it doesn't really matter in practice. If you go
> down the yield path, you will always run the pre and post sequences,
> and the main code will need to keep state in x19 and up anyway if it
> wants it to be preserved.

True.

> I should probably rephrase this to say that x0 .. x18 may be clobbered.

Sure, that would be simpler.  Or maybe just say that the set of clobbers
is the same as for a function call -- this would cover NZCV for example.

> >> +
> >> +     .macro          cond_yield_neon, lbl
> >> +     if_will_cond_yield_neon
> >> +     do_cond_yield_neon
> >> +     endif_yield_neon        \lbl
> >> +     .endm
> >> +
> >> +     .macro          if_will_cond_yield_neon
> >> +#ifdef CONFIG_PREEMPT
> >> +     get_thread_info x0
> >> +     ldr             w1, [x0, #TSK_TI_PREEMPT]
> >> +     ldr             x0, [x0, #TSK_TI_FLAGS]
> >> +     cmp             w1, #PREEMPT_DISABLE_OFFSET
> >> +     csel            x0, x0, xzr, eq
> >> +     tbnz            x0, #TIF_NEED_RESCHED, .Lyield_\@       // needs rescheduling?
> >> +#endif
> >> +     /* fall through to endif_yield_neon */
> >> +     .subsection     1
> >
> > Can we junk the code in this case rather than including it in the
> > kernel, like
> >
> >         .section .discard.cond_yield_neon
> >
> > (this seems to conform to some notion of a standard discarded section
> > name, see <asm-generic/vmlinux.lds.h>).  This additionally discards
> > the do_cond_yield_neon invocation (which I guess is what we'd expect
> > for a non-preemptible kernel?)
> >
> > If doing that discard, a note could be added in the comment block
> > to warn people not to assume that the patchup code and any labels
> > defined in it will definitely end up in the kernel image.
> >
> > Since the patchup sequences aren't likely to be many or large, it's
> > not the end of the world if we don't do this discarding though.
> >
> 
> I chose not to bother. These are handcrafted assembly files that are
> usually kept in modules, which means the .text footprint is a 4k
> multiple anyway, and the code is complex enough as it is, so
> discarding ~10 instructions that have been moved out of the hot path
> already doesn't seem that useful to me.

Agreed.  Do you know who is building CONFIG_PREEMPT=n kernels these
days?

> >> +.Lyield_\@ :
> >> +     .endm
> >> +
> >> +     .macro          do_cond_yield_neon
> >> +     bl              kernel_neon_end
> >> +     bl              kernel_neon_begin
> >> +     .endm
> >> +
> >> +     .macro          endif_yield_neon, lbl
> >> +     .ifnb           \lbl
> >> +     b               \lbl
> >> +     .else
> >> +     b               .Lyield_out_\@
> >> +     .endif
> >
> > Should you include
> >
> >         .purgem do_cond_yield_neon
> >         .purgem endif_yield_neon
> >
> > here?
> >
> > Otherwise, I think you would get macro redefinition errors if
> > if_will_cond_yield_neon is used more than once in a given file.
> >
> 
> if_will_cond_yield_neon does not define any macros itself, so this
> shouldn't be a problem.

You're right.  I skipped an .endm for some reason while reading and
decided there were nested macros here.  But there aren't.

Protecting against misuse would be "nice", but people using them already
need to know what they're doing, so it's low-priority and something that
could be added in a later patch.  So I agree that there's no need to add
that here.

[...]

Cheers
---Dave

* [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT
  2018-03-29  9:36       ` Dave Martin
@ 2018-03-29  9:59         ` Ard Biesheuvel
  2018-03-29 11:12           ` Dave Martin
  0 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2018-03-29  9:59 UTC (permalink / raw)
  To: linux-arm-kernel

On 29 March 2018 at 10:36, Dave Martin <Dave.Martin@arm.com> wrote:
> On Thu, Mar 29, 2018 at 10:02:18AM +0100, Ard Biesheuvel wrote:
>> On 28 March 2018 at 18:18, Dave Martin <Dave.Martin@arm.com> wrote:
>> > On Wed, Mar 28, 2018 at 02:41:29PM +0200, Ard Biesheuvel wrote:
>> >> Add support macros to conditionally yield the NEON (and thus the CPU)
>> >> that may be called from the assembler code.
>> >>
>> >> In some cases, yielding the NEON involves saving and restoring a non
>> >> trivial amount of context (especially in the CRC folding algorithms),
>> >> and so the macro is split into three, and the code in between is only
>> >> executed when the yield path is taken, allowing the context to be preserved.
>> >> The third macro takes an optional label argument that marks the resume
>> >> path after a yield has been performed.
>> >
>> > Minor comments below, mostly just suggestions/observations.
>> >
>> > With the missing #include in asm-offsets.c fixed (if you think it's
>> > appropriate):
>> >
>> > Reviewed-by: Dave Martin <Dave.Martin@arm.com>
>> >
>>
>> Thanks Dave
>>
>> Replies below
>>
>> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> >> ---
>> >>  arch/arm64/include/asm/assembler.h | 64 ++++++++++++++++++++
>> >>  arch/arm64/kernel/asm-offsets.c    |  2 +
>> >>  2 files changed, 66 insertions(+)
>> >>
>> >> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>> >> index d354eb7f2f0c..fb11514273d9 100644
>> >> --- a/arch/arm64/include/asm/assembler.h
>> >> +++ b/arch/arm64/include/asm/assembler.h
>> >> @@ -623,4 +623,68 @@ USER(\label, ic  ivau, \tmp2)                    // invalidate I line PoU
>> >>       .endif
>> >>       .endm
>> >>
>> >> +/*
>> >> + * Check whether to yield to another runnable task from kernel mode NEON code
>> >> + * (which runs with preemption disabled).
>> >> + *
>> >> + * if_will_cond_yield_neon
>> >> + *        // pre-yield patchup code
>> >> + * do_cond_yield_neon
>> >> + *        // post-yield patchup code
>> >> + * endif_yield_neon    <label>
>> >> + *
>> >> + * where <label> is optional, and marks the point where execution will resume
>> >> + * after a yield has been performed. If omitted, execution resumes right after
>> >> + * the endif_yield_neon invocation.
>> >
>> > Maybe add a comment describing cond_yield_neon, e.g.:
>> >
>> >  *
>> >  * As a convenience, in the case where no patchup code is required
>> >  * the above sequence may be abbreviated to:
>> >  *
>> >  * cond_yield_neon <label>
>> >
>>
>> Makes sense. I will add that.
>>
>> >> + *
>> >> + * Note that the patchup code does not support assembler directives that change
>> >> + * the output section, any use of such directives is undefined.
>> >> + *
>> >> + * The yield itself consists of the following:
>> >> + * - Check whether the preempt count is exactly 1, in which case disabling
>> >> + *   preemption once will make the task preemptible. If this is not the case,
>> >> + *   yielding is pointless.
>> >> + * - Check whether TIF_NEED_RESCHED is set, and if so, disable and re-enable
>> >> + *   kernel mode NEON (which will trigger a reschedule), and branch to the
>> >> + *   yield fixup code.
>> >> + *
>> >> + * This macro sequence clobbers x0, x1 and the flags register unconditionally,
>> >> + * and may clobber x2 .. x18 if the yield path is taken.
>> >> + */
>> >
>> > Does this mean that the pre-yield patchup code can safely refer to
>> > x2..x18, but the post-yield patchup code and the code at <label> (or
>> > otherwise immediately following endif_yield_neon) can't?
>> >
>>
>> In theory, yes, but it doesn't really matter in practice. If you go
>> down the yield path, you will always run the pre and post sequences,
>> and the main code will need to keep state in x19 and up anyway if it
>> wants it to be preserved.
>
> True.
>
>> I should probably rephrase this to say that x0 .. x18 may be clobbered.
>
> Sure, that would be simpler.  Or maybe just say that the set of clobbers
> is the same as for a function call -- this would cover NZCV for example.
>

Even better.

>> >> +
>> >> +     .macro          cond_yield_neon, lbl
>> >> +     if_will_cond_yield_neon
>> >> +     do_cond_yield_neon
>> >> +     endif_yield_neon        \lbl
>> >> +     .endm
>> >> +
>> >> +     .macro          if_will_cond_yield_neon
>> >> +#ifdef CONFIG_PREEMPT
>> >> +     get_thread_info x0
>> >> +     ldr             w1, [x0, #TSK_TI_PREEMPT]
>> >> +     ldr             x0, [x0, #TSK_TI_FLAGS]
>> >> +     cmp             w1, #PREEMPT_DISABLE_OFFSET
>> >> +     csel            x0, x0, xzr, eq
>> >> +     tbnz            x0, #TIF_NEED_RESCHED, .Lyield_\@       // needs rescheduling?
>> >> +#endif
>> >> +     /* fall through to endif_yield_neon */
>> >> +     .subsection     1
>> >
>> > Can we junk the code in this case rather than including it in the
>> > kernel, like
>> >
>> >         .section .discard.cond_yield_neon
>> >
>> > (this seems to conform to some notion of a standard discarded section
>> > name, see <asm-generic/vmlinux.lds.h>).  This additionally discards
>> > the do_cond_yield_neon invocation (which I guess is what we'd expect
>> > for a non-preemptible kernel?)
>> >
>> > If doing that discard, a note could be added in the comment block
>> > to warn people not to assume that the patchup code and any labels
>> > defined in it will definitely end up in the kernel image.
>> >
>> > Since the patchup sequences aren't likely to be many or large, it's
>> > not the end of the world if we don't do this discarding though.
>> >
>>
>> I chose not to bother. These are handcrafted assembly files that are
>> usually kept in modules, which means the .text footprint is a 4k
>> multiple anyway, and the code is complex enough as it is, so
>> discarding ~10 instructions that have been moved out of the hot path
>> already doesn't seem that useful to me.
>
> Agreed.  Do you know who is building CONFIG_PREEMPT=n kernels these
> days?
>

AFAIK most distro kernels use voluntary preemption, so they'd still
benefit from this.

>> >> +.Lyield_\@ :
>> >> +     .endm
>> >> +
>> >> +     .macro          do_cond_yield_neon
>> >> +     bl              kernel_neon_end
>> >> +     bl              kernel_neon_begin
>> >> +     .endm
>> >> +
>> >> +     .macro          endif_yield_neon, lbl
>> >> +     .ifnb           \lbl
>> >> +     b               \lbl
>> >> +     .else
>> >> +     b               .Lyield_out_\@
>> >> +     .endif
>> >
>> > Should you include
>> >
>> >         .purgem do_cond_yield_neon
>> >         .purgem endif_yield_neon
>> >
>> > here?
>> >
>> > Otherwise, I think you would get macro redefinition errors if
>> > if_will_cond_yield_neon is used more than once in a given file.
>> >
>>
>> if_will_cond_yield_neon does not define any macros itself, so this
>> shouldn't be a problem.
>
> You're right.  I skipped an .endm for some reason while reading and
> decided there were nested macros here.  But there aren't.
>
> Protecting against misuse would be "nice", but people using them already
> need to know what they're doing, so it's low-priority and something that
> could be added in a later patch.  So I agree that there's no need to add
> that here.
>

OK.

I will respin with the minor issues addressed and your R-b added, and
repost before the end of the day.

Will, hopefully you're still ok with picking this up for v4.17? I'd
hate to postpone the crypto pieces that depend on it to v4.19

* [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT
  2018-03-29  9:59         ` Ard Biesheuvel
@ 2018-03-29 11:12           ` Dave Martin
  2018-03-29 11:36             ` Ard Biesheuvel
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Martin @ 2018-03-29 11:12 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 29, 2018 at 10:59:28AM +0100, Ard Biesheuvel wrote:
> On 29 March 2018 at 10:36, Dave Martin <Dave.Martin@arm.com> wrote:
> > On Thu, Mar 29, 2018 at 10:02:18AM +0100, Ard Biesheuvel wrote:
> >> On 28 March 2018 at 18:18, Dave Martin <Dave.Martin@arm.com> wrote:

[...]

> >> I should probably rephrase this to say that x0 .. x18 may be clobbered.
> >
> > Sure, that would be simpler.  Or maybe just say that the set of clobbers
> > is the same as for a function call -- this would cover NZCV for example.
> >
> 
> Even better.

[...]

> >> > Since the patchup sequences aren't likely to be many or large, it's
> >> > not the end of the world if we don't do this discarding though.
> >> >
> >>
> >> I chose not to bother. These are handcrafted assembly files that are
> >> usually kept in modules, which means the .text footprint is a 4k
> >> multiple anyway, and the code is complex enough as it is, so
> >> discarding ~10 instructions that have been moved out of the hot path
> >> already doesn't seem that useful to me.
> >
> > Agreed.  Do you know who is building CONFIG_PREEMPT=n kernels these
> > days?
> >
> 
> AFAIK most distro kernels use voluntary preemption, so they'd still
> benefit from this.

OK, and given the size of the typical distro kernel, I doubt anyone will
lose sleep over a couple of hundred extra bytes.

I might try to hack it up later just for fun, just to see whether it
works.

[...]

> >> > Should you include
> >> >
> >> >         .purgem do_cond_yield_neon
> >> >         .purgem endif_yield_neon
> >> >
> >> > here?
> >> >
> >> > Otherwise, I think you would get macro redefinition errors if
> >> > if_will_cond_yield_neon is used more than once in a given file.
> >> >
> >>
> >> if_will_cond_yield_neon does not define any macros itself, so this
> >> shouldn't be a problem.
> >
> > You're right.  I skipped an .endm for some reason while reading and
> > decided there were nested macros here.  But there aren't.
> >
> > Protecting against misuse would be "nice", but people using them already
> > need to know what they're doing, so it's low-priority and something that
> > could be added in a later patch.  So I agree that there's no need to add
> > that here.
> >
> 
> OK.
> 
> I will respin with the minor issues addressed and your R-b added, and
> repost before the end of the day.

Sounds good to me.

Cheers
---Dave
 
> Will, hopefully you're still ok with picking this up for v4.17? I'd
> hate to postpone the crypto pieces that depend on it to v4.19

[...]

* [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT
  2018-03-29 11:12           ` Dave Martin
@ 2018-03-29 11:36             ` Ard Biesheuvel
  2018-03-29 12:42               ` Dave Martin
  0 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2018-03-29 11:36 UTC (permalink / raw)
  To: linux-arm-kernel


> On 29 Mar 2018, at 13:12, Dave Martin <Dave.Martin@arm.com> wrote:
> 
>> On Thu, Mar 29, 2018 at 10:59:28AM +0100, Ard Biesheuvel wrote:
>>> On 29 March 2018 at 10:36, Dave Martin <Dave.Martin@arm.com> wrote:
>>>> On Thu, Mar 29, 2018 at 10:02:18AM +0100, Ard Biesheuvel wrote:
>>>> On 28 March 2018 at 18:18, Dave Martin <Dave.Martin@arm.com> wrote:
> 
> [...]
> 
>>>> I should probably rephrase this to say that x0 .. x18 may be clobbered.
>>> 
>>> Sure, that would be simpler.  Or maybe just say that the set of clobbers
>>> is the same as for a function call -- this would cover NZCV for example.
>>> 
>> 
>> Even better.
> 
> [...]
> 
>>>>> Since the patchup sequences aren't likely to be many or large, it's
>>>>> not the end of the world if we don't do this discarding though.
>>>>> 
>>>> 
>>>> I chose not to bother. These are handcrafted assembly files that are
>>>> usually kept in modules, which means the .text footprint is a 4k
>>>> multiple anyway, and the code is complex enough as it is, so
>>>> discarding ~10 instructions that have been moved out of the hot path
>>>> already doesn't seem that useful to me.
>>> 
>>> Agreed.  Do you know who is building CONFIG_PREEMPT=n kernels these
>>> days?
>>> 
>> 
>> AFAIK most distro kernels use voluntary preemption, so they'd still
>> benefit from this.
> 
> OK, and given the size of the typical distro kernel, I doubt anyone will
> lose sleep over a couple of hundred extra bytes.
> 

My point was that this is /not/ dead code on typical distro kernels given that this approach should work equally under voluntary preemption.


> I might try to hack it up later just for fun, just to see whether it
> works.
> 
> [...]
> 
>>>>> Should you include
>>>>> 
>>>>>        .purgem do_cond_yield_neon
>>>>>        .purgem endif_yield_neon
>>>>> 
>>>>> here?
>>>>> 
>>>>> Otherwise, I think you would get macro redefinition errors if
>>>>> if_will_cond_yield_neon is used more than once in a given file.
>>>>> 
>>>> 
>>>> if_will_cond_yield_neon does not define any macros itself, so this
>>>> shouldn't be a problem.
>>> 
>>> You're right.  I skipped an .endm for some reason while reading and
>>> decided there were nested macros here.  But there aren't.
>>> 
>>> Protecting against misuse would be "nice", but people using them already
>>> need to know what they're doing, so it's low-priority and something that
>>> could be added in a later patch.  So I agree that there's no need to add
>>> that here.
>>> 
>> 
>> OK.
>> 
>> I will respin with the minor issues addressed and your R-b added, and
>> repost before the end of the day.
> 
> Sounds good to me.
> 
> Cheers
> ---Dave
> 
>> Will, hopefully you're still ok with picking this up for v4.17? I'd
>> hate to postpone the crypto pieces that depend on it to v4.19
> 
> [...]

* [PATCH resend 2/2] arm64: assembler: add macros to conditionally yield the NEON under PREEMPT
  2018-03-29 11:36             ` Ard Biesheuvel
@ 2018-03-29 12:42               ` Dave Martin
  0 siblings, 0 replies; 13+ messages in thread
From: Dave Martin @ 2018-03-29 12:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 29, 2018 at 01:36:44PM +0200, Ard Biesheuvel wrote:
> 
> > On 29 Mar 2018, at 13:12, Dave Martin <Dave.Martin@arm.com> wrote:
> > 
> >> On Thu, Mar 29, 2018 at 10:59:28AM +0100, Ard Biesheuvel wrote:
> >>> On 29 March 2018 at 10:36, Dave Martin <Dave.Martin@arm.com> wrote:
> >>>> On Thu, Mar 29, 2018 at 10:02:18AM +0100, Ard Biesheuvel wrote:
> >>>> On 28 March 2018 at 18:18, Dave Martin <Dave.Martin@arm.com> wrote:

[...]

> >>>>> Since the patchup sequences aren't likely to be many or large, it's
> >>>>> not the end of the world if we don't do this discarding though.
> >>>>> 
> >>>> 
> >>>> I chose not to bother. These are handcrafted assembly files that are
> >>>> usually kept in modules, which means the .text footprint is a 4k
> >>>> multiple anyway, and the code is complex enough as it is, so
> >>>> discarding ~10 instructions that have been moved out of the hot path
> >>>> already doesn't seem that useful to me.
> >>> 
> >>> Agreed.  Do you know who is building CONFIG_PREEMPT=n kernels these
> >>> days?
> >>> 
> >> 
> >> AFAIK most distro kernels use voluntary preemption, so they'd still
> >> benefit from this.
> > 
> > OK, and given the size of the typical distro kernel, I doubt anyone will
> > lose sleep over a couple of hundred extra bytes.
> > 
> 
> My point was that this is /not/ dead code on typical distro kernels
> given that this approach should work equally under voluntary preemption.

I think CONFIG_PREEMPT and CONFIG_PREEMPT_VOLUNTARY are mutually
exclusive, so in the PREEMPT_VOLUNTARY case the yield path code will
get compiled out here.

But that's probably the right thing to do IIUC: unless we introduce an
explicit preemption point into do_cond_yield_neon, voluntary preemption
won't occur anyway.  And the crypto API probably doesn't expect us to
do that... especially if we're in a softirq.

[...]

Cheers
---Dave
