* [PATCH v3 0/5] arm64: Run kernel mode NEON with preemption enabled
@ 2023-11-27 12:23 ` Ard Biesheuvel
  0 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2023-11-27 12:23 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-crypto, Ard Biesheuvel, Marc Zyngier, Will Deacon,
	Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

From: Ard Biesheuvel <ardb@kernel.org>

Currently, kernel mode NEON (SIMD) support is implemented in a way that
requires preemption to be disabled while the SIMD registers are live.
The reason for this is that those registers are not in the set that is
preserved/restored on exception entry/exit and context switch, as this
would impact performance generally, even for workloads where kernel mode
SIMD is not the bottleneck.

However, doing substantial work with preemption disabled is not great,
as it affects scheduling latency, which is especially problematic for
real-time use cases. So ideally, we should keep preemption enabled when
we can, and find another way to ensure that this does not corrupt the
NEON register state of in-kernel SIMD users.

This series implements a suggestion by Mark Rutland, and introduces a
thread_info flag TIF_USING_KMODE_FPSIMD, which indicates to the thread
switch machinery that the task in question has live kernel mode SIMD
state which needs to be preserved and restored. The space needed for
this is allocated in thread_struct. (*)
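
Condensed for illustration only (the real hook is the updated
fpsimd_thread_switch() in patch #2), the switch-out path then looks
roughly like this:

  /* sketch -- see patch #2 for the actual implementation */
  if (test_thread_flag(TIF_USING_KMODE_FPSIMD))
          fpsimd_save_kernel_state(current);      /* into thread_struct */
  else
          fpsimd_save_user_state();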

Given that currently, we run kernel mode NEON with softirqs disabled (to
avoid the need for preserving kernel mode NEON context belonging to task
context while the SIMD unit is being used by code running in softirq
context), just removing the preempt_disable/enable calls is not
sufficient, and we also need to leave softirqs enabled. This means that
we may need to preserve kernel mode NEON state not only on a context
switch, but also when code running in softirq context takes ownership of
the SIMD unit. This is straightforward once we add the scratch
space to thread_struct. (On PREEMPT_RT, softirqs execute with preemption
enabled, making kernel mode FPSIMD in softirq context preemptible as
well. We rely on the fact that the task that hosts the softirq dispatch
logic does not itself use kernel mode FPSIMD in task context to ensure
that there is only a single kernel mode FPSIMD state that may need to be
preserved and restored.)
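
For illustration, this is the kind of nesting that needs to keep working
on non-PREEMPT_RT (my_task_work() and my_softirq_work() are hypothetical
users, not code from this series):

  static void my_task_work(void)        /* task context */
  {
          kernel_neon_begin();          /* sets TIF_USING_KMODE_FPSIMD */
          /* ... kernel mode NEON, now preemptible ... */
          kernel_neon_end();
  }

  static void my_softirq_work(void)     /* may interrupt my_task_work() */
  {
          kernel_neon_begin();          /* saves the task's kernel mode
                                           state into thread_struct */
          /* ... kernel mode NEON ... */
          kernel_neon_end();            /* reloads the task's state */
  }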

(*) We might decide to allocate this space (~512 bytes) dynamically, if
the thread_struct memory footprint causes issues. However, we should
also explore doing the same for the user space FPSIMD state, as kernel
threads never return to user space and have no need for this allocation.

v3:
- add patch to drop yield logic from crypto C glue code
- add R-b from Mark

v2:
- tweak some commit logs for clarity
- integrate with the existing lazy restore logic
- add Mark's R-b to patch #1

Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Ard Biesheuvel (5):
  arm64: fpsimd: Drop unneeded 'busy' flag
  arm64: fpsimd: Preserve/restore kernel mode NEON at context switch
  arm64: fpsimd: Implement lazy restore for kernel mode FPSIMD
  arm64: crypto: Remove conditional yield logic
  arm64: crypto: Remove FPSIMD yield logic from glue code

 arch/arm64/crypto/aes-ce-ccm-glue.c      |   5 -
 arch/arm64/crypto/aes-glue.c             |  21 +--
 arch/arm64/crypto/aes-modes.S            |   2 -
 arch/arm64/crypto/chacha-neon-glue.c     |  14 +-
 arch/arm64/crypto/crct10dif-ce-glue.c    |  30 +---
 arch/arm64/crypto/nhpoly1305-neon-glue.c |  12 +-
 arch/arm64/crypto/poly1305-glue.c        |  15 +-
 arch/arm64/crypto/polyval-ce-glue.c      |   5 +-
 arch/arm64/crypto/sha1-ce-core.S         |   6 +-
 arch/arm64/crypto/sha1-ce-glue.c         |  19 +--
 arch/arm64/crypto/sha2-ce-core.S         |   6 +-
 arch/arm64/crypto/sha2-ce-glue.c         |  19 +--
 arch/arm64/crypto/sha3-ce-core.S         |   6 +-
 arch/arm64/crypto/sha3-ce-glue.c         |  14 +-
 arch/arm64/crypto/sha512-ce-core.S       |   8 +-
 arch/arm64/crypto/sha512-ce-glue.c       |  16 +-
 arch/arm64/include/asm/assembler.h       |  29 ----
 arch/arm64/include/asm/processor.h       |   3 +
 arch/arm64/include/asm/simd.h            |  11 +-
 arch/arm64/include/asm/thread_info.h     |   1 +
 arch/arm64/kernel/asm-offsets.c          |   4 -
 arch/arm64/kernel/fpsimd.c               | 163 +++++++++++++-------
 22 files changed, 165 insertions(+), 244 deletions(-)

-- 
2.43.0.rc1.413.gea7ed67945-goog


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v3 1/5] arm64: fpsimd: Drop unneeded 'busy' flag
  2023-11-27 12:23 ` Ard Biesheuvel
@ 2023-11-27 12:23   ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2023-11-27 12:23 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-crypto, Ard Biesheuvel, Marc Zyngier, Will Deacon,
	Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

From: Ard Biesheuvel <ardb@kernel.org>

Kernel mode NEON will preserve the user mode FPSIMD state by saving it
into the task struct before clobbering the registers. In order to avoid
the need for preserving kernel mode state too, we disallow nested use of
kernel mode NEON, i.e., use in softirq context while the interrupted
task context was using kernel mode NEON too.

Originally, this policy was implemented using a per-CPU flag which was
exposed via may_use_simd(), requiring users of kernel mode NEON to
deal with the possibility that it might return false, and to provide
both NEON and non-NEON code paths. This policy was changed by commit
13150149aa6ded1 ("arm64: fpsimd: run kernel mode NEON with softirqs
disabled"), and now, softirq processing is disabled entirely instead,
and so may_use_simd() can never fail when called from task or softirq
context.

This means we can drop the fpsimd_context_busy flag entirely, and
instead, ensure that we disable softirq processing in places where we
formerly relied on the flag for preventing races in the FPSIMD preserve
routines.
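
For context, the typical caller pattern (a sketch, not part of this
patch) keeps its scalar fallback, which is still needed when called with
IRQs disabled or from hardirq/NMI context:

  if (may_use_simd()) {
          kernel_neon_begin();
          /* ... NEON implementation ... */
          kernel_neon_end();
  } else {
          /* ... scalar fallback ... */
  }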

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/simd.h | 11 +---
 arch/arm64/kernel/fpsimd.c    | 53 +++++---------------
 2 files changed, 13 insertions(+), 51 deletions(-)

diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
index 6a75d7ecdcaa..8e86c9e70e48 100644
--- a/arch/arm64/include/asm/simd.h
+++ b/arch/arm64/include/asm/simd.h
@@ -12,8 +12,6 @@
 #include <linux/preempt.h>
 #include <linux/types.h>
 
-DECLARE_PER_CPU(bool, fpsimd_context_busy);
-
 #ifdef CONFIG_KERNEL_MODE_NEON
 
 /*
@@ -28,17 +26,10 @@ static __must_check inline bool may_use_simd(void)
 	/*
 	 * We must make sure that the SVE has been initialized properly
 	 * before using the SIMD in kernel.
-	 * fpsimd_context_busy is only set while preemption is disabled,
-	 * and is clear whenever preemption is enabled. Since
-	 * this_cpu_read() is atomic w.r.t. preemption, fpsimd_context_busy
-	 * cannot change under our feet -- if it's set we cannot be
-	 * migrated, and if it's clear we cannot be migrated to a CPU
-	 * where it is set.
 	 */
 	return !WARN_ON(!system_capabilities_finalized()) &&
 	       system_supports_fpsimd() &&
-	       !in_hardirq() && !irqs_disabled() && !in_nmi() &&
-	       !this_cpu_read(fpsimd_context_busy);
+	       !in_hardirq() && !irqs_disabled() && !in_nmi();
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 1559c706d32d..ccc4a78a70e4 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -85,13 +85,13 @@
  * softirq kicks in. Upon vcpu_put(), KVM will save the vcpu FP state and
  * flag the register state as invalid.
  *
- * In order to allow softirq handlers to use FPSIMD, kernel_neon_begin() may
- * save the task's FPSIMD context back to task_struct from softirq context.
- * To prevent this from racing with the manipulation of the task's FPSIMD state
- * from task context and thereby corrupting the state, it is necessary to
- * protect any manipulation of a task's fpsimd_state or TIF_FOREIGN_FPSTATE
- * flag with {, __}get_cpu_fpsimd_context(). This will still allow softirqs to
- * run but prevent them to use FPSIMD.
+ * In order to allow softirq handlers to use FPSIMD, kernel_neon_begin() may be
+ * called from softirq context, which will save the task's FPSIMD context back
+ * to task_struct. To prevent this from racing with the manipulation of the
+ * task's FPSIMD state from task context and thereby corrupting the state, it
+ * is necessary to protect any manipulation of a task's fpsimd_state or
+ * TIF_FOREIGN_FPSTATE flag with get_cpu_fpsimd_context(), which will suspend
+ * softirq servicing entirely until put_cpu_fpsimd_context() is called.
  *
  * For a certain task, the sequence may look something like this:
  * - the task gets scheduled in; if both the task's fpsimd_cpu field
@@ -209,27 +209,14 @@ static inline void sme_free(struct task_struct *t) { }
 
 #endif
 
-DEFINE_PER_CPU(bool, fpsimd_context_busy);
-EXPORT_PER_CPU_SYMBOL(fpsimd_context_busy);
-
 static void fpsimd_bind_task_to_cpu(void);
 
-static void __get_cpu_fpsimd_context(void)
-{
-	bool busy = __this_cpu_xchg(fpsimd_context_busy, true);
-
-	WARN_ON(busy);
-}
-
 /*
  * Claim ownership of the CPU FPSIMD context for use by the calling context.
  *
  * The caller may freely manipulate the FPSIMD context metadata until
  * put_cpu_fpsimd_context() is called.
  *
- * The double-underscore version must only be called if you know the task
- * can't be preempted.
- *
  * On RT kernels local_bh_disable() is not sufficient because it only
  * serializes soft interrupt related sections via a local lock, but stays
  * preemptible. Disabling preemption is the right choice here as bottom
@@ -242,14 +229,6 @@ static void get_cpu_fpsimd_context(void)
 		local_bh_disable();
 	else
 		preempt_disable();
-	__get_cpu_fpsimd_context();
-}
-
-static void __put_cpu_fpsimd_context(void)
-{
-	bool busy = __this_cpu_xchg(fpsimd_context_busy, false);
-
-	WARN_ON(!busy); /* No matching get_cpu_fpsimd_context()? */
 }
 
 /*
@@ -261,18 +240,12 @@ static void __put_cpu_fpsimd_context(void)
  */
 static void put_cpu_fpsimd_context(void)
 {
-	__put_cpu_fpsimd_context();
 	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
 		local_bh_enable();
 	else
 		preempt_enable();
 }
 
-static bool have_cpu_fpsimd_context(void)
-{
-	return !preemptible() && __this_cpu_read(fpsimd_context_busy);
-}
-
 unsigned int task_get_vl(const struct task_struct *task, enum vec_type type)
 {
 	return task->thread.vl[type];
@@ -383,7 +356,7 @@ static void task_fpsimd_load(void)
 	bool restore_ffr;
 
 	WARN_ON(!system_supports_fpsimd());
-	WARN_ON(!have_cpu_fpsimd_context());
+	WARN_ON(preemptible());
 
 	if (system_supports_sve() || system_supports_sme()) {
 		switch (current->thread.fp_type) {
@@ -467,7 +440,7 @@ static void fpsimd_save(void)
 	unsigned int vl;
 
 	WARN_ON(!system_supports_fpsimd());
-	WARN_ON(!have_cpu_fpsimd_context());
+	WARN_ON(preemptible());
 
 	if (test_thread_flag(TIF_FOREIGN_FPSTATE))
 		return;
@@ -1507,7 +1480,7 @@ void fpsimd_thread_switch(struct task_struct *next)
 	if (!system_supports_fpsimd())
 		return;
 
-	__get_cpu_fpsimd_context();
+	WARN_ON_ONCE(!irqs_disabled());
 
 	/* Save unsaved fpsimd state, if any: */
 	fpsimd_save();
@@ -1523,8 +1496,6 @@ void fpsimd_thread_switch(struct task_struct *next)
 
 	update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
 			       wrong_task || wrong_cpu);
-
-	__put_cpu_fpsimd_context();
 }
 
 static void fpsimd_flush_thread_vl(enum vec_type type)
@@ -1829,10 +1800,10 @@ void fpsimd_save_and_flush_cpu_state(void)
 	if (!system_supports_fpsimd())
 		return;
 	WARN_ON(preemptible());
-	__get_cpu_fpsimd_context();
+	get_cpu_fpsimd_context();
 	fpsimd_save();
 	fpsimd_flush_cpu_state();
-	__put_cpu_fpsimd_context();
+	put_cpu_fpsimd_context();
 }
 
 #ifdef CONFIG_KERNEL_MODE_NEON
-- 
2.43.0.rc1.413.gea7ed67945-goog


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 2/5] arm64: fpsimd: Preserve/restore kernel mode NEON at context switch
  2023-11-27 12:23 ` Ard Biesheuvel
@ 2023-11-27 12:23   ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2023-11-27 12:23 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-crypto, Ard Biesheuvel, Marc Zyngier, Will Deacon,
	Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

From: Ard Biesheuvel <ardb@kernel.org>

Currently, the FPSIMD register file is not preserved and restored along
with the general registers on exception entry/exit or context switch.
For this reason, we disable preemption when enabling FPSIMD for kernel
mode use in task context, and suspend the processing of softirqs so that
there are no concurrent uses in the kernel. (Kernel mode FPSIMD may not
be used at all in other contexts).

Disabling preemption while doing CPU intensive work on inputs of
potentially unbounded size is bad for real-time performance, which is
why we try to ensure that SIMD crypto code does not operate on more
than ~4k at a time; this is an arbitrary limit that requires assembler
code to implement efficiently.

We can avoid the need for disabling preemption if we can ensure that any
in-kernel users of the NEON will not lose the FPSIMD register state
across a context switch. And given that disabling softirqs implicitly
disables preemption as well, we will also have to ensure that a softirq
that runs code using FPSIMD can safely interrupt an in-kernel user.

So introduce a thread_info flag TIF_USING_KMODE_FPSIMD, and modify the
context switch hook for FPSIMD to preserve and restore the kernel mode
FPSIMD state to/from struct thread_struct when it is set. This avoids any
scheduling blackouts due to prolonged use of FPSIMD in kernel mode,
without the need for manual yielding.

In order to support softirq processing while FPSIMD is being used in
kernel task context, use the same flag to decide whether the kernel mode
FPSIMD state needs to be preserved and restored before allowing FPSIMD
to be used in softirq context.
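
Condensed from the kernel_neon_begin()/kernel_neon_end() hunks below
(illustration only, not the literal diff):

  /* kernel_neon_begin() */
  if (!test_thread_flag(TIF_USING_KMODE_FPSIMD)) {
          fpsimd_save_user_state();
          if (IS_ENABLED(CONFIG_PREEMPT_RT) || !in_serving_softirq())
                  set_thread_flag(TIF_USING_KMODE_FPSIMD);
  } else {
          fpsimd_save_kernel_state(current);      /* nested softirq use */
  }

  /* kernel_neon_end() */
  if (!IS_ENABLED(CONFIG_PREEMPT_RT) && in_serving_softirq() &&
      test_thread_flag(TIF_USING_KMODE_FPSIMD))
          fpsimd_load_kernel_state(current);      /* back to task level */
  else
          clear_thread_flag(TIF_USING_KMODE_FPSIMD);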

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/processor.h   |  2 +
 arch/arm64/include/asm/thread_info.h |  1 +
 arch/arm64/kernel/fpsimd.c           | 92 ++++++++++++++++----
 3 files changed, 77 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index e5bc54522e71..dcb51c0571af 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -167,6 +167,8 @@ struct thread_struct {
 	unsigned long		fault_address;	/* fault info */
 	unsigned long		fault_code;	/* ESR_EL1 value */
 	struct debug_info	debug;		/* debugging */
+
+	struct user_fpsimd_state	kmode_fpsimd_state;
 #ifdef CONFIG_ARM64_PTR_AUTH
 	struct ptrauth_keys_user	keys_user;
 #ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 553d1bc559c6..6b254cf90e8b 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -80,6 +80,7 @@ void arch_setup_new_exec(void);
 #define TIF_TAGGED_ADDR		26	/* Allow tagged user addresses */
 #define TIF_SME			27	/* SME in use */
 #define TIF_SME_VL_INHERIT	28	/* Inherit SME vl_onexec across exec */
+#define TIF_USING_KMODE_FPSIMD	29	/* Task is in a kernel mode FPSIMD section */
 
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index ccc4a78a70e4..198918805bf6 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -357,6 +357,7 @@ static void task_fpsimd_load(void)
 
 	WARN_ON(!system_supports_fpsimd());
 	WARN_ON(preemptible());
+	WARN_ON(test_thread_flag(TIF_USING_KMODE_FPSIMD));
 
 	if (system_supports_sve() || system_supports_sme()) {
 		switch (current->thread.fp_type) {
@@ -379,7 +380,7 @@ static void task_fpsimd_load(void)
 		default:
 			/*
 			 * This indicates either a bug in
-			 * fpsimd_save() or memory corruption, we
+			 * fpsimd_save_user_state() or memory corruption, we
 			 * should always record an explicit format
 			 * when we save. We always at least have the
 			 * memory allocated for FPSMID registers so
@@ -430,7 +431,7 @@ static void task_fpsimd_load(void)
  * than via current, if we are saving KVM state then it will have
  * ensured that the type of registers to save is set in last->to_save.
  */
-static void fpsimd_save(void)
+static void fpsimd_save_user_state(void)
 {
 	struct cpu_fp_state const *last =
 		this_cpu_ptr(&fpsimd_last_state);
@@ -861,7 +862,7 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
 	if (task == current) {
 		get_cpu_fpsimd_context();
 
-		fpsimd_save();
+		fpsimd_save_user_state();
 	}
 
 	fpsimd_flush_task_state(task);
@@ -1473,6 +1474,16 @@ void do_fpsimd_exc(unsigned long esr, struct pt_regs *regs)
 		       current);
 }
 
+static void fpsimd_load_kernel_state(struct task_struct *task)
+{
+	fpsimd_load_state(&task->thread.kmode_fpsimd_state);
+}
+
+static void fpsimd_save_kernel_state(struct task_struct *task)
+{
+	fpsimd_save_state(&task->thread.kmode_fpsimd_state);
+}
+
 void fpsimd_thread_switch(struct task_struct *next)
 {
 	bool wrong_task, wrong_cpu;
@@ -1483,19 +1494,28 @@ void fpsimd_thread_switch(struct task_struct *next)
 	WARN_ON_ONCE(!irqs_disabled());
 
 	/* Save unsaved fpsimd state, if any: */
-	fpsimd_save();
+	if (!test_thread_flag(TIF_USING_KMODE_FPSIMD))
+		fpsimd_save_user_state();
+	else
+		fpsimd_save_kernel_state(current);
 
-	/*
-	 * Fix up TIF_FOREIGN_FPSTATE to correctly describe next's
-	 * state.  For kernel threads, FPSIMD registers are never loaded
-	 * and wrong_task and wrong_cpu will always be true.
-	 */
-	wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
-					&next->thread.uw.fpsimd_state;
-	wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();
+	if (test_tsk_thread_flag(next, TIF_USING_KMODE_FPSIMD)) {
+		fpsimd_load_kernel_state(next);
+		set_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE);
+	} else {
+		/*
+		 * Fix up TIF_FOREIGN_FPSTATE to correctly describe next's
+		 * state.  For kernel threads, FPSIMD registers are never
+		 * loaded with user mode FPSIMD state and so wrong_task and
+		 * wrong_cpu will always be true.
+		 */
+		wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
+			&next->thread.uw.fpsimd_state;
+		wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();
 
-	update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
-			       wrong_task || wrong_cpu);
+		update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
+				       wrong_task || wrong_cpu);
+	}
 }
 
 static void fpsimd_flush_thread_vl(enum vec_type type)
@@ -1585,7 +1605,7 @@ void fpsimd_preserve_current_state(void)
 		return;
 
 	get_cpu_fpsimd_context();
-	fpsimd_save();
+	fpsimd_save_user_state();
 	put_cpu_fpsimd_context();
 }
 
@@ -1801,7 +1821,7 @@ void fpsimd_save_and_flush_cpu_state(void)
 		return;
 	WARN_ON(preemptible());
 	get_cpu_fpsimd_context();
-	fpsimd_save();
+	fpsimd_save_user_state();
 	fpsimd_flush_cpu_state();
 	put_cpu_fpsimd_context();
 }
@@ -1835,10 +1855,37 @@ void kernel_neon_begin(void)
 	get_cpu_fpsimd_context();
 
 	/* Save unsaved fpsimd state, if any: */
-	fpsimd_save();
+	if (!test_thread_flag(TIF_USING_KMODE_FPSIMD)) {
+		fpsimd_save_user_state();
+
+		/*
+		 * Set the thread flag so that the kernel mode FPSIMD state
+		 * will be context switched along with the rest of the task
+		 * state.
+		 *
+		 * On non-PREEMPT_RT, softirqs may interrupt task level kernel
+		 * mode FPSIMD, but the task will not be preemptible so setting
+		 * TIF_USING_KMODE_FPSIMD for those would be both wrong (as it
+		 * would mark the task context FPSIMD state as requiring a
+		 * context switch) and unnecessary.
+		 *
+		 * On PREEMPT_RT, softirqs are serviced from a separate thread,
+		 * which is scheduled as usual, and this guarantees that these
+		 * softirqs are not interrupting use of the FPSIMD in kernel
+		 * mode in task context. So in this case, setting the flag here
+		 * is always appropriate.
+		 */
+		if (IS_ENABLED(CONFIG_PREEMPT_RT) || !in_serving_softirq())
+			set_thread_flag(TIF_USING_KMODE_FPSIMD);
+	} else {
+		BUG_ON(IS_ENABLED(CONFIG_PREEMPT_RT) || !in_serving_softirq());
+		fpsimd_save_kernel_state(current);
+	}
 
 	/* Invalidate any task state remaining in the fpsimd regs: */
 	fpsimd_flush_cpu_state();
+
+	put_cpu_fpsimd_context();
 }
 EXPORT_SYMBOL_GPL(kernel_neon_begin);
 
@@ -1856,7 +1903,16 @@ void kernel_neon_end(void)
 	if (!system_supports_fpsimd())
 		return;
 
-	put_cpu_fpsimd_context();
+	/*
+	 * If we are returning from a nested use of kernel mode FPSIMD, restore
+	 * the task context kernel mode FPSIMD state. This can only happen when
+	 * running in softirq context on non-PREEMPT_RT.
+	 */
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && in_serving_softirq() &&
+	    test_thread_flag(TIF_USING_KMODE_FPSIMD))
+		fpsimd_load_kernel_state(current);
+	else
+		clear_thread_flag(TIF_USING_KMODE_FPSIMD);
 }
 EXPORT_SYMBOL_GPL(kernel_neon_end);
 
-- 
2.43.0.rc1.413.gea7ed67945-goog


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 3/5] arm64: fpsimd: Implement lazy restore for kernel mode FPSIMD
  2023-11-27 12:23 ` Ard Biesheuvel
@ 2023-11-27 12:23   ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2023-11-27 12:23 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-crypto, Ard Biesheuvel, Marc Zyngier, Will Deacon,
	Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

From: Ard Biesheuvel <ardb@kernel.org>

Now that kernel mode FPSIMD state is context switched along with other
task state, we can enable the existing logic that keeps track of which
task's FPSIMD state the CPU is holding in its registers. If it is the
context of the task that we are switching to, we can elide the reload of
the FPSIMD state from memory.

Note that we also need to check whether the FPSIMD state on this CPU is
the most recent: if a task gets migrated away and back again, the state
in memory may be more recent than the state in the CPU. So add another
CPU id field to thread_struct to keep track of this. (We could reuse the
existing CPU id field used for user mode context, but that might result
in user state being discarded unnecessarily, given that two distinct
CPUs could be holding the most recent user mode state and the most
recent kernel mode state)
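
A hypothetical schedule showing why the separate CPU id is needed:

  /*
   * 1. task A is switched out on CPU0 while in a kernel mode NEON
   *    section: its kmode state is saved, CPU0's fpsimd_last_state.st
   *    points at it, and A's kmode_fpsimd_cpu is set to 0
   * 2. A migrates to CPU1, uses kernel mode NEON again and is switched
   *    out: the copy saved on CPU1 is now the most recent one
   * 3. A migrates back to CPU0: fpsimd_last_state.st may still point at
   *    A's kmode_fpsimd_state, but the register contents are stale, so
   *    the kmode_fpsimd_cpu == smp_processor_id() check forces a reload
   */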

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/processor.h |  1 +
 arch/arm64/kernel/fpsimd.c         | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index dcb51c0571af..332f15d0abcf 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -169,6 +169,7 @@ struct thread_struct {
 	struct debug_info	debug;		/* debugging */
 
 	struct user_fpsimd_state	kmode_fpsimd_state;
+	unsigned int			kmode_fpsimd_cpu;
 #ifdef CONFIG_ARM64_PTR_AUTH
 	struct ptrauth_keys_user	keys_user;
 #ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 198918805bf6..112111a078b6 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1476,12 +1476,30 @@ void do_fpsimd_exc(unsigned long esr, struct pt_regs *regs)
 
 static void fpsimd_load_kernel_state(struct task_struct *task)
 {
+	struct cpu_fp_state *last = this_cpu_ptr(&fpsimd_last_state);
+
+	/*
+	 * Elide the load if this CPU holds the most recent kernel mode
+	 * FPSIMD context of the current task.
+	 */
+	if (last->st == &task->thread.kmode_fpsimd_state &&
+	    task->thread.kmode_fpsimd_cpu == smp_processor_id())
+		return;
+
 	fpsimd_load_state(&task->thread.kmode_fpsimd_state);
 }
 
 static void fpsimd_save_kernel_state(struct task_struct *task)
 {
+	struct cpu_fp_state cpu_fp_state = {
+		.st		= &task->thread.kmode_fpsimd_state,
+		.to_save	= FP_STATE_FPSIMD,
+	};
+
 	fpsimd_save_state(&task->thread.kmode_fpsimd_state);
+	fpsimd_bind_state_to_cpu(&cpu_fp_state);
+
+	task->thread.kmode_fpsimd_cpu = smp_processor_id();
 }
 
 void fpsimd_thread_switch(struct task_struct *next)
-- 
2.43.0.rc1.413.gea7ed67945-goog


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v3 4/5] arm64: crypto: Remove conditional yield logic
  2023-11-27 12:23 ` Ard Biesheuvel
@ 2023-11-27 12:23   ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2023-11-27 12:23 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-crypto, Ard Biesheuvel, Marc Zyngier, Will Deacon,
	Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

From: Ard Biesheuvel <ardb@kernel.org>

Some classes of crypto algorithms (such as skciphers or aeads) have
natural yield points, but SIMD-based shashes yield the NEON unit
manually to avoid causing scheduling blackouts when operating on large
inputs.

This is no longer necessary now that kernel mode NEON runs with
preemption enabled, so remove this logic from the crypto assembler code,
along with the macro that implements the TIF_NEED_RESCHED check.
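
The C glue callers then collapse to the plain pattern below (see the
sha1/sha2 glue hunks in this patch for the real changes):

  kernel_neon_begin();
  __sha1_ce_transform(container_of(sst, struct sha1_ce_state, sst),
                      src, blocks);
  kernel_neon_end();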

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/crypto/aes-glue.c       | 21 +++++---------
 arch/arm64/crypto/aes-modes.S      |  2 --
 arch/arm64/crypto/sha1-ce-core.S   |  6 ++--
 arch/arm64/crypto/sha1-ce-glue.c   | 19 ++++---------
 arch/arm64/crypto/sha2-ce-core.S   |  6 ++--
 arch/arm64/crypto/sha2-ce-glue.c   | 19 ++++---------
 arch/arm64/crypto/sha3-ce-core.S   |  6 ++--
 arch/arm64/crypto/sha3-ce-glue.c   | 14 ++++------
 arch/arm64/crypto/sha512-ce-core.S |  8 ++----
 arch/arm64/crypto/sha512-ce-glue.c | 16 ++++-------
 arch/arm64/include/asm/assembler.h | 29 --------------------
 arch/arm64/kernel/asm-offsets.c    |  4 ---
 12 files changed, 38 insertions(+), 112 deletions(-)

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 162787c7aa86..c42c903b7d60 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -109,9 +109,9 @@ asmlinkage void aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[],
 				      int rounds, int blocks, u8 iv[],
 				      u32 const rk2[]);
 
-asmlinkage int aes_mac_update(u8 const in[], u32 const rk[], int rounds,
-			      int blocks, u8 dg[], int enc_before,
-			      int enc_after);
+asmlinkage void aes_mac_update(u8 const in[], u32 const rk[], int rounds,
+			       int blocks, u8 dg[], int enc_before,
+			       int enc_after);
 
 struct crypto_aes_xts_ctx {
 	struct crypto_aes_ctx key1;
@@ -880,17 +880,10 @@ static void mac_do_update(struct crypto_aes_ctx *ctx, u8 const in[], int blocks,
 	int rounds = 6 + ctx->key_length / 4;
 
 	if (crypto_simd_usable()) {
-		int rem;
-
-		do {
-			kernel_neon_begin();
-			rem = aes_mac_update(in, ctx->key_enc, rounds, blocks,
-					     dg, enc_before, enc_after);
-			kernel_neon_end();
-			in += (blocks - rem) * AES_BLOCK_SIZE;
-			blocks = rem;
-			enc_before = 0;
-		} while (blocks);
+		kernel_neon_begin();
+		aes_mac_update(in, ctx->key_enc, rounds, blocks, dg,
+			       enc_before, enc_after);
+		kernel_neon_end();
 	} else {
 		if (enc_before)
 			aes_encrypt(ctx, dg, dg);
diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S
index 0e834a2c062c..4d68853d0caf 100644
--- a/arch/arm64/crypto/aes-modes.S
+++ b/arch/arm64/crypto/aes-modes.S
@@ -842,7 +842,6 @@ AES_FUNC_START(aes_mac_update)
 	cbz		w5, .Lmacout
 	encrypt_block	v0, w2, x1, x7, w8
 	st1		{v0.16b}, [x4]			/* return dg */
-	cond_yield	.Lmacout, x7, x8
 	b		.Lmacloop4x
 .Lmac1x:
 	add		w3, w3, #4
@@ -861,6 +860,5 @@ AES_FUNC_START(aes_mac_update)
 
 .Lmacout:
 	st1		{v0.16b}, [x4]			/* return dg */
-	mov		w0, w3
 	ret
 AES_FUNC_END(aes_mac_update)
diff --git a/arch/arm64/crypto/sha1-ce-core.S b/arch/arm64/crypto/sha1-ce-core.S
index 9b1f2d82a6fe..9e37bc09c3a5 100644
--- a/arch/arm64/crypto/sha1-ce-core.S
+++ b/arch/arm64/crypto/sha1-ce-core.S
@@ -62,8 +62,8 @@
 	.endm
 
 	/*
-	 * int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
-	 *			   int blocks)
+	 * void __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+	 *			    int blocks)
 	 */
 SYM_FUNC_START(__sha1_ce_transform)
 	/* load round constants */
@@ -121,7 +121,6 @@ CPU_LE(	rev32		v11.16b, v11.16b	)
 	add		dgav.4s, dgav.4s, dg0v.4s
 
 	cbz		w2, 2f
-	cond_yield	3f, x5, x6
 	b		0b
 
 	/*
@@ -145,6 +144,5 @@ CPU_LE(	rev32		v11.16b, v11.16b	)
 	/* store new state */
 3:	st1		{dgav.4s}, [x0]
 	str		dgb, [x0, #16]
-	mov		w0, w2
 	ret
 SYM_FUNC_END(__sha1_ce_transform)
diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c
index 1dd93e1fcb39..c1c5c5cb104b 100644
--- a/arch/arm64/crypto/sha1-ce-glue.c
+++ b/arch/arm64/crypto/sha1-ce-glue.c
@@ -29,23 +29,16 @@ struct sha1_ce_state {
 extern const u32 sha1_ce_offsetof_count;
 extern const u32 sha1_ce_offsetof_finalize;
 
-asmlinkage int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
-				   int blocks);
+asmlinkage void __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+				    int blocks);
 
 static void sha1_ce_transform(struct sha1_state *sst, u8 const *src,
 			      int blocks)
 {
-	while (blocks) {
-		int rem;
-
-		kernel_neon_begin();
-		rem = __sha1_ce_transform(container_of(sst,
-						       struct sha1_ce_state,
-						       sst), src, blocks);
-		kernel_neon_end();
-		src += (blocks - rem) * SHA1_BLOCK_SIZE;
-		blocks = rem;
-	}
+	kernel_neon_begin();
+	__sha1_ce_transform(container_of(sst, struct sha1_ce_state, sst), src,
+			    blocks);
+	kernel_neon_end();
 }
 
 const u32 sha1_ce_offsetof_count = offsetof(struct sha1_ce_state, sst.count);
diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index fce84d88ddb2..112d772b29db 100644
--- a/arch/arm64/crypto/sha2-ce-core.S
+++ b/arch/arm64/crypto/sha2-ce-core.S
@@ -71,8 +71,8 @@
 	.word		0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
 
 	/*
-	 * int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
-	 *			     int blocks)
+	 * void __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+	 *			      int blocks)
 	 */
 	.text
 SYM_FUNC_START(__sha256_ce_transform)
@@ -129,7 +129,6 @@ CPU_LE(	rev32		v19.16b, v19.16b	)
 
 	/* handled all input blocks? */
 	cbz		w2, 2f
-	cond_yield	3f, x5, x6
 	b		0b
 
 	/*
@@ -152,6 +151,5 @@ CPU_LE(	rev32		v19.16b, v19.16b	)
 
 	/* store new state */
 3:	st1		{dgav.4s, dgbv.4s}, [x0]
-	mov		w0, w2
 	ret
 SYM_FUNC_END(__sha256_ce_transform)
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index 0a44d2e7ee1f..f785a66a1de4 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -30,23 +30,16 @@ struct sha256_ce_state {
 extern const u32 sha256_ce_offsetof_count;
 extern const u32 sha256_ce_offsetof_finalize;
 
-asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
-				     int blocks);
+asmlinkage void __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+				      int blocks);
 
 static void sha256_ce_transform(struct sha256_state *sst, u8 const *src,
 				int blocks)
 {
-	while (blocks) {
-		int rem;
-
-		kernel_neon_begin();
-		rem = __sha256_ce_transform(container_of(sst,
-							 struct sha256_ce_state,
-							 sst), src, blocks);
-		kernel_neon_end();
-		src += (blocks - rem) * SHA256_BLOCK_SIZE;
-		blocks = rem;
-	}
+	kernel_neon_begin();
+	__sha256_ce_transform(container_of(sst, struct sha256_ce_state, sst),
+			      src, blocks);
+	kernel_neon_end();
 }
 
 const u32 sha256_ce_offsetof_count = offsetof(struct sha256_ce_state,
diff --git a/arch/arm64/crypto/sha3-ce-core.S b/arch/arm64/crypto/sha3-ce-core.S
index 9c77313f5a60..db64831ad35d 100644
--- a/arch/arm64/crypto/sha3-ce-core.S
+++ b/arch/arm64/crypto/sha3-ce-core.S
@@ -37,7 +37,7 @@
 	.endm
 
 	/*
-	 * int sha3_ce_transform(u64 *st, const u8 *data, int blocks, int dg_size)
+	 * void sha3_ce_transform(u64 *st, const u8 *data, int blocks, int dg_size)
 	 */
 	.text
 SYM_FUNC_START(sha3_ce_transform)
@@ -184,18 +184,16 @@ SYM_FUNC_START(sha3_ce_transform)
 	eor	 v0.16b,  v0.16b, v31.16b
 
 	cbnz	w8, 3b
-	cond_yield 4f, x8, x9
 	cbnz	w2, 0b
 
 	/* save state */
-4:	st1	{ v0.1d- v3.1d}, [x0], #32
+	st1	{ v0.1d- v3.1d}, [x0], #32
 	st1	{ v4.1d- v7.1d}, [x0], #32
 	st1	{ v8.1d-v11.1d}, [x0], #32
 	st1	{v12.1d-v15.1d}, [x0], #32
 	st1	{v16.1d-v19.1d}, [x0], #32
 	st1	{v20.1d-v23.1d}, [x0], #32
 	st1	{v24.1d}, [x0]
-	mov	w0, w2
 	ret
 SYM_FUNC_END(sha3_ce_transform)
 
diff --git a/arch/arm64/crypto/sha3-ce-glue.c b/arch/arm64/crypto/sha3-ce-glue.c
index 250e1377c481..d689cd2bf4cf 100644
--- a/arch/arm64/crypto/sha3-ce-glue.c
+++ b/arch/arm64/crypto/sha3-ce-glue.c
@@ -28,8 +28,8 @@ MODULE_ALIAS_CRYPTO("sha3-256");
 MODULE_ALIAS_CRYPTO("sha3-384");
 MODULE_ALIAS_CRYPTO("sha3-512");
 
-asmlinkage int sha3_ce_transform(u64 *st, const u8 *data, int blocks,
-				 int md_len);
+asmlinkage void sha3_ce_transform(u64 *st, const u8 *data, int blocks,
+				  int md_len);
 
 static int sha3_update(struct shash_desc *desc, const u8 *data,
 		       unsigned int len)
@@ -59,15 +59,11 @@ static int sha3_update(struct shash_desc *desc, const u8 *data,
 		blocks = len / sctx->rsiz;
 		len %= sctx->rsiz;
 
-		while (blocks) {
-			int rem;
-
+		if (blocks) {
 			kernel_neon_begin();
-			rem = sha3_ce_transform(sctx->st, data, blocks,
-						digest_size);
+			sha3_ce_transform(sctx->st, data, blocks, digest_size);
 			kernel_neon_end();
-			data += (blocks - rem) * sctx->rsiz;
-			blocks = rem;
+			data += blocks * sctx->rsiz;
 		}
 	}
 
diff --git a/arch/arm64/crypto/sha512-ce-core.S b/arch/arm64/crypto/sha512-ce-core.S
index 91ef68b15fcc..96acc9295230 100644
--- a/arch/arm64/crypto/sha512-ce-core.S
+++ b/arch/arm64/crypto/sha512-ce-core.S
@@ -102,8 +102,8 @@
 	.endm
 
 	/*
-	 * int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
-	 *			     int blocks)
+	 * void __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+	 *			      int blocks)
 	 */
 	.text
 SYM_FUNC_START(__sha512_ce_transform)
@@ -195,12 +195,10 @@ CPU_LE(	rev64		v19.16b, v19.16b	)
 	add		v10.2d, v10.2d, v2.2d
 	add		v11.2d, v11.2d, v3.2d
 
-	cond_yield	3f, x4, x5
 	/* handled all input blocks? */
 	cbnz		w2, 0b
 
 	/* store new state */
-3:	st1		{v8.2d-v11.2d}, [x0]
-	mov		w0, w2
+	st1		{v8.2d-v11.2d}, [x0]
 	ret
 SYM_FUNC_END(__sha512_ce_transform)
diff --git a/arch/arm64/crypto/sha512-ce-glue.c b/arch/arm64/crypto/sha512-ce-glue.c
index f3431fc62315..70eef74fe031 100644
--- a/arch/arm64/crypto/sha512-ce-glue.c
+++ b/arch/arm64/crypto/sha512-ce-glue.c
@@ -26,23 +26,17 @@ MODULE_LICENSE("GPL v2");
 MODULE_ALIAS_CRYPTO("sha384");
 MODULE_ALIAS_CRYPTO("sha512");
 
-asmlinkage int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
-				     int blocks);
+asmlinkage void __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+				      int blocks);
 
 asmlinkage void sha512_block_data_order(u64 *digest, u8 const *src, int blocks);
 
 static void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
 				int blocks)
 {
-	while (blocks) {
-		int rem;
-
-		kernel_neon_begin();
-		rem = __sha512_ce_transform(sst, src, blocks);
-		kernel_neon_end();
-		src += (blocks - rem) * SHA512_BLOCK_SIZE;
-		blocks = rem;
-	}
+	kernel_neon_begin();
+	__sha512_ce_transform(sst, src, blocks);
+	kernel_neon_end();
 }
 
 static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 376a980f2bad..f0da53a0388f 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -759,35 +759,6 @@ alternative_endif
 	set_sctlr sctlr_el2, \reg
 .endm
 
-	/*
-	 * Check whether preempt/bh-disabled asm code should yield as soon as
-	 * it is able. This is the case if we are currently running in task
-	 * context, and either a softirq is pending, or the TIF_NEED_RESCHED
-	 * flag is set and re-enabling preemption a single time would result in
-	 * a preempt count of zero. (Note that the TIF_NEED_RESCHED flag is
-	 * stored negated in the top word of the thread_info::preempt_count
-	 * field)
-	 */
-	.macro		cond_yield, lbl:req, tmp:req, tmp2:req
-	get_current_task \tmp
-	ldr		\tmp, [\tmp, #TSK_TI_PREEMPT]
-	/*
-	 * If we are serving a softirq, there is no point in yielding: the
-	 * softirq will not be preempted no matter what we do, so we should
-	 * run to completion as quickly as we can.
-	 */
-	tbnz		\tmp, #SOFTIRQ_SHIFT, .Lnoyield_\@
-#ifdef CONFIG_PREEMPTION
-	sub		\tmp, \tmp, #PREEMPT_DISABLE_OFFSET
-	cbz		\tmp, \lbl
-#endif
-	adr_l		\tmp, irq_stat + IRQ_CPUSTAT_SOFTIRQ_PENDING
-	get_this_cpu_offset	\tmp2
-	ldr		w\tmp, [\tmp, \tmp2]
-	cbnz		w\tmp, \lbl	// yield on pending softirq in task context
-.Lnoyield_\@:
-	.endm
-
 /*
  * Branch Target Identifier (BTI)
  */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 5ff1942b04fc..fb9e9ef9b527 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -116,10 +116,6 @@ int main(void)
   DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
   DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
   BLANK();
-  DEFINE(PREEMPT_DISABLE_OFFSET, PREEMPT_DISABLE_OFFSET);
-  DEFINE(SOFTIRQ_SHIFT, SOFTIRQ_SHIFT);
-  DEFINE(IRQ_CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t, __softirq_pending));
-  BLANK();
   DEFINE(CPU_BOOT_TASK,		offsetof(struct secondary_data, task));
   BLANK();
   DEFINE(FTR_OVR_VAL_OFFSET,	offsetof(struct arm64_ftr_override, val));
-- 
2.43.0.rc1.413.gea7ed67945-goog


^ permalink raw reply related	[flat|nested] 20+ messages in thread


* [PATCH v3 5/5] arm64: crypto: Remove FPSIMD yield logic from glue code
  2023-11-27 12:23 ` Ard Biesheuvel
@ 2023-11-27 12:23   ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2023-11-27 12:23 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-crypto, Ard Biesheuvel, Marc Zyngier, Will Deacon,
	Mark Rutland, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

From: Ard Biesheuvel <ardb@kernel.org>

A previous patch already removed the assembler logic that was used to
check periodically whether a task has its TIF_NEED_RESCHED flag set,
and to yield the FPSIMD unit and the timeslice if so. That check is no
longer necessary now that preemption is no longer disabled while FPSIMD
is used in kernel mode.

Let's also remove the remaining C logic that yields the FPSIMD unit
after every 4 KiB of input, which is arguably worse in terms of overhead
than the conditional assembler yield, given that it is unconditional and
therefore mostly unnecessary.
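
For illustration, each glue routine changes according to the pattern
below, where do_neon_work() is a made-up stand-in for the various NEON
helpers touched by this patch (the real call sites are in the hunks
that follow):

	/* before: chunk the input and yield the NEON unit every 4 KiB */
	do {
		unsigned int todo = min_t(unsigned int, len, SZ_4K);

		kernel_neon_begin();
		do_neon_work(dst, src, todo);
		kernel_neon_end();

		len -= todo;
		src += todo;
	} while (len);

	/* after: one begin/end pair around the whole input, relying on
	 * the context switch code to preserve/restore kernel mode state */
	kernel_neon_begin();
	do_neon_work(dst, src, len);
	kernel_neon_end();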

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/crypto/aes-ce-ccm-glue.c      |  5 ----
 arch/arm64/crypto/chacha-neon-glue.c     | 14 ++-------
 arch/arm64/crypto/crct10dif-ce-glue.c    | 30 ++++----------------
 arch/arm64/crypto/nhpoly1305-neon-glue.c | 12 ++------
 arch/arm64/crypto/poly1305-glue.c        | 15 +++-------
 arch/arm64/crypto/polyval-ce-glue.c      |  5 ++--
 6 files changed, 18 insertions(+), 63 deletions(-)

diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index 25cd3808ecbe..a92ca6de1f96 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -125,16 +125,11 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
 			scatterwalk_start(&walk, sg_next(walk.sg));
 			n = scatterwalk_clamp(&walk, len);
 		}
-		n = min_t(u32, n, SZ_4K); /* yield NEON at least every 4k */
 		p = scatterwalk_map(&walk);
 
 		macp = ce_aes_ccm_auth_data(mac, p, n, macp, ctx->key_enc,
 					    num_rounds(ctx));
 
-		if (len / SZ_4K > (len - n) / SZ_4K) {
-			kernel_neon_end();
-			kernel_neon_begin();
-		}
 		len -= n;
 
 		scatterwalk_unmap(p);
diff --git a/arch/arm64/crypto/chacha-neon-glue.c b/arch/arm64/crypto/chacha-neon-glue.c
index af2bbca38e70..37ca3e889848 100644
--- a/arch/arm64/crypto/chacha-neon-glue.c
+++ b/arch/arm64/crypto/chacha-neon-glue.c
@@ -87,17 +87,9 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
 	    !crypto_simd_usable())
 		return chacha_crypt_generic(state, dst, src, bytes, nrounds);
 
-	do {
-		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
-
-		kernel_neon_begin();
-		chacha_doneon(state, dst, src, todo, nrounds);
-		kernel_neon_end();
-
-		bytes -= todo;
-		src += todo;
-		dst += todo;
-	} while (bytes);
+	kernel_neon_begin();
+	chacha_doneon(state, dst, src, bytes, nrounds);
+	kernel_neon_end();
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
 
diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
index 09eb1456aed4..ccc3f6067742 100644
--- a/arch/arm64/crypto/crct10dif-ce-glue.c
+++ b/arch/arm64/crypto/crct10dif-ce-glue.c
@@ -37,18 +37,9 @@ static int crct10dif_update_pmull_p8(struct shash_desc *desc, const u8 *data,
 	u16 *crc = shash_desc_ctx(desc);
 
 	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && crypto_simd_usable()) {
-		do {
-			unsigned int chunk = length;
-
-			if (chunk > SZ_4K + CRC_T10DIF_PMULL_CHUNK_SIZE)
-				chunk = SZ_4K;
-
-			kernel_neon_begin();
-			*crc = crc_t10dif_pmull_p8(*crc, data, chunk);
-			kernel_neon_end();
-			data += chunk;
-			length -= chunk;
-		} while (length);
+		kernel_neon_begin();
+		*crc = crc_t10dif_pmull_p8(*crc, data, length);
+		kernel_neon_end();
 	} else {
 		*crc = crc_t10dif_generic(*crc, data, length);
 	}
@@ -62,18 +53,9 @@ static int crct10dif_update_pmull_p64(struct shash_desc *desc, const u8 *data,
 	u16 *crc = shash_desc_ctx(desc);
 
 	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && crypto_simd_usable()) {
-		do {
-			unsigned int chunk = length;
-
-			if (chunk > SZ_4K + CRC_T10DIF_PMULL_CHUNK_SIZE)
-				chunk = SZ_4K;
-
-			kernel_neon_begin();
-			*crc = crc_t10dif_pmull_p64(*crc, data, chunk);
-			kernel_neon_end();
-			data += chunk;
-			length -= chunk;
-		} while (length);
+		kernel_neon_begin();
+		*crc = crc_t10dif_pmull_p64(*crc, data, length);
+		kernel_neon_end();
 	} else {
 		*crc = crc_t10dif_generic(*crc, data, length);
 	}
diff --git a/arch/arm64/crypto/nhpoly1305-neon-glue.c b/arch/arm64/crypto/nhpoly1305-neon-glue.c
index e4a0b463f080..7df0ab811c4e 100644
--- a/arch/arm64/crypto/nhpoly1305-neon-glue.c
+++ b/arch/arm64/crypto/nhpoly1305-neon-glue.c
@@ -22,15 +22,9 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
 	if (srclen < 64 || !crypto_simd_usable())
 		return crypto_nhpoly1305_update(desc, src, srclen);
 
-	do {
-		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
-
-		kernel_neon_begin();
-		crypto_nhpoly1305_update_helper(desc, src, n, nh_neon);
-		kernel_neon_end();
-		src += n;
-		srclen -= n;
-	} while (srclen);
+	kernel_neon_begin();
+	crypto_nhpoly1305_update_helper(desc, src, srclen, nh_neon);
+	kernel_neon_end();
 	return 0;
 }
 
diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c
index 1fae18ba11ed..326871897d5d 100644
--- a/arch/arm64/crypto/poly1305-glue.c
+++ b/arch/arm64/crypto/poly1305-glue.c
@@ -143,20 +143,13 @@ void poly1305_update_arch(struct poly1305_desc_ctx *dctx, const u8 *src,
 		unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
 
 		if (static_branch_likely(&have_neon) && crypto_simd_usable()) {
-			do {
-				unsigned int todo = min_t(unsigned int, len, SZ_4K);
-
-				kernel_neon_begin();
-				poly1305_blocks_neon(&dctx->h, src, todo, 1);
-				kernel_neon_end();
-
-				len -= todo;
-				src += todo;
-			} while (len);
+			kernel_neon_begin();
+			poly1305_blocks_neon(&dctx->h, src, len, 1);
+			kernel_neon_end();
 		} else {
 			poly1305_blocks(&dctx->h, src, len, 1);
-			src += len;
 		}
+		src += len;
 		nbytes %= POLY1305_BLOCK_SIZE;
 	}
 
diff --git a/arch/arm64/crypto/polyval-ce-glue.c b/arch/arm64/crypto/polyval-ce-glue.c
index 0a3b5718df85..8c83e5f44e51 100644
--- a/arch/arm64/crypto/polyval-ce-glue.c
+++ b/arch/arm64/crypto/polyval-ce-glue.c
@@ -122,9 +122,8 @@ static int polyval_arm64_update(struct shash_desc *desc,
 					    tctx->key_powers[NUM_KEY_POWERS-1]);
 	}
 
-	while (srclen >= POLYVAL_BLOCK_SIZE) {
-		/* allow rescheduling every 4K bytes */
-		nblocks = min(srclen, 4096U) / POLYVAL_BLOCK_SIZE;
+	if (srclen >= POLYVAL_BLOCK_SIZE) {
+		nblocks = srclen / POLYVAL_BLOCK_SIZE;
 		internal_polyval_update(tctx, src, nblocks, dctx->buffer);
 		srclen -= nblocks * POLYVAL_BLOCK_SIZE;
 		src += nblocks * POLYVAL_BLOCK_SIZE;
-- 
2.43.0.rc1.413.gea7ed67945-goog


^ permalink raw reply related	[flat|nested] 20+ messages in thread


* Re: [PATCH v3 2/5] arm64: fpsimd: Preserve/restore kernel mode NEON at context switch
  2023-11-27 12:23   ` Ard Biesheuvel
@ 2023-11-27 13:09     ` Mark Rutland
  -1 siblings, 0 replies; 20+ messages in thread
From: Mark Rutland @ 2023-11-27 13:09 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux-crypto, Ard Biesheuvel, Marc Zyngier,
	Will Deacon, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

Hi Ard,

On Mon, Nov 27, 2023 at 01:23:02PM +0100, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Currently, the FPSIMD register file is not preserved and restored along
> with the general registers on exception entry/exit or context switch.
> For this reason, we disable preemption when enabling FPSIMD for kernel
> mode use in task context, and suspend the processing of softirqs so that
> there are no concurrent uses in the kernel. (Kernel mode FPSIMD may not
> be used at all in other contexts).
> 
> Disabling preemption while doing CPU intensive work on inputs of
> potentially unbounded size is bad for real-time performance, which is
> why we try and ensure that SIMD crypto code does not operate on more
> than ~4k at a time, which is an arbitrary limit and requires assembler
> code to implement efficiently.
> 
> We can avoid the need for disabling preemption if we can ensure that any
> in-kernel users of the NEON will not lose the FPSIMD register state
> across a context switch. And given that disabling softirqs implicitly
> disables preemption as well, we will also have to ensure that a softirq
> that runs code using FPSIMD can safely interrupt an in-kernel user.
> 
> So introduce a thread_info flag TIF_USING_KMODE_FPSIMD, and modify the
> context switch hook for FPSIMD to preserve and restore the kernel mode
> FPSIMD to/from struct thread_struct when it is set. This avoids any
> scheduling blackouts due to prolonged use of FPSIMD in kernel mode,
> without the need for manual yielding.
> 
> In order to support softirq processing while FPSIMD is being used in
> kernel task context, use the same flag to decide whether the kernel mode
> FPSIMD state needs to be preserved and restored before allowing FPSIMD
> to be used in softirq context.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> Reviewed-by: Mark Brown <broonie@kernel.org>

I have a couple of naming/structural comments below, but this looks
functionally good to me. I appreciate those are arguably bikeshedding, so
either way:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

> ---
>  arch/arm64/include/asm/processor.h   |  2 +
>  arch/arm64/include/asm/thread_info.h |  1 +
>  arch/arm64/kernel/fpsimd.c           | 92 ++++++++++++++++----
>  3 files changed, 77 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index e5bc54522e71..dcb51c0571af 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -167,6 +167,8 @@ struct thread_struct {
>  	unsigned long		fault_address;	/* fault info */
>  	unsigned long		fault_code;	/* ESR_EL1 value */
>  	struct debug_info	debug;		/* debugging */
> +
> +	struct user_fpsimd_state	kmode_fpsimd_state;
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	struct ptrauth_keys_user	keys_user;
>  #ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
> diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
> index 553d1bc559c6..6b254cf90e8b 100644
> --- a/arch/arm64/include/asm/thread_info.h
> +++ b/arch/arm64/include/asm/thread_info.h
> @@ -80,6 +80,7 @@ void arch_setup_new_exec(void);
>  #define TIF_TAGGED_ADDR		26	/* Allow tagged user addresses */
>  #define TIF_SME			27	/* SME in use */
>  #define TIF_SME_VL_INHERIT	28	/* Inherit SME vl_onexec across exec */
> +#define TIF_USING_KMODE_FPSIMD	29	/* Task is in a kernel mode FPSIMD section */

Sorry for the bikeshedding, but "KMODE" isn't a term we use elsewhere, and I
think it'd be nicer/clearer if this had "KERNEL" spelled out in full,
especially as it's only 1 additional character.

Could this be TIF_FPSIMD_KERNEL, or maybe TIF_KERNEL_FPSTATE to align with
TIF_FOREIGN_FPSTATE?

>  
>  #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
>  #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index ccc4a78a70e4..198918805bf6 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -357,6 +357,7 @@ static void task_fpsimd_load(void)
>  
>  	WARN_ON(!system_supports_fpsimd());
>  	WARN_ON(preemptible());
> +	WARN_ON(test_thread_flag(TIF_USING_KMODE_FPSIMD));
>  
>  	if (system_supports_sve() || system_supports_sme()) {
>  		switch (current->thread.fp_type) {
> @@ -379,7 +380,7 @@ static void task_fpsimd_load(void)
>  		default:
>  			/*
>  			 * This indicates either a bug in
> -			 * fpsimd_save() or memory corruption, we
> +			 * fpsimd_save_user_state() or memory corruption, we
>  			 * should always record an explicit format
>  			 * when we save. We always at least have the
>  			 * memory allocated for FPSMID registers so
> @@ -430,7 +431,7 @@ static void task_fpsimd_load(void)
>   * than via current, if we are saving KVM state then it will have
>   * ensured that the type of registers to save is set in last->to_save.
>   */
> -static void fpsimd_save(void)
> +static void fpsimd_save_user_state(void)
>  {
>  	struct cpu_fp_state const *last =
>  		this_cpu_ptr(&fpsimd_last_state);
> @@ -861,7 +862,7 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
>  	if (task == current) {
>  		get_cpu_fpsimd_context();
>  
> -		fpsimd_save();
> +		fpsimd_save_user_state();
>  	}
>  
>  	fpsimd_flush_task_state(task);
> @@ -1473,6 +1474,16 @@ void do_fpsimd_exc(unsigned long esr, struct pt_regs *regs)
>  		       current);
>  }
>  
> +static void fpsimd_load_kernel_state(struct task_struct *task)
> +{
> +	fpsimd_load_state(&task->thread.kmode_fpsimd_state);
> +}
> +
> +static void fpsimd_save_kernel_state(struct task_struct *task)
> +{
> +	fpsimd_save_state(&task->thread.kmode_fpsimd_state);
> +}
> +
>  void fpsimd_thread_switch(struct task_struct *next)
>  {
>  	bool wrong_task, wrong_cpu;
> @@ -1483,19 +1494,28 @@ void fpsimd_thread_switch(struct task_struct *next)
>  	WARN_ON_ONCE(!irqs_disabled());
>  
>  	/* Save unsaved fpsimd state, if any: */
> -	fpsimd_save();
> +	if (!test_thread_flag(TIF_USING_KMODE_FPSIMD))
> +		fpsimd_save_user_state();
> +	else
> +		fpsimd_save_kernel_state(current);

Minor nit: I find this condition inversion slightly hard to read since in prose
it'd be "if there's not kernel state, save the user state; else save the kernel
state", whereas:

	if (test_thread_flag(TIF_USING_KMODE_FPSIMD))
		fpsimd_save_kernel_state(current);
	else
		fpsimd_save_user_state();

... is more clearly "if there's kernel state, save it; else save the user
state", and I think that'd be preferable.

>  
> -	/*
> -	 * Fix up TIF_FOREIGN_FPSTATE to correctly describe next's
> -	 * state.  For kernel threads, FPSIMD registers are never loaded
> -	 * and wrong_task and wrong_cpu will always be true.
> -	 */
> -	wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
> -					&next->thread.uw.fpsimd_state;
> -	wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();
> +	if (test_tsk_thread_flag(next, TIF_USING_KMODE_FPSIMD)) {
> +		fpsimd_load_kernel_state(next);
> +		set_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE);
> +	} else {
> +		/*
> +		 * Fix up TIF_FOREIGN_FPSTATE to correctly describe next's
> +		 * state.  For kernel threads, FPSIMD registers are never
> +		 * loaded with user mode FPSIMD state and so wrong_task and
> +		 * wrong_cpu will always be true.
> +		 */
> +		wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
> +			&next->thread.uw.fpsimd_state;
> +		wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();
>  
> -	update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
> -			       wrong_task || wrong_cpu);
> +		update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
> +				       wrong_task || wrong_cpu);
> +	}
>  }
>  
>  static void fpsimd_flush_thread_vl(enum vec_type type)
> @@ -1585,7 +1605,7 @@ void fpsimd_preserve_current_state(void)
>  		return;
>  
>  	get_cpu_fpsimd_context();
> -	fpsimd_save();
> +	fpsimd_save_user_state();
>  	put_cpu_fpsimd_context();
>  }
>  
> @@ -1801,7 +1821,7 @@ void fpsimd_save_and_flush_cpu_state(void)
>  		return;
>  	WARN_ON(preemptible());
>  	get_cpu_fpsimd_context();
> -	fpsimd_save();
> +	fpsimd_save_user_state();
>  	fpsimd_flush_cpu_state();
>  	put_cpu_fpsimd_context();
>  }
> @@ -1835,10 +1855,37 @@ void kernel_neon_begin(void)
>  	get_cpu_fpsimd_context();
>  
>  	/* Save unsaved fpsimd state, if any: */
> -	fpsimd_save();
> +	if (!test_thread_flag(TIF_USING_KMODE_FPSIMD)) {
> +		fpsimd_save_user_state();
> +
> +		/*
> +		 * Set the thread flag so that the kernel mode FPSIMD state
> +		 * will be context switched along with the rest of the task
> +		 * state.
> +		 *
> +		 * On non-PREEMPT_RT, softirqs may interrupt task level kernel
> +		 * mode FPSIMD, but the task will not be preemptible so setting
> +		 * TIF_USING_KMODE_FPSIMD for those would be both wrong (as it
> +		 * would mark the task context FPSIMD state as requiring a
> +		 * context switch) and unnecessary.
> +		 *
> +		 * On PREEMPT_RT, softirqs are serviced from a separate thread,
> +		 * which is scheduled as usual, and this guarantees that these
> +		 * softirqs are not interrupting use of the FPSIMD in kernel
> +		 * mode in task context. So in this case, setting the flag here
> +		 * is always appropriate.
> +		 */
> +		if (IS_ENABLED(CONFIG_PREEMPT_RT) || !in_serving_softirq())
> +			set_thread_flag(TIF_USING_KMODE_FPSIMD);
> +	} else {
> +		BUG_ON(IS_ENABLED(CONFIG_PREEMPT_RT) || !in_serving_softirq());
> +		fpsimd_save_kernel_state(current);
> +	}

Same comment as above for condition inversion here.
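
For concreteness, the inverted form here would look roughly like the
following (just the two branches swapped, with the bodies taken unchanged
from the hunk above and the long comment elided):

	if (test_thread_flag(TIF_USING_KMODE_FPSIMD)) {
		BUG_ON(IS_ENABLED(CONFIG_PREEMPT_RT) || !in_serving_softirq());
		fpsimd_save_kernel_state(current);
	} else {
		fpsimd_save_user_state();

		/* see the PREEMPT_RT/softirq comment in the hunk above */
		if (IS_ENABLED(CONFIG_PREEMPT_RT) || !in_serving_softirq())
			set_thread_flag(TIF_USING_KMODE_FPSIMD);
	}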

Mark.

>  
>  	/* Invalidate any task state remaining in the fpsimd regs: */
>  	fpsimd_flush_cpu_state();
> +
> +	put_cpu_fpsimd_context();
>  }
>  EXPORT_SYMBOL_GPL(kernel_neon_begin);
>  
> @@ -1856,7 +1903,16 @@ void kernel_neon_end(void)
>  	if (!system_supports_fpsimd())
>  		return;
>  
> -	put_cpu_fpsimd_context();
> +	/*
> +	 * If we are returning from a nested use of kernel mode FPSIMD, restore
> +	 * the task context kernel mode FPSIMD state. This can only happen when
> +	 * running in softirq context on non-PREEMPT_RT.
> +	 */
> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && in_serving_softirq() &&
> +	    test_thread_flag(TIF_USING_KMODE_FPSIMD))
> +		fpsimd_load_kernel_state(current);
> +	else
> +		clear_thread_flag(TIF_USING_KMODE_FPSIMD);
>  }
>  EXPORT_SYMBOL_GPL(kernel_neon_end);
>  
> -- 
> 2.43.0.rc1.413.gea7ed67945-goog
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread


* Re: [PATCH v3 3/5] arm64: fpsimd: Implement lazy restore for kernel mode FPSIMD
  2023-11-27 12:23   ` Ard Biesheuvel
@ 2023-11-27 13:32     ` Mark Rutland
  -1 siblings, 0 replies; 20+ messages in thread
From: Mark Rutland @ 2023-11-27 13:32 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux-crypto, Ard Biesheuvel, Marc Zyngier,
	Will Deacon, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

On Mon, Nov 27, 2023 at 01:23:03PM +0100, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Now that kernel mode FPSIMD state is context switched along with other
> task state, we can enable the existing logic that keeps track of which
> task's FPSIMD state the CPU is holding in its registers. If it is the
> context of the task that we are switching to, we can elide the reload of
> the FPSIMD state from memory.
> 
> Note that we also need to check whether the FPSIMD state on this CPU is
> the most recent: if a task gets migrated away and back again, the state
> in memory may be more recent than the state in the CPU. So add another
> CPU id field to task_struct to keep track of this. (We could reuse the
> existing CPU id field used for user mode context, but that might result
> in user state being discarded unnecessarily, given that two distinct
> CPUs could be holding the most recent user mode state and the most
> recent kernel mode state)
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> Reviewed-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/processor.h |  1 +
>  arch/arm64/kernel/fpsimd.c         | 18 ++++++++++++++++++
>  2 files changed, 19 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index dcb51c0571af..332f15d0abcf 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -169,6 +169,7 @@ struct thread_struct {
>  	struct debug_info	debug;		/* debugging */
>  
>  	struct user_fpsimd_state	kmode_fpsimd_state;
> +	unsigned int			kmode_fpsimd_cpu;
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  	struct ptrauth_keys_user	keys_user;
>  #ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index 198918805bf6..112111a078b6 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -1476,12 +1476,30 @@ void do_fpsimd_exc(unsigned long esr, struct pt_regs *regs)
>  
>  static void fpsimd_load_kernel_state(struct task_struct *task)
>  {
> +	struct cpu_fp_state *last = this_cpu_ptr(&fpsimd_last_state);
> +
> +	/*
> +	 * Elide the load if this CPU holds the most recent kernel mode
> +	 * FPSIMD context of the current task.
> +	 */
> +	if (last->st == &task->thread.kmode_fpsimd_state &&
> +	    task->thread.kmode_fpsimd_cpu == smp_processor_id())
> +		return;
> +
>  	fpsimd_load_state(&task->thread.kmode_fpsimd_state);
>  }
>  
>  static void fpsimd_save_kernel_state(struct task_struct *task)
>  {
> +	struct cpu_fp_state cpu_fp_state = {
> +		.st		= &task->thread.kmode_fpsimd_state,
> +		.to_save	= FP_STATE_FPSIMD,
> +	};
> +
>  	fpsimd_save_state(&task->thread.kmode_fpsimd_state);
> +	fpsimd_bind_state_to_cpu(&cpu_fp_state);
> +
> +	task->thread.kmode_fpsimd_cpu = smp_processor_id();
>  }

I was a little worried that we might be missing a change to
fpsimd_cpu_pm_notifier() to handle context-destructive idle states correctly,
but since that clears the fpsimd_last_state variable already, that should do
the right thing as-is.
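
For illustration, here is a minimal C sketch of the interaction described
above. It is not the in-tree code: the helper names are made up, and only the
variable and field names (fpsimd_last_state, kmode_fpsimd_state,
kmode_fpsimd_cpu) are taken from the quoted diff.

	/* Hypothetical sketch only -- not the actual implementation. */
	static void sketch_cpu_pm_enter(void)
	{
		/*
		 * A context-destructive idle state may clobber the FPSIMD
		 * registers, so forget whose state they held; this is what
		 * the existing fpsimd_cpu_pm_notifier() path is relied on
		 * to do.
		 */
		__this_cpu_write(fpsimd_last_state.st, NULL);
	}

	static bool sketch_kernel_state_still_loaded(struct task_struct *tsk)
	{
		struct cpu_fp_state *last = this_cpu_ptr(&fpsimd_last_state);

		/* Mirrors the elision check in fpsimd_load_kernel_state(). */
		return last->st == &tsk->thread.kmode_fpsimd_state &&
		       tsk->thread.kmode_fpsimd_cpu == smp_processor_id();
	}

With the per-CPU pointer cleared on idle entry, the check above can never
match stale register contents, which is why no extra change is needed.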

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

>  
>  void fpsimd_thread_switch(struct task_struct *next)
> -- 
> 2.43.0.rc1.413.gea7ed67945-goog
> 


* Re: [PATCH v3 4/5] arm64: crypto: Remove conditional yield logic
  2023-11-27 12:23   ` Ard Biesheuvel
@ 2023-11-27 13:46     ` Mark Rutland
  -1 siblings, 0 replies; 20+ messages in thread
From: Mark Rutland @ 2023-11-27 13:46 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux-crypto, Ard Biesheuvel, Marc Zyngier,
	Will Deacon, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

On Mon, Nov 27, 2023 at 01:23:04PM +0100, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Some classes of crypto algorithms (such as skciphers or aeads) have
> natural yield points, but SIMD based shashes yield the NEON unit
> manually to avoid causing scheduling blackouts when operating on large
> inputs.
> 
> This is no longer necessary now that kernel mode NEON runs with
> preemption enabled, so remove this logic from the crypto assembler code,
> along with the macro that implements the TIF_NEED_RESCHED check.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

I definitely want to get rid of all of the voluntary preemption points, but
IIUC for the moment we need to keep these for PREEMPT_NONE and
PREEMPT_VOLUNTARY (and consequently for PREEMPT_DYNAMIC). Once the preemption
rework lands, these should no longer be necessary and can be removed:

  https://lore.kernel.org/lkml/20231107215742.363031-1-ankur.a.arora@oracle.com/

Thanks,
Mark.

> ---
>  arch/arm64/crypto/aes-glue.c       | 21 +++++---------
>  arch/arm64/crypto/aes-modes.S      |  2 --
>  arch/arm64/crypto/sha1-ce-core.S   |  6 ++--
>  arch/arm64/crypto/sha1-ce-glue.c   | 19 ++++---------
>  arch/arm64/crypto/sha2-ce-core.S   |  6 ++--
>  arch/arm64/crypto/sha2-ce-glue.c   | 19 ++++---------
>  arch/arm64/crypto/sha3-ce-core.S   |  6 ++--
>  arch/arm64/crypto/sha3-ce-glue.c   | 14 ++++------
>  arch/arm64/crypto/sha512-ce-core.S |  8 ++----
>  arch/arm64/crypto/sha512-ce-glue.c | 16 ++++-------
>  arch/arm64/include/asm/assembler.h | 29 --------------------
>  arch/arm64/kernel/asm-offsets.c    |  4 ---
>  12 files changed, 38 insertions(+), 112 deletions(-)
> 
> diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
> index 162787c7aa86..c42c903b7d60 100644
> --- a/arch/arm64/crypto/aes-glue.c
> +++ b/arch/arm64/crypto/aes-glue.c
> @@ -109,9 +109,9 @@ asmlinkage void aes_essiv_cbc_decrypt(u8 out[], u8 const in[], u32 const rk1[],
>  				      int rounds, int blocks, u8 iv[],
>  				      u32 const rk2[]);
>  
> -asmlinkage int aes_mac_update(u8 const in[], u32 const rk[], int rounds,
> -			      int blocks, u8 dg[], int enc_before,
> -			      int enc_after);
> +asmlinkage void aes_mac_update(u8 const in[], u32 const rk[], int rounds,
> +			       int blocks, u8 dg[], int enc_before,
> +			       int enc_after);
>  
>  struct crypto_aes_xts_ctx {
>  	struct crypto_aes_ctx key1;
> @@ -880,17 +880,10 @@ static void mac_do_update(struct crypto_aes_ctx *ctx, u8 const in[], int blocks,
>  	int rounds = 6 + ctx->key_length / 4;
>  
>  	if (crypto_simd_usable()) {
> -		int rem;
> -
> -		do {
> -			kernel_neon_begin();
> -			rem = aes_mac_update(in, ctx->key_enc, rounds, blocks,
> -					     dg, enc_before, enc_after);
> -			kernel_neon_end();
> -			in += (blocks - rem) * AES_BLOCK_SIZE;
> -			blocks = rem;
> -			enc_before = 0;
> -		} while (blocks);
> +		kernel_neon_begin();
> +		aes_mac_update(in, ctx->key_enc, rounds, blocks, dg,
> +			       enc_before, enc_after);
> +		kernel_neon_end();
>  	} else {
>  		if (enc_before)
>  			aes_encrypt(ctx, dg, dg);
> diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S
> index 0e834a2c062c..4d68853d0caf 100644
> --- a/arch/arm64/crypto/aes-modes.S
> +++ b/arch/arm64/crypto/aes-modes.S
> @@ -842,7 +842,6 @@ AES_FUNC_START(aes_mac_update)
>  	cbz		w5, .Lmacout
>  	encrypt_block	v0, w2, x1, x7, w8
>  	st1		{v0.16b}, [x4]			/* return dg */
> -	cond_yield	.Lmacout, x7, x8
>  	b		.Lmacloop4x
>  .Lmac1x:
>  	add		w3, w3, #4
> @@ -861,6 +860,5 @@ AES_FUNC_START(aes_mac_update)
>  
>  .Lmacout:
>  	st1		{v0.16b}, [x4]			/* return dg */
> -	mov		w0, w3
>  	ret
>  AES_FUNC_END(aes_mac_update)
> diff --git a/arch/arm64/crypto/sha1-ce-core.S b/arch/arm64/crypto/sha1-ce-core.S
> index 9b1f2d82a6fe..9e37bc09c3a5 100644
> --- a/arch/arm64/crypto/sha1-ce-core.S
> +++ b/arch/arm64/crypto/sha1-ce-core.S
> @@ -62,8 +62,8 @@
>  	.endm
>  
>  	/*
> -	 * int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
> -	 *			   int blocks)
> +	 * void __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
> +	 *			    int blocks)
>  	 */
>  SYM_FUNC_START(__sha1_ce_transform)
>  	/* load round constants */
> @@ -121,7 +121,6 @@ CPU_LE(	rev32		v11.16b, v11.16b	)
>  	add		dgav.4s, dgav.4s, dg0v.4s
>  
>  	cbz		w2, 2f
> -	cond_yield	3f, x5, x6
>  	b		0b
>  
>  	/*
> @@ -145,6 +144,5 @@ CPU_LE(	rev32		v11.16b, v11.16b	)
>  	/* store new state */
>  3:	st1		{dgav.4s}, [x0]
>  	str		dgb, [x0, #16]
> -	mov		w0, w2
>  	ret
>  SYM_FUNC_END(__sha1_ce_transform)
> diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c
> index 1dd93e1fcb39..c1c5c5cb104b 100644
> --- a/arch/arm64/crypto/sha1-ce-glue.c
> +++ b/arch/arm64/crypto/sha1-ce-glue.c
> @@ -29,23 +29,16 @@ struct sha1_ce_state {
>  extern const u32 sha1_ce_offsetof_count;
>  extern const u32 sha1_ce_offsetof_finalize;
>  
> -asmlinkage int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
> -				   int blocks);
> +asmlinkage void __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
> +				    int blocks);
>  
>  static void sha1_ce_transform(struct sha1_state *sst, u8 const *src,
>  			      int blocks)
>  {
> -	while (blocks) {
> -		int rem;
> -
> -		kernel_neon_begin();
> -		rem = __sha1_ce_transform(container_of(sst,
> -						       struct sha1_ce_state,
> -						       sst), src, blocks);
> -		kernel_neon_end();
> -		src += (blocks - rem) * SHA1_BLOCK_SIZE;
> -		blocks = rem;
> -	}
> +	kernel_neon_begin();
> +	__sha1_ce_transform(container_of(sst, struct sha1_ce_state, sst), src,
> +			    blocks);
> +	kernel_neon_end();
>  }
>  
>  const u32 sha1_ce_offsetof_count = offsetof(struct sha1_ce_state, sst.count);
> diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
> index fce84d88ddb2..112d772b29db 100644
> --- a/arch/arm64/crypto/sha2-ce-core.S
> +++ b/arch/arm64/crypto/sha2-ce-core.S
> @@ -71,8 +71,8 @@
>  	.word		0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
>  
>  	/*
> -	 * int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
> -	 *			     int blocks)
> +	 * void __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
> +	 *			      int blocks)
>  	 */
>  	.text
>  SYM_FUNC_START(__sha256_ce_transform)
> @@ -129,7 +129,6 @@ CPU_LE(	rev32		v19.16b, v19.16b	)
>  
>  	/* handled all input blocks? */
>  	cbz		w2, 2f
> -	cond_yield	3f, x5, x6
>  	b		0b
>  
>  	/*
> @@ -152,6 +151,5 @@ CPU_LE(	rev32		v19.16b, v19.16b	)
>  
>  	/* store new state */
>  3:	st1		{dgav.4s, dgbv.4s}, [x0]
> -	mov		w0, w2
>  	ret
>  SYM_FUNC_END(__sha256_ce_transform)
> diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
> index 0a44d2e7ee1f..f785a66a1de4 100644
> --- a/arch/arm64/crypto/sha2-ce-glue.c
> +++ b/arch/arm64/crypto/sha2-ce-glue.c
> @@ -30,23 +30,16 @@ struct sha256_ce_state {
>  extern const u32 sha256_ce_offsetof_count;
>  extern const u32 sha256_ce_offsetof_finalize;
>  
> -asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
> -				     int blocks);
> +asmlinkage void __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
> +				      int blocks);
>  
>  static void sha256_ce_transform(struct sha256_state *sst, u8 const *src,
>  				int blocks)
>  {
> -	while (blocks) {
> -		int rem;
> -
> -		kernel_neon_begin();
> -		rem = __sha256_ce_transform(container_of(sst,
> -							 struct sha256_ce_state,
> -							 sst), src, blocks);
> -		kernel_neon_end();
> -		src += (blocks - rem) * SHA256_BLOCK_SIZE;
> -		blocks = rem;
> -	}
> +	kernel_neon_begin();
> +	__sha256_ce_transform(container_of(sst, struct sha256_ce_state, sst),
> +			      src, blocks);
> +	kernel_neon_end();
>  }
>  
>  const u32 sha256_ce_offsetof_count = offsetof(struct sha256_ce_state,
> diff --git a/arch/arm64/crypto/sha3-ce-core.S b/arch/arm64/crypto/sha3-ce-core.S
> index 9c77313f5a60..db64831ad35d 100644
> --- a/arch/arm64/crypto/sha3-ce-core.S
> +++ b/arch/arm64/crypto/sha3-ce-core.S
> @@ -37,7 +37,7 @@
>  	.endm
>  
>  	/*
> -	 * int sha3_ce_transform(u64 *st, const u8 *data, int blocks, int dg_size)
> +	 * void sha3_ce_transform(u64 *st, const u8 *data, int blocks, int dg_size)
>  	 */
>  	.text
>  SYM_FUNC_START(sha3_ce_transform)
> @@ -184,18 +184,16 @@ SYM_FUNC_START(sha3_ce_transform)
>  	eor	 v0.16b,  v0.16b, v31.16b
>  
>  	cbnz	w8, 3b
> -	cond_yield 4f, x8, x9
>  	cbnz	w2, 0b
>  
>  	/* save state */
> -4:	st1	{ v0.1d- v3.1d}, [x0], #32
> +	st1	{ v0.1d- v3.1d}, [x0], #32
>  	st1	{ v4.1d- v7.1d}, [x0], #32
>  	st1	{ v8.1d-v11.1d}, [x0], #32
>  	st1	{v12.1d-v15.1d}, [x0], #32
>  	st1	{v16.1d-v19.1d}, [x0], #32
>  	st1	{v20.1d-v23.1d}, [x0], #32
>  	st1	{v24.1d}, [x0]
> -	mov	w0, w2
>  	ret
>  SYM_FUNC_END(sha3_ce_transform)
>  
> diff --git a/arch/arm64/crypto/sha3-ce-glue.c b/arch/arm64/crypto/sha3-ce-glue.c
> index 250e1377c481..d689cd2bf4cf 100644
> --- a/arch/arm64/crypto/sha3-ce-glue.c
> +++ b/arch/arm64/crypto/sha3-ce-glue.c
> @@ -28,8 +28,8 @@ MODULE_ALIAS_CRYPTO("sha3-256");
>  MODULE_ALIAS_CRYPTO("sha3-384");
>  MODULE_ALIAS_CRYPTO("sha3-512");
>  
> -asmlinkage int sha3_ce_transform(u64 *st, const u8 *data, int blocks,
> -				 int md_len);
> +asmlinkage void sha3_ce_transform(u64 *st, const u8 *data, int blocks,
> +				  int md_len);
>  
>  static int sha3_update(struct shash_desc *desc, const u8 *data,
>  		       unsigned int len)
> @@ -59,15 +59,11 @@ static int sha3_update(struct shash_desc *desc, const u8 *data,
>  		blocks = len / sctx->rsiz;
>  		len %= sctx->rsiz;
>  
> -		while (blocks) {
> -			int rem;
> -
> +		if (blocks) {
>  			kernel_neon_begin();
> -			rem = sha3_ce_transform(sctx->st, data, blocks,
> -						digest_size);
> +			sha3_ce_transform(sctx->st, data, blocks, digest_size);
>  			kernel_neon_end();
> -			data += (blocks - rem) * sctx->rsiz;
> -			blocks = rem;
> +			data += blocks * sctx->rsiz;
>  		}
>  	}
>  
> diff --git a/arch/arm64/crypto/sha512-ce-core.S b/arch/arm64/crypto/sha512-ce-core.S
> index 91ef68b15fcc..96acc9295230 100644
> --- a/arch/arm64/crypto/sha512-ce-core.S
> +++ b/arch/arm64/crypto/sha512-ce-core.S
> @@ -102,8 +102,8 @@
>  	.endm
>  
>  	/*
> -	 * int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
> -	 *			     int blocks)
> +	 * void __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
> +	 *			      int blocks)
>  	 */
>  	.text
>  SYM_FUNC_START(__sha512_ce_transform)
> @@ -195,12 +195,10 @@ CPU_LE(	rev64		v19.16b, v19.16b	)
>  	add		v10.2d, v10.2d, v2.2d
>  	add		v11.2d, v11.2d, v3.2d
>  
> -	cond_yield	3f, x4, x5
>  	/* handled all input blocks? */
>  	cbnz		w2, 0b
>  
>  	/* store new state */
> -3:	st1		{v8.2d-v11.2d}, [x0]
> -	mov		w0, w2
> +	st1		{v8.2d-v11.2d}, [x0]
>  	ret
>  SYM_FUNC_END(__sha512_ce_transform)
> diff --git a/arch/arm64/crypto/sha512-ce-glue.c b/arch/arm64/crypto/sha512-ce-glue.c
> index f3431fc62315..70eef74fe031 100644
> --- a/arch/arm64/crypto/sha512-ce-glue.c
> +++ b/arch/arm64/crypto/sha512-ce-glue.c
> @@ -26,23 +26,17 @@ MODULE_LICENSE("GPL v2");
>  MODULE_ALIAS_CRYPTO("sha384");
>  MODULE_ALIAS_CRYPTO("sha512");
>  
> -asmlinkage int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
> -				     int blocks);
> +asmlinkage void __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
> +				      int blocks);
>  
>  asmlinkage void sha512_block_data_order(u64 *digest, u8 const *src, int blocks);
>  
>  static void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
>  				int blocks)
>  {
> -	while (blocks) {
> -		int rem;
> -
> -		kernel_neon_begin();
> -		rem = __sha512_ce_transform(sst, src, blocks);
> -		kernel_neon_end();
> -		src += (blocks - rem) * SHA512_BLOCK_SIZE;
> -		blocks = rem;
> -	}
> +	kernel_neon_begin();
> +	__sha512_ce_transform(sst, src, blocks);
> +	kernel_neon_end();
>  }
>  
>  static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 376a980f2bad..f0da53a0388f 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -759,35 +759,6 @@ alternative_endif
>  	set_sctlr sctlr_el2, \reg
>  .endm
>  
> -	/*
> -	 * Check whether preempt/bh-disabled asm code should yield as soon as
> -	 * it is able. This is the case if we are currently running in task
> -	 * context, and either a softirq is pending, or the TIF_NEED_RESCHED
> -	 * flag is set and re-enabling preemption a single time would result in
> -	 * a preempt count of zero. (Note that the TIF_NEED_RESCHED flag is
> -	 * stored negated in the top word of the thread_info::preempt_count
> -	 * field)
> -	 */
> -	.macro		cond_yield, lbl:req, tmp:req, tmp2:req
> -	get_current_task \tmp
> -	ldr		\tmp, [\tmp, #TSK_TI_PREEMPT]
> -	/*
> -	 * If we are serving a softirq, there is no point in yielding: the
> -	 * softirq will not be preempted no matter what we do, so we should
> -	 * run to completion as quickly as we can.
> -	 */
> -	tbnz		\tmp, #SOFTIRQ_SHIFT, .Lnoyield_\@
> -#ifdef CONFIG_PREEMPTION
> -	sub		\tmp, \tmp, #PREEMPT_DISABLE_OFFSET
> -	cbz		\tmp, \lbl
> -#endif
> -	adr_l		\tmp, irq_stat + IRQ_CPUSTAT_SOFTIRQ_PENDING
> -	get_this_cpu_offset	\tmp2
> -	ldr		w\tmp, [\tmp, \tmp2]
> -	cbnz		w\tmp, \lbl	// yield on pending softirq in task context
> -.Lnoyield_\@:
> -	.endm
> -
>  /*
>   * Branch Target Identifier (BTI)
>   */
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 5ff1942b04fc..fb9e9ef9b527 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -116,10 +116,6 @@ int main(void)
>    DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
>    DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
>    BLANK();
> -  DEFINE(PREEMPT_DISABLE_OFFSET, PREEMPT_DISABLE_OFFSET);
> -  DEFINE(SOFTIRQ_SHIFT, SOFTIRQ_SHIFT);
> -  DEFINE(IRQ_CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t, __softirq_pending));
> -  BLANK();
>    DEFINE(CPU_BOOT_TASK,		offsetof(struct secondary_data, task));
>    BLANK();
>    DEFINE(FTR_OVR_VAL_OFFSET,	offsetof(struct arm64_ftr_override, val));
> -- 
> 2.43.0.rc1.413.gea7ed67945-goog
> 


* Re: [PATCH v3 4/5] arm64: crypto: Remove conditional yield logic
  2023-11-27 13:46     ` Mark Rutland
@ 2023-11-27 15:17       ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2023-11-27 15:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Ard Biesheuvel, linux-arm-kernel, linux-crypto, Marc Zyngier,
	Will Deacon, Kees Cook, Catalin Marinas, Mark Brown,
	Eric Biggers, Sebastian Andrzej Siewior

On Mon, 27 Nov 2023 at 14:46, Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Mon, Nov 27, 2023 at 01:23:04PM +0100, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > Some classes of crypto algorithms (such as skciphers or aeads) have
> > natural yield points, but SIMD based shashes yield the NEON unit
> > manually to avoid causing scheduling blackouts when operating on large
> > inputs.
> >
> > This is no longer necessary now that kernel mode NEON runs with
> > preemption enabled, so remove this logic from the crypto assembler code,
> > along with the macro that implements the TIF_NEED_RESCHED check.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>
> I definitely want to get rid of all of the voluntary preemption points, but
> IIUC for the moment we need to keep these for PREEMPT_NONE and
> PREEMPT_VOLUNTARY (and consequently for PREEMPT_DYNAMIC). Once the preemption
> rework lands, these should no longer be necessary and can be removed:
>
>   https://lore.kernel.org/lkml/20231107215742.363031-1-ankur.a.arora@oracle.com/
>

Oh, right - yeah, good point.

So until that lands, we could at least simplify cond_yield and go back
to the original logic, given that yielding to a pending softirq will
no longer be necessary. (The original logic does not deal with
softirqs specifically, but relies on the preempt_count() not being
equal to PREEMPT_DISABLE_OFFSET to avoid yielding in softirq context
unnecessarily.)

This also means that only PREEMPT_VOLUNTARY is implicated here -
PREEMPT_NONE only has the yield-to-pending-softirq behavior at the moment.
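
Purely as a sketch, the pre-softirq-yield check could be rendered in C
roughly as below. This is illustrative only: cond_yield is really an asm
macro (quoted in the removed hunk above), and the helper name here is made
up.

	/*
	 * Illustrative sketch. The asm macro's single load-and-subtract on
	 * thread_info::preempt_count is equivalent to the two checks below,
	 * because TIF_NEED_RESCHED is stored negated in the top word of
	 * that field.
	 */
	static inline bool sketch_should_yield(void)
	{
		return IS_ENABLED(CONFIG_PREEMPTION) &&
		       need_resched() &&
		       preempt_count() == PREEMPT_DISABLE_OFFSET;
	}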


end of thread

Thread overview: 20+ messages
2023-11-27 12:23 [PATCH v3 0/5] arm64: Run kernel mode NEON with preemption enabled Ard Biesheuvel
2023-11-27 12:23 ` [PATCH v3 1/5] arm64: fpsimd: Drop unneeded 'busy' flag Ard Biesheuvel
2023-11-27 12:23 ` [PATCH v3 2/5] arm64: fpsimd: Preserve/restore kernel mode NEON at context switch Ard Biesheuvel
2023-11-27 13:09   ` Mark Rutland
2023-11-27 12:23 ` [PATCH v3 3/5] arm64: fpsimd: Implement lazy restore for kernel mode FPSIMD Ard Biesheuvel
2023-11-27 13:32   ` Mark Rutland
2023-11-27 12:23 ` [PATCH v3 4/5] arm64: crypto: Remove conditional yield logic Ard Biesheuvel
2023-11-27 13:46   ` Mark Rutland
2023-11-27 15:17     ` Ard Biesheuvel
2023-11-27 12:23 ` [PATCH v3 5/5] arm64: crypto: Remove FPSIMD yield logic from glue code Ard Biesheuvel
