* [PATCH 0/3] crypto: yield at end of operations
@ 2022-12-19 20:37 Robert Elliott
  2022-12-19 20:37 ` [PATCH 1/3] crypto: skcipher - always yield at end of walk Robert Elliott
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Robert Elliott @ 2022-12-19 20:37 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, linux-kernel, Robert Elliott

Call crypto_yield() consistently in the skcipher, aead, and shash
helper functions so even generic drivers don't hog the CPU and
cause RCU stall warnings and soft lockups.

Add cond_resched() in tcrypt's do_test() so back-to-back tests yield
as well.
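
For reference, crypto_yield() is a thin wrapper around cond_resched();
roughly (as defined in include/crypto/algapi.h at the time of this
series, shown here only for context):

	static inline void crypto_yield(u32 flags)
	{
		if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
			cond_resched();
	}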

Robert Elliott (3):
  crypto: skcipher - always yield at end of walk
  crypto: aead/shash - yield at end of operations
  crypto: tcrypt - yield at end of test

 crypto/aead.c     |  4 ++++
 crypto/shash.c    | 32 ++++++++++++++++++++++++--------
 crypto/skcipher.c | 15 +++++++++++----
 crypto/tcrypt.c   |  1 +
 4 files changed, 40 insertions(+), 12 deletions(-)

-- 
2.38.1



* [PATCH 1/3] crypto: skcipher - always yield at end of walk
  2022-12-19 20:37 [PATCH 0/3] crypto: yield at end of operations Robert Elliott
@ 2022-12-19 20:37 ` Robert Elliott
  2022-12-20  3:54   ` Herbert Xu
  2022-12-19 20:37 ` [PATCH 2/3] crypto: aead/shash - yield at end of operations Robert Elliott
  2022-12-19 20:37 ` [PATCH 3/3] crypto: tcrypt - yield at end of test Robert Elliott
  2 siblings, 1 reply; 7+ messages in thread
From: Robert Elliott @ 2022-12-19 20:37 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, linux-kernel, Robert Elliott

Always yield to the scheduler at the end of skcipher_walk_done(),
not just if additional bytes are left to be processed.

This avoids soft lockups when drivers are invoked back-to-back to
process data that is an integer multiple of their block size: in that
case, the final skcipher_walk_done() call sees nbytes == 0, so the
existing conditional crypto_yield() never fires.

Example: while processing 1 MiB buffers, multiple skciphers run
from 192 s to 218 s without ever yielding to the scheduler,
causing three soft lockup complaints.

The kernel is configured for CONFIG_PREEMPT_NONE=y (or
preempt=none on the kernel command line), so only explicit
cond_resched() calls trigger scheduling - might_resched() and
preempt_enable() do not (see kernel/sched/core.c).
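
For context, the common driver pattern looks roughly like the sketch
below (illustrative only; example_walk_crypt() is not a real driver):
skcipher_walk_done() is the only scheduling point in such a loop, so it
matters that it yields even on the final pass when nbytes reaches zero.

	#include <crypto/internal/skcipher.h>

	/*
	 * Illustrative fragment of the usual skcipher walk loop; the
	 * only place this loop can yield the CPU is inside
	 * skcipher_walk_done().
	 */
	static int example_walk_crypt(struct skcipher_request *req)
	{
		struct skcipher_walk walk;
		int err;

		err = skcipher_walk_virt(&walk, req, false);
		while (walk.nbytes) {
			/* process walk.nbytes bytes:
			 * walk.src.virt.addr -> walk.dst.virt.addr
			 */

			/* 0 = everything in this step was consumed */
			err = skcipher_walk_done(&walk, 0);
		}
		return err;
	}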

[  190.865601] tcrypt: PERL my %speeds_skcipher = (
[  192.041157] tcrypt: PERL "cbc-aes-aesni" => 2396490,
[  192.373934] tcrypt: PERL "ctr-aes-aesni" => 574888,
[  193.548967] tcrypt: PERL "cts-cbc-aes-aesni" => 2396018,
[  193.909077] tcrypt: PERL "ecb-aes-aesni" => 631824,
[  194.223801] tcrypt: PERL "xctr-aes-aesni" => 535778,
[  194.608548] tcrypt: PERL "xts-aes-aesni" => 676518,
[  196.440247] tcrypt: PERL "ctr-aria-avx" => 3804224,
[  196.675788] tcrypt: PERL "xchacha12-simd" => 368668,
[  196.988868] tcrypt: PERL "xchacha20-simd" => 535314,
[  197.301510] tcrypt: PERL "chacha20-simd" => 535142,
[  198.590113] tcrypt: PERL "ctr-sm4-aesni-avx2" => 2642930,
[  208.975253] tcrypt: PERL "cfb-sm4-aesni-avx2" => 22499840,
[  218.187217] watchdog: BUG: soft lockup - CPU#1 stuck for 26s!  [modprobe:3433]
[  246.181238] Modules linked in: tcrypt(+) hctr2 essiv adiantum
...
[  246.185048] RIP: 0010:measure_skcipher_bigbuf.constprop.0.isra.0+0x282/0x393 [tcrypt]
[  246.185304] Code: 00 0f 31 ...
...
[  218.197313] Call Trace:
[  218.197567]  <TASK>
[  218.197822]  ? 0xffffffffc052a000
[  218.198079]  do_test.cold+0x989/0xc7a [tcrypt]
[  218.198340]  ? 0xffffffffc052a000
[  218.198595]  tcrypt_mod_init+0x50/0x1000 [tcrypt]
[  218.198857]  ? 0xffffffffc052a000
[  218.199112]  do_one_initcall+0x41/0x200
...
[  219.391776] tcrypt: PERL "cbc-sm4-aesni-avx2" => 22528138,
[  221.560847] tcrypt: PERL "ctr-sm4-aesni-avx" => 4560732,
[  231.960414] tcrypt: PERL "cfb-sm4-aesni-avx" => 22498380,
[  242.350070] tcrypt: PERL "cbc-sm4-aesni-avx" => 22527668,
[  244.471181] tcrypt: PERL "ecb-sm4-aesni-avx" => 4469626,
...
[  246.181064] watchdog: BUG: soft lockup - CPU#1 stuck for 52s!  [modprobe:3433]
...
[  250.168239] tcrypt: PERL "cbc-camellia-aesni-avx2" => 12202738,
[  255.840094] tcrypt: PERL "cbc-camellia-aesni" => 12203096,
[  264.047440] tcrypt: PERL "cbc-cast5-avx" => 17744280,
[  273.091258] tcrypt: PERL "cbc-cast6-avx" => 19375400,
[  274.183249] watchdog: BUG: soft lockup - CPU#1 stuck for 78s!  [modprobe:3433]
...
[  283.066260] tcrypt: PERL "cbc-serpent-avx2" => 21454930,
[  292.983848] tcrypt: PERL "cbc-serpent-avx" => 21452996,
...

Adding an unconditional call to crypto_yield(), which calls
cond_resched() and lets the scheduler give the CPU to another thread,
eliminates those soft lockups.

With the fix applied, a run starting at about 2217 s reports no soft
lockup, even around 2244 s when the long-running cfb-sm4 test completes:
[ 2217.202692] tcrypt: PERL my %speeds_skcipher = (
[ 2218.450215] tcrypt: PERL            "cbc-aes-aesni" =>  2179138,
[ 2218.950960] tcrypt: PERL            "cbc-aes-aesni" =>   538738,
[ 2219.460618] tcrypt: PERL            "ctr-aes-aesni" =>   575212,
[ 2219.983006] tcrypt: PERL            "ctr-aes-aesni" =>   574402,
[ 2221.329550] tcrypt: PERL        "cts-cbc-aes-aesni" =>  2182864,
[ 2221.840599] tcrypt: PERL        "cts-cbc-aes-aesni" =>   539064,
[ 2222.344290] tcrypt: PERL            "ecb-aes-aesni" =>   537402,
[ 2222.869201] tcrypt: PERL            "ecb-aes-aesni" =>   537730,
[ 2223.400315] tcrypt: PERL           "xctr-aes-aesni" =>   534824,
[ 2223.897915] tcrypt: PERL           "xctr-aes-aesni" =>   534782,
[ 2224.414956] tcrypt: PERL            "xts-aes-aesni" =>   539592,
[ 2224.923715] tcrypt: PERL            "xts-aes-aesni" =>   539356,
[ 2226.740211] tcrypt: PERL             "ctr-aria-avx" =>  3392444,
[ 2228.545624] tcrypt: PERL             "ctr-aria-avx" =>  3392068,
[ 2228.869883] tcrypt: PERL           "xchacha12-simd" =>   368932,
[ 2229.204980] tcrypt: PERL           "xchacha12-simd" =>   374122,
[ 2229.609975] tcrypt: PERL           "xchacha20-simd" =>   535596,
[ 2230.022425] tcrypt: PERL           "xchacha20-simd" =>   537500,
[ 2230.429674] tcrypt: PERL            "chacha20-simd" =>   535474,
[ 2230.831041] tcrypt: PERL            "chacha20-simd" =>   534264,
[ 2232.278150] tcrypt: PERL       "ctr-sm4-aesni-avx2" =>  2640770,
[ 2233.744781] tcrypt: PERL       "ctr-sm4-aesni-avx2" =>  2642520,
[ 2244.290542] tcrypt: PERL       "cfb-sm4-aesni-avx2" => 22497308,
[ 2245.725044] tcrypt: PERL       "cfb-sm4-aesni-avx2" =>  2604468,
[ 2256.279228] tcrypt: PERL       "cbc-sm4-aesni-avx2" => 22526084,
[ 2257.729868] tcrypt: PERL       "cbc-sm4-aesni-avx2" =>  2600460,
[ 2260.068782] tcrypt: PERL        "ctr-sm4-aesni-avx" =>  4560650,
[ 2262.414663] tcrypt: PERL        "ctr-sm4-aesni-avx" =>  4561468,
[ 2272.943000] tcrypt: PERL        "cfb-sm4-aesni-avx" => 22496026,
[ 2275.233755] tcrypt: PERL        "cfb-sm4-aesni-avx" =>  4456984,
[ 2285.779516] tcrypt: PERL        "cbc-sm4-aesni-avx" => 22525908,
[ 2288.081160] tcrypt: PERL        "cbc-sm4-aesni-avx" =>  4457036,
[ 2290.374086] tcrypt: PERL        "ecb-sm4-aesni-avx" =>  4465790,
[ 2292.677381] tcrypt: PERL        "ecb-sm4-aesni-avx" =>  4466014,
[ 2298.544718] tcrypt: PERL  "cbc-camellia-aesni-avx2" => 12246268,
[ 2299.869611] tcrypt: PERL  "cbc-camellia-aesni-avx2" =>  2349440,
[ 2305.734078] tcrypt: PERL       "cbc-camellia-aesni" => 12246930,
[ 2307.746065] tcrypt: PERL       "cbc-camellia-aesni" =>  3832992,
[ 2316.127414] tcrypt: PERL            "cbc-cast5-avx" => 17737348,
[ 2318.703437] tcrypt: PERL            "cbc-cast5-avx" =>  5061014,
[ 2327.694881] tcrypt: PERL            "cbc-cast6-avx" => 19065488,
[ 2331.672188] tcrypt: PERL            "cbc-cast6-avx" =>  8145590,
[ 2341.750274] tcrypt: PERL         "cbc-serpent-avx2" => 21453172,
[ 2343.209420] tcrypt: PERL         "cbc-serpent-avx2" =>  2611702,

Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Signed-off-by: Robert Elliott <elliott@hpe.com>
---
 crypto/skcipher.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 0ecab31cfe79..cdead632117a 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -153,13 +153,20 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
 	scatterwalk_done(&walk->in, 0, nbytes);
 	scatterwalk_done(&walk->out, 1, nbytes);
 
-	if (nbytes) {
-		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
-			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+	/*
+	 * Allow scheduler to use the CPU since it has been busy,
+	 * regardless of whether another loop pass is due
+	 */
+	crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
+		     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+
+	if (nbytes)
 		return skcipher_walk_next(walk);
-	}
 
 finish:
+	crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
+		     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+
 	/* Short-circuit for the common/fast path. */
 	if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
 		goto out;
-- 
2.38.1



* [PATCH 2/3] crypto: aead/shash - yield at end of operations
  2022-12-19 20:37 [PATCH 0/3] crypto: yield at end of operations Robert Elliott
  2022-12-19 20:37 ` [PATCH 1/3] crypto: skcipher - always yield at end of walk Robert Elliott
@ 2022-12-19 20:37 ` Robert Elliott
  2022-12-20  3:55   ` Herbert Xu
  2022-12-19 20:37 ` [PATCH 3/3] crypto: tcrypt - yield at end of test Robert Elliott
  2 siblings, 1 reply; 7+ messages in thread
From: Robert Elliott @ 2022-12-19 20:37 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, linux-kernel, Robert Elliott

Add crypto_yield() calls at the end of all the encrypt and decrypt
functions to let the scheduler use the CPU after possibly a long
tenure by the crypto driver.

This reduces RCU stalls and soft lockups when running crypto
functions back-to-back that don't have their own yield calls
(e.g., aligned generic functions).
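
Note that crypto_yield() only calls cond_resched() when
CRYPTO_TFM_REQ_MAY_SLEEP is set in the flags passed to it, so the new
calls take effect for transforms marked as allowed to sleep; an
illustrative example using the existing shash API:

	/* illustrative: mark the tfm so the added crypto_yield() calls
	 * can actually reschedule
	 */
	crypto_shash_set_flags(tfm, CRYPTO_TFM_REQ_MAY_SLEEP);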

Signed-off-by: Robert Elliott <elliott@hpe.com>
---
 crypto/aead.c  |  4 ++++
 crypto/shash.c | 32 ++++++++++++++++++++++++--------
 2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/crypto/aead.c b/crypto/aead.c
index 16991095270d..f88378f4d4f5 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -93,6 +93,8 @@ int crypto_aead_encrypt(struct aead_request *req)
 	else
 		ret = crypto_aead_alg(aead)->encrypt(req);
 	crypto_stats_aead_encrypt(cryptlen, alg, ret);
+
+	crypto_yield(crypto_aead_get_flags(aead));
 	return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_aead_encrypt);
@@ -112,6 +114,8 @@ int crypto_aead_decrypt(struct aead_request *req)
 	else
 		ret = crypto_aead_alg(aead)->decrypt(req);
 	crypto_stats_aead_decrypt(cryptlen, alg, ret);
+
+	crypto_yield(crypto_aead_get_flags(aead));
 	return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_aead_decrypt);
diff --git a/crypto/shash.c b/crypto/shash.c
index 868b6ba2b3b7..6fea17a50048 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -114,11 +114,15 @@ int crypto_shash_update(struct shash_desc *desc, const u8 *data,
 	struct crypto_shash *tfm = desc->tfm;
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
+	int ret;
 
 	if ((unsigned long)data & alignmask)
-		return shash_update_unaligned(desc, data, len);
+		ret = shash_update_unaligned(desc, data, len);
+	else
+		ret = shash->update(desc, data, len);
 
-	return shash->update(desc, data, len);
+	crypto_yield(crypto_shash_get_flags(tfm));
+	return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_shash_update);
 
@@ -155,11 +159,15 @@ int crypto_shash_final(struct shash_desc *desc, u8 *out)
 	struct crypto_shash *tfm = desc->tfm;
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
+	int ret;
 
 	if ((unsigned long)out & alignmask)
-		return shash_final_unaligned(desc, out);
+		ret = shash_final_unaligned(desc, out);
+	else
+		ret = shash->final(desc, out);
 
-	return shash->final(desc, out);
+	crypto_yield(crypto_shash_get_flags(tfm));
+	return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_shash_final);
 
@@ -176,11 +184,15 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
 	struct crypto_shash *tfm = desc->tfm;
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
+	int ret;
 
 	if (((unsigned long)data | (unsigned long)out) & alignmask)
-		return shash_finup_unaligned(desc, data, len, out);
+		ret = shash_finup_unaligned(desc, data, len, out);
+	else
+		ret = shash->finup(desc, data, len, out);
 
-	return shash->finup(desc, data, len, out);
+	crypto_yield(crypto_shash_get_flags(tfm));
+	return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_shash_finup);
 
@@ -197,14 +209,18 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
 	struct crypto_shash *tfm = desc->tfm;
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
+	int ret;
 
 	if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
 		return -ENOKEY;
 
 	if (((unsigned long)data | (unsigned long)out) & alignmask)
-		return shash_digest_unaligned(desc, data, len, out);
+		ret = shash_digest_unaligned(desc, data, len, out);
+	else
+		ret = shash->digest(desc, data, len, out);
 
-	return shash->digest(desc, data, len, out);
+	crypto_yield(crypto_shash_get_flags(tfm));
+	return ret;
 }
 EXPORT_SYMBOL_GPL(crypto_shash_digest);
 
-- 
2.38.1



* [PATCH 3/3] crypto: tcrypt - yield at end of test
  2022-12-19 20:37 [PATCH 0/3] crypto: yield at end of operations Robert Elliott
  2022-12-19 20:37 ` [PATCH 1/3] crypto: skcipher - always yield at end of walk Robert Elliott
  2022-12-19 20:37 ` [PATCH 2/3] crypto: aead/shash - yield at end of operations Robert Elliott
@ 2022-12-19 20:37 ` Robert Elliott
  2022-12-20  3:55   ` Herbert Xu
  2 siblings, 1 reply; 7+ messages in thread
From: Robert Elliott @ 2022-12-19 20:37 UTC (permalink / raw)
  To: herbert, davem; +Cc: linux-crypto, linux-kernel, Robert Elliott

Call cond_resched() to let the scheduler reschedule the
CPU at the end of each test pass.

If the kernel is configured with CONFIG_PREEMPT_NONE=y (or
preempt=none is used on the kernel command line), the only
time the scheduler will intervene is when cond_resched()
is called. So, repeated calls to
	modprobe tcrypt mode=<something>

hold the CPU for a long time.

Signed-off-by: Robert Elliott <elliott@hpe.com>
---
 crypto/tcrypt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 3e9e4adeef02..916bddbf4e75 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -3027,6 +3027,7 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 
 	}
 
+	cond_resched();
 	return ret;
 }
 
-- 
2.38.1



* Re: [PATCH 1/3] crypto: skcipher - always yield at end of walk
  2022-12-19 20:37 ` [PATCH 1/3] crypto: skcipher - always yield at end of walk Robert Elliott
@ 2022-12-20  3:54   ` Herbert Xu
  0 siblings, 0 replies; 7+ messages in thread
From: Herbert Xu @ 2022-12-20  3:54 UTC (permalink / raw)
  To: Robert Elliott; +Cc: davem, linux-crypto, linux-kernel

On Mon, Dec 19, 2022 at 02:37:31PM -0600, Robert Elliott wrote:
>
> diff --git a/crypto/skcipher.c b/crypto/skcipher.c
> index 0ecab31cfe79..cdead632117a 100644
> --- a/crypto/skcipher.c
> +++ b/crypto/skcipher.c
> @@ -153,13 +153,20 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
>  	scatterwalk_done(&walk->in, 0, nbytes);
>  	scatterwalk_done(&walk->out, 1, nbytes);
>  
> -	if (nbytes) {
> -		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
> -			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
> +	/*
> +	 * Allow scheduler to use the CPU since it has been busy,
> +	 * regardless of whether another loop pass is due
> +	 */
> +	crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
> +		     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
> +
> +	if (nbytes)
>  		return skcipher_walk_next(walk);
> -	}
>  
>  finish:
> +	crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
> +		     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
> +

You're calling crypto_yield twice if nbytes == 0.  How about
deleting the second crypto_yield call, since the only case where it
would matter is when nbytes == 0?
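
I.e., roughly this shape, reading "second" as the call at the finish
label (rest of the function unchanged):

	crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
		     CRYPTO_TFM_REQ_MAY_SLEEP : 0);

	if (nbytes)
		return skcipher_walk_next(walk);

finish:
	/* no second crypto_yield() here */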

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 2/3] crypto: aead/shash - yield at end of operations
  2022-12-19 20:37 ` [PATCH 2/3] crypto: aead/shash - yield at end of operations Robert Elliott
@ 2022-12-20  3:55   ` Herbert Xu
  0 siblings, 0 replies; 7+ messages in thread
From: Herbert Xu @ 2022-12-20  3:55 UTC (permalink / raw)
  To: Robert Elliott; +Cc: davem, linux-crypto, linux-kernel

On Mon, Dec 19, 2022 at 02:37:32PM -0600, Robert Elliott wrote:
> Add crypto_yield() calls at the end of all the encrypt and decrypt
> functions to let the scheduler use the CPU after possibly a long
> tenure by the crypto driver.
> 
> This reduces RCU stalls and soft lockups when running crypto
> functions back-to-back that don't have their own yield calls
> (e.g., aligned generic functions).
> 
> Signed-off-by: Robert Elliott <elliott@hpe.com>
> ---
>  crypto/aead.c  |  4 ++++
>  crypto/shash.c | 32 ++++++++++++++++++++++++--------
>  2 files changed, 28 insertions(+), 8 deletions(-)
> 
> diff --git a/crypto/aead.c b/crypto/aead.c
> index 16991095270d..f88378f4d4f5 100644
> --- a/crypto/aead.c
> +++ b/crypto/aead.c
> @@ -93,6 +93,8 @@ int crypto_aead_encrypt(struct aead_request *req)
>  	else
>  		ret = crypto_aead_alg(aead)->encrypt(req);
>  	crypto_stats_aead_encrypt(cryptlen, alg, ret);
> +
> +	crypto_yield(crypto_aead_get_flags(aead));

This is the wrong place to do it.  It should be done by the code
that's actually doing the work, just like skcipher.
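
For example, a generic implementation could yield for itself between
chunks, roughly like this hypothetical update function (the helper and
chunk size are made up for the sketch):

	/* hypothetical generic update that yields between chunks instead
	 * of relying on the crypto_shash_update() wrapper
	 */
	static int example_generic_update(struct shash_desc *desc,
					  const u8 *data, unsigned int len)
	{
		while (len) {
			unsigned int n = min_t(unsigned int, len, PAGE_SIZE);

			example_process(desc, data, n);	/* made-up helper */
			data += n;
			len -= n;
			crypto_yield(crypto_shash_get_flags(desc->tfm));
		}
		return 0;
	}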

> diff --git a/crypto/shash.c b/crypto/shash.c
> index 868b6ba2b3b7..6fea17a50048 100644
> --- a/crypto/shash.c
> +++ b/crypto/shash.c
> @@ -114,11 +114,15 @@ int crypto_shash_update(struct shash_desc *desc, const u8 *data,
>  	struct crypto_shash *tfm = desc->tfm;
>  	struct shash_alg *shash = crypto_shash_alg(tfm);
>  	unsigned long alignmask = crypto_shash_alignmask(tfm);
> +	int ret;
>  
>  	if ((unsigned long)data & alignmask)
> -		return shash_update_unaligned(desc, data, len);
> +		ret = shash_update_unaligned(desc, data, len);
> +	else
> +		ret = shash->update(desc, data, len);
>  
> -	return shash->update(desc, data, len);
> +	crypto_yield(crypto_shash_get_flags(tfm));
> +	return ret;
>  }
>  EXPORT_SYMBOL_GPL(crypto_shash_update);

Ditto.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 3/3] crypto: tcrypt - yield at end of test
  2022-12-19 20:37 ` [PATCH 3/3] crypto: tcrypt - yield at end of test Robert Elliott
@ 2022-12-20  3:55   ` Herbert Xu
  0 siblings, 0 replies; 7+ messages in thread
From: Herbert Xu @ 2022-12-20  3:55 UTC (permalink / raw)
  To: Robert Elliott; +Cc: davem, linux-crypto, linux-kernel

On Mon, Dec 19, 2022 at 02:37:33PM -0600, Robert Elliott wrote:
> Call cond_resched() to let the scheduler reschedule the
> CPU at the end of each test pass.
> 
> If the kernel is configured with CONFIG_PREEMPT_NONE=y (or
> preempt=none is used on the kernel command line), the only
> time the scheduler will intervene is when cond_resched()
> is called. So, repeated calls to
> 	modprobe tcrypt mode=<something>
> 
> hold the CPU for a long time.
> 
> Signed-off-by: Robert Elliott <elliott@hpe.com>
> ---
>  crypto/tcrypt.c | 1 +
>  1 file changed, 1 insertion(+)

I don't really see the point of this.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

