* [PATCH 0/5] crypto: lib/sha256 - cleanup/optimization
From: Arvind Sankar @ 2020-10-19 15:30 UTC
  To: Herbert Xu, David S. Miller, linux-crypto; +Cc: linux-kernel

Patch 1 -- Use memzero_explicit() instead of structure assignment/plain
memset() to clear sensitive state.

Patch 2 -- I am not sure about this one: currently the temporary
variables used in the generic sha256 implementation are cleared, but
the clearing is optimized away due to the lack of compiler barriers. I
don't think it's really necessary to clear them, but I'm not a
cryptanalyst, so I would like comments on whether it is indeed safe not
to clear them, or whether we should instead add the required barriers
to force the clearing.

The last three patches are optimizations for generic sha256.

Arvind Sankar (5):
  crypto: Use memzero_explicit() for clearing state
  crypto: lib/sha256 - Don't clear temporary variables
  crypto: lib/sha256 - Clear W[] in sha256_update() instead of
    sha256_transform()
  crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
  crypto: lib/sha256 - Unroll LOAD and BLEND loops

 include/crypto/sha1_base.h   |   3 +-
 include/crypto/sha256_base.h |   3 +-
 include/crypto/sha512_base.h |   3 +-
 include/crypto/sm3_base.h    |   3 +-
 lib/crypto/sha256.c          | 202 ++++++++++-------------------------
 5 files changed, 62 insertions(+), 152 deletions(-)

-- 
2.26.2


* [PATCH 1/5] crypto: Use memzero_explicit() for clearing state
From: Arvind Sankar @ 2020-10-19 15:30 UTC
  To: Herbert Xu, David S. Miller, linux-crypto; +Cc: linux-kernel

Without the barrier_data() inside memzero_explicit(), the compiler may
optimize away the state-clearing if it can tell that the state is not
used afterwards. At least in lib/crypto/sha256.c:__sha256_final(), the
function can get inlined into sha256(), in which case the memset is
optimized away.
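
For context (quoting the current <linux/string.h> definition, not part
of this patch): memzero_explicit() is a memset() followed by
barrier_data(), an empty asm statement with a "memory" clobber that
takes the buffer as an input, so the stores are treated as observable
and cannot be eliminated:

	static inline void memzero_explicit(void *s, size_t count)
	{
		memset(s, 0, count);
		barrier_data(s);	/* keeps the stores alive */
	}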

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 include/crypto/sha1_base.h   | 3 ++-
 include/crypto/sha256_base.h | 3 ++-
 include/crypto/sha512_base.h | 3 ++-
 include/crypto/sm3_base.h    | 3 ++-
 lib/crypto/sha256.c          | 2 +-
 5 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/include/crypto/sha1_base.h b/include/crypto/sha1_base.h
index 20fd1f7468af..a5d6033efef7 100644
--- a/include/crypto/sha1_base.h
+++ b/include/crypto/sha1_base.h
@@ -12,6 +12,7 @@
 #include <crypto/sha.h>
 #include <linux/crypto.h>
 #include <linux/module.h>
+#include <linux/string.h>
 
 #include <asm/unaligned.h>
 
@@ -101,7 +102,7 @@ static inline int sha1_base_finish(struct shash_desc *desc, u8 *out)
 	for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
 		put_unaligned_be32(sctx->state[i], digest++);
 
-	*sctx = (struct sha1_state){};
+	memzero_explicit(sctx, sizeof(*sctx));
 	return 0;
 }
 
diff --git a/include/crypto/sha256_base.h b/include/crypto/sha256_base.h
index 6ded110783ae..93f9fd21cc06 100644
--- a/include/crypto/sha256_base.h
+++ b/include/crypto/sha256_base.h
@@ -12,6 +12,7 @@
 #include <crypto/sha.h>
 #include <linux/crypto.h>
 #include <linux/module.h>
+#include <linux/string.h>
 
 #include <asm/unaligned.h>
 
@@ -105,7 +106,7 @@ static inline int sha256_base_finish(struct shash_desc *desc, u8 *out)
 	for (i = 0; digest_size > 0; i++, digest_size -= sizeof(__be32))
 		put_unaligned_be32(sctx->state[i], digest++);
 
-	*sctx = (struct sha256_state){};
+	memzero_explicit(sctx, sizeof(*sctx));
 	return 0;
 }
 
diff --git a/include/crypto/sha512_base.h b/include/crypto/sha512_base.h
index fb19c77494dc..93ab73baa38e 100644
--- a/include/crypto/sha512_base.h
+++ b/include/crypto/sha512_base.h
@@ -12,6 +12,7 @@
 #include <crypto/sha.h>
 #include <linux/crypto.h>
 #include <linux/module.h>
+#include <linux/string.h>
 
 #include <asm/unaligned.h>
 
@@ -126,7 +127,7 @@ static inline int sha512_base_finish(struct shash_desc *desc, u8 *out)
 	for (i = 0; digest_size > 0; i++, digest_size -= sizeof(__be64))
 		put_unaligned_be64(sctx->state[i], digest++);
 
-	*sctx = (struct sha512_state){};
+	memzero_explicit(sctx, sizeof(*sctx));
 	return 0;
 }
 
diff --git a/include/crypto/sm3_base.h b/include/crypto/sm3_base.h
index 1cbf9aa1fe52..2f3a32ab97bb 100644
--- a/include/crypto/sm3_base.h
+++ b/include/crypto/sm3_base.h
@@ -13,6 +13,7 @@
 #include <crypto/sm3.h>
 #include <linux/crypto.h>
 #include <linux/module.h>
+#include <linux/string.h>
 #include <asm/unaligned.h>
 
 typedef void (sm3_block_fn)(struct sm3_state *sst, u8 const *src, int blocks);
@@ -104,7 +105,7 @@ static inline int sm3_base_finish(struct shash_desc *desc, u8 *out)
 	for (i = 0; i < SM3_DIGEST_SIZE / sizeof(__be32); i++)
 		put_unaligned_be32(sctx->state[i], digest++);
 
-	*sctx = (struct sm3_state){};
+	memzero_explicit(sctx, sizeof(*sctx));
 	return 0;
 }
 
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 2321f6cb322f..d43bc39ab05e 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -265,7 +265,7 @@ static void __sha256_final(struct sha256_state *sctx, u8 *out, int digest_words)
 		put_unaligned_be32(sctx->state[i], &dst[i]);
 
 	/* Zeroize sensitive information. */
-	memset(sctx, 0, sizeof(*sctx));
+	memzero_explicit(sctx, sizeof(*sctx));
 }
 
 void sha256_final(struct sha256_state *sctx, u8 *out)
-- 
2.26.2


* [PATCH 2/5] crypto: lib/sha256 - Don't clear temporary variables
From: Arvind Sankar @ 2020-10-19 15:30 UTC
  To: Herbert Xu, David S. Miller, linux-crypto; +Cc: linux-kernel

The assignments that clear a through h and t1/t2 are optimized out by
the compiler because the variables are never read after the
assignments.

These variables shouldn't be very sensitive: t1/t2 can be calculated
from a through h, so they don't reveal any additional information.
Knowing a through h is equivalent to knowing one 64-byte block's SHA256
hash (with a non-standard initial value), which, assuming SHA256 is
secure, doesn't reveal any information about the input.
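
To illustrate the dead-store elimination at work, a hypothetical
fragment (not kernel code):

	u32 t1 = h + e1(e) + Ch(e, f, g);	/* live: read below */
	d += t1;
	t1 = 0;	/* dead: t1 is never read again, so the compiler
		 * is free to drop this store entirely */

Only a barrier, as in memzero_explicit(), would force such a store to
be emitted.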

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 lib/crypto/sha256.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index d43bc39ab05e..099cd11f83c1 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -202,7 +202,6 @@ static void sha256_transform(u32 *state, const u8 *input)
 	state[4] += e; state[5] += f; state[6] += g; state[7] += h;
 
 	/* clear any sensitive info... */
-	a = b = c = d = e = f = g = h = t1 = t2 = 0;
 	memzero_explicit(W, 64 * sizeof(u32));
 }
 
-- 
2.26.2


* [PATCH 3/5] crypto: lib/sha256 - Clear W[] in sha256_update() instead of sha256_transform()
From: Arvind Sankar @ 2020-10-19 15:30 UTC
  To: Herbert Xu, David S. Miller, linux-crypto; +Cc: linux-kernel

The temporary W[] array is currently zeroed out once per call to
sha256_transform(), i.e. once every 64 bytes of input data. Moving the
clearing into sha256_update() instead, so that it happens only once per
update, saves about 2-3% of the total time taken to compute the digest
with a reasonable memset() implementation, and considerably more (~20%)
with a bad one (e.g. the x86 purgatory currently uses a memset() coded
in C).

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 lib/crypto/sha256.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 099cd11f83c1..c6bfeacc5b81 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -43,10 +43,9 @@ static inline void BLEND_OP(int I, u32 *W)
 	W[I] = s1(W[I-2]) + W[I-7] + s0(W[I-15]) + W[I-16];
 }
 
-static void sha256_transform(u32 *state, const u8 *input)
+static void sha256_transform(u32 *state, const u8 *input, u32 *W)
 {
 	u32 a, b, c, d, e, f, g, h, t1, t2;
-	u32 W[64];
 	int i;
 
 	/* load the input */
@@ -200,15 +199,13 @@ static void sha256_transform(u32 *state, const u8 *input)
 
 	state[0] += a; state[1] += b; state[2] += c; state[3] += d;
 	state[4] += e; state[5] += f; state[6] += g; state[7] += h;
-
-	/* clear any sensitive info... */
-	memzero_explicit(W, 64 * sizeof(u32));
 }
 
 void sha256_update(struct sha256_state *sctx, const u8 *data, unsigned int len)
 {
 	unsigned int partial, done;
 	const u8 *src;
+	u32 W[64];
 
 	partial = sctx->count & 0x3f;
 	sctx->count += len;
@@ -223,11 +220,13 @@ void sha256_update(struct sha256_state *sctx, const u8 *data, unsigned int len)
 		}
 
 		do {
-			sha256_transform(sctx->state, src);
+			sha256_transform(sctx->state, src, W);
 			done += 64;
 			src = data + done;
 		} while (done + 63 < len);
 
+		memzero_explicit(W, sizeof(W));
+
 		partial = 0;
 	}
 	memcpy(sctx->buf + partial, src, len - done);
-- 
2.26.2


* [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
From: Arvind Sankar @ 2020-10-19 15:30 UTC
  To: Herbert Xu, David S. Miller, linux-crypto; +Cc: linux-kernel

This reduces code size substantially (on x86_64 with gcc-10 the size of
sha256_update() goes from 7593 bytes to 1952 bytes including the new
SHA256_K array), and on x86 is slightly faster than the full unroll.
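
(If you want to reproduce the measurement, one way to read per-function
sizes from the object file is, e.g.:

	nm --print-size --size-sort lib/crypto/sha256.o

which lists each symbol together with its size.)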

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 lib/crypto/sha256.c | 164 ++++++++------------------------------------
 1 file changed, 28 insertions(+), 136 deletions(-)

diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index c6bfeacc5b81..9f0b71d41ea0 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -18,6 +18,17 @@
 #include <crypto/sha.h>
 #include <asm/unaligned.h>
 
+static const u32 SHA256_K[] = {
+	0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
+	0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
+	0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
+	0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
+	0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
+	0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
+	0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
+	0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2,
+};
+
 static inline u32 Ch(u32 x, u32 y, u32 z)
 {
 	return z ^ (x & (y ^ z));
@@ -43,9 +54,15 @@ static inline void BLEND_OP(int I, u32 *W)
 	W[I] = s1(W[I-2]) + W[I-7] + s0(W[I-15]) + W[I-16];
 }
 
+#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) do {		\
+	u32 t1, t2;						\
+	t1 = h + e1(e) + Ch(e, f, g) + SHA256_K[i] + W[i];	\
+	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;	\
+} while (0)
+
 static void sha256_transform(u32 *state, const u8 *input, u32 *W)
 {
-	u32 a, b, c, d, e, f, g, h, t1, t2;
+	u32 a, b, c, d, e, f, g, h;
 	int i;
 
 	/* load the input */
@@ -61,141 +78,16 @@ static void sha256_transform(u32 *state, const u8 *input, u32 *W)
 	e = state[4];  f = state[5];  g = state[6];  h = state[7];
 
 	/* now iterate */
-	t1 = h + e1(e) + Ch(e, f, g) + 0x428a2f98 + W[0];
-	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;
-	t1 = g + e1(d) + Ch(d, e, f) + 0x71374491 + W[1];
-	t2 = e0(h) + Maj(h, a, b);    c += t1;    g = t1 + t2;
-	t1 = f + e1(c) + Ch(c, d, e) + 0xb5c0fbcf + W[2];
-	t2 = e0(g) + Maj(g, h, a);    b += t1;    f = t1 + t2;
-	t1 = e + e1(b) + Ch(b, c, d) + 0xe9b5dba5 + W[3];
-	t2 = e0(f) + Maj(f, g, h);    a += t1;    e = t1 + t2;
-	t1 = d + e1(a) + Ch(a, b, c) + 0x3956c25b + W[4];
-	t2 = e0(e) + Maj(e, f, g);    h += t1;    d = t1 + t2;
-	t1 = c + e1(h) + Ch(h, a, b) + 0x59f111f1 + W[5];
-	t2 = e0(d) + Maj(d, e, f);    g += t1;    c = t1 + t2;
-	t1 = b + e1(g) + Ch(g, h, a) + 0x923f82a4 + W[6];
-	t2 = e0(c) + Maj(c, d, e);    f += t1;    b = t1 + t2;
-	t1 = a + e1(f) + Ch(f, g, h) + 0xab1c5ed5 + W[7];
-	t2 = e0(b) + Maj(b, c, d);    e += t1;    a = t1 + t2;
-
-	t1 = h + e1(e) + Ch(e, f, g) + 0xd807aa98 + W[8];
-	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;
-	t1 = g + e1(d) + Ch(d, e, f) + 0x12835b01 + W[9];
-	t2 = e0(h) + Maj(h, a, b);    c += t1;    g = t1 + t2;
-	t1 = f + e1(c) + Ch(c, d, e) + 0x243185be + W[10];
-	t2 = e0(g) + Maj(g, h, a);    b += t1;    f = t1 + t2;
-	t1 = e + e1(b) + Ch(b, c, d) + 0x550c7dc3 + W[11];
-	t2 = e0(f) + Maj(f, g, h);    a += t1;    e = t1 + t2;
-	t1 = d + e1(a) + Ch(a, b, c) + 0x72be5d74 + W[12];
-	t2 = e0(e) + Maj(e, f, g);    h += t1;    d = t1 + t2;
-	t1 = c + e1(h) + Ch(h, a, b) + 0x80deb1fe + W[13];
-	t2 = e0(d) + Maj(d, e, f);    g += t1;    c = t1 + t2;
-	t1 = b + e1(g) + Ch(g, h, a) + 0x9bdc06a7 + W[14];
-	t2 = e0(c) + Maj(c, d, e);    f += t1;    b = t1 + t2;
-	t1 = a + e1(f) + Ch(f, g, h) + 0xc19bf174 + W[15];
-	t2 = e0(b) + Maj(b, c, d);    e += t1;    a = t1 + t2;
-
-	t1 = h + e1(e) + Ch(e, f, g) + 0xe49b69c1 + W[16];
-	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;
-	t1 = g + e1(d) + Ch(d, e, f) + 0xefbe4786 + W[17];
-	t2 = e0(h) + Maj(h, a, b);    c += t1;    g = t1 + t2;
-	t1 = f + e1(c) + Ch(c, d, e) + 0x0fc19dc6 + W[18];
-	t2 = e0(g) + Maj(g, h, a);    b += t1;    f = t1 + t2;
-	t1 = e + e1(b) + Ch(b, c, d) + 0x240ca1cc + W[19];
-	t2 = e0(f) + Maj(f, g, h);    a += t1;    e = t1 + t2;
-	t1 = d + e1(a) + Ch(a, b, c) + 0x2de92c6f + W[20];
-	t2 = e0(e) + Maj(e, f, g);    h += t1;    d = t1 + t2;
-	t1 = c + e1(h) + Ch(h, a, b) + 0x4a7484aa + W[21];
-	t2 = e0(d) + Maj(d, e, f);    g += t1;    c = t1 + t2;
-	t1 = b + e1(g) + Ch(g, h, a) + 0x5cb0a9dc + W[22];
-	t2 = e0(c) + Maj(c, d, e);    f += t1;    b = t1 + t2;
-	t1 = a + e1(f) + Ch(f, g, h) + 0x76f988da + W[23];
-	t2 = e0(b) + Maj(b, c, d);    e += t1;    a = t1 + t2;
-
-	t1 = h + e1(e) + Ch(e, f, g) + 0x983e5152 + W[24];
-	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;
-	t1 = g + e1(d) + Ch(d, e, f) + 0xa831c66d + W[25];
-	t2 = e0(h) + Maj(h, a, b);    c += t1;    g = t1 + t2;
-	t1 = f + e1(c) + Ch(c, d, e) + 0xb00327c8 + W[26];
-	t2 = e0(g) + Maj(g, h, a);    b += t1;    f = t1 + t2;
-	t1 = e + e1(b) + Ch(b, c, d) + 0xbf597fc7 + W[27];
-	t2 = e0(f) + Maj(f, g, h);    a += t1;    e = t1 + t2;
-	t1 = d + e1(a) + Ch(a, b, c) + 0xc6e00bf3 + W[28];
-	t2 = e0(e) + Maj(e, f, g);    h += t1;    d = t1 + t2;
-	t1 = c + e1(h) + Ch(h, a, b) + 0xd5a79147 + W[29];
-	t2 = e0(d) + Maj(d, e, f);    g += t1;    c = t1 + t2;
-	t1 = b + e1(g) + Ch(g, h, a) + 0x06ca6351 + W[30];
-	t2 = e0(c) + Maj(c, d, e);    f += t1;    b = t1 + t2;
-	t1 = a + e1(f) + Ch(f, g, h) + 0x14292967 + W[31];
-	t2 = e0(b) + Maj(b, c, d);    e += t1;    a = t1 + t2;
-
-	t1 = h + e1(e) + Ch(e, f, g) + 0x27b70a85 + W[32];
-	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;
-	t1 = g + e1(d) + Ch(d, e, f) + 0x2e1b2138 + W[33];
-	t2 = e0(h) + Maj(h, a, b);    c += t1;    g = t1 + t2;
-	t1 = f + e1(c) + Ch(c, d, e) + 0x4d2c6dfc + W[34];
-	t2 = e0(g) + Maj(g, h, a);    b += t1;    f = t1 + t2;
-	t1 = e + e1(b) + Ch(b, c, d) + 0x53380d13 + W[35];
-	t2 = e0(f) + Maj(f, g, h);    a += t1;    e = t1 + t2;
-	t1 = d + e1(a) + Ch(a, b, c) + 0x650a7354 + W[36];
-	t2 = e0(e) + Maj(e, f, g);    h += t1;    d = t1 + t2;
-	t1 = c + e1(h) + Ch(h, a, b) + 0x766a0abb + W[37];
-	t2 = e0(d) + Maj(d, e, f);    g += t1;    c = t1 + t2;
-	t1 = b + e1(g) + Ch(g, h, a) + 0x81c2c92e + W[38];
-	t2 = e0(c) + Maj(c, d, e);    f += t1;    b = t1 + t2;
-	t1 = a + e1(f) + Ch(f, g, h) + 0x92722c85 + W[39];
-	t2 = e0(b) + Maj(b, c, d);    e += t1;    a = t1 + t2;
-
-	t1 = h + e1(e) + Ch(e, f, g) + 0xa2bfe8a1 + W[40];
-	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;
-	t1 = g + e1(d) + Ch(d, e, f) + 0xa81a664b + W[41];
-	t2 = e0(h) + Maj(h, a, b);    c += t1;    g = t1 + t2;
-	t1 = f + e1(c) + Ch(c, d, e) + 0xc24b8b70 + W[42];
-	t2 = e0(g) + Maj(g, h, a);    b += t1;    f = t1 + t2;
-	t1 = e + e1(b) + Ch(b, c, d) + 0xc76c51a3 + W[43];
-	t2 = e0(f) + Maj(f, g, h);    a += t1;    e = t1 + t2;
-	t1 = d + e1(a) + Ch(a, b, c) + 0xd192e819 + W[44];
-	t2 = e0(e) + Maj(e, f, g);    h += t1;    d = t1 + t2;
-	t1 = c + e1(h) + Ch(h, a, b) + 0xd6990624 + W[45];
-	t2 = e0(d) + Maj(d, e, f);    g += t1;    c = t1 + t2;
-	t1 = b + e1(g) + Ch(g, h, a) + 0xf40e3585 + W[46];
-	t2 = e0(c) + Maj(c, d, e);    f += t1;    b = t1 + t2;
-	t1 = a + e1(f) + Ch(f, g, h) + 0x106aa070 + W[47];
-	t2 = e0(b) + Maj(b, c, d);    e += t1;    a = t1 + t2;
-
-	t1 = h + e1(e) + Ch(e, f, g) + 0x19a4c116 + W[48];
-	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;
-	t1 = g + e1(d) + Ch(d, e, f) + 0x1e376c08 + W[49];
-	t2 = e0(h) + Maj(h, a, b);    c += t1;    g = t1 + t2;
-	t1 = f + e1(c) + Ch(c, d, e) + 0x2748774c + W[50];
-	t2 = e0(g) + Maj(g, h, a);    b += t1;    f = t1 + t2;
-	t1 = e + e1(b) + Ch(b, c, d) + 0x34b0bcb5 + W[51];
-	t2 = e0(f) + Maj(f, g, h);    a += t1;    e = t1 + t2;
-	t1 = d + e1(a) + Ch(a, b, c) + 0x391c0cb3 + W[52];
-	t2 = e0(e) + Maj(e, f, g);    h += t1;    d = t1 + t2;
-	t1 = c + e1(h) + Ch(h, a, b) + 0x4ed8aa4a + W[53];
-	t2 = e0(d) + Maj(d, e, f);    g += t1;    c = t1 + t2;
-	t1 = b + e1(g) + Ch(g, h, a) + 0x5b9cca4f + W[54];
-	t2 = e0(c) + Maj(c, d, e);    f += t1;    b = t1 + t2;
-	t1 = a + e1(f) + Ch(f, g, h) + 0x682e6ff3 + W[55];
-	t2 = e0(b) + Maj(b, c, d);    e += t1;    a = t1 + t2;
-
-	t1 = h + e1(e) + Ch(e, f, g) + 0x748f82ee + W[56];
-	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;
-	t1 = g + e1(d) + Ch(d, e, f) + 0x78a5636f + W[57];
-	t2 = e0(h) + Maj(h, a, b);    c += t1;    g = t1 + t2;
-	t1 = f + e1(c) + Ch(c, d, e) + 0x84c87814 + W[58];
-	t2 = e0(g) + Maj(g, h, a);    b += t1;    f = t1 + t2;
-	t1 = e + e1(b) + Ch(b, c, d) + 0x8cc70208 + W[59];
-	t2 = e0(f) + Maj(f, g, h);    a += t1;    e = t1 + t2;
-	t1 = d + e1(a) + Ch(a, b, c) + 0x90befffa + W[60];
-	t2 = e0(e) + Maj(e, f, g);    h += t1;    d = t1 + t2;
-	t1 = c + e1(h) + Ch(h, a, b) + 0xa4506ceb + W[61];
-	t2 = e0(d) + Maj(d, e, f);    g += t1;    c = t1 + t2;
-	t1 = b + e1(g) + Ch(g, h, a) + 0xbef9a3f7 + W[62];
-	t2 = e0(c) + Maj(c, d, e);    f += t1;    b = t1 + t2;
-	t1 = a + e1(f) + Ch(f, g, h) + 0xc67178f2 + W[63];
-	t2 = e0(b) + Maj(b, c, d);    e += t1;    a = t1 + t2;
+	for (i = 0; i < 64; i += 8) {
+		SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h);
+		SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g);
+		SHA256_ROUND(i + 2, g, h, a, b, c, d, e, f);
+		SHA256_ROUND(i + 3, f, g, h, a, b, c, d, e);
+		SHA256_ROUND(i + 4, e, f, g, h, a, b, c, d);
+		SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c);
+		SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b);
+		SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a);
+	}
 
 	state[0] += a; state[1] += b; state[2] += c; state[3] += d;
 	state[4] += e; state[5] += f; state[6] += g; state[7] += h;
-- 
2.26.2


* [PATCH 5/5] crypto: lib/sha256 - Unroll LOAD and BLEND loops
From: Arvind Sankar @ 2020-10-19 15:30 UTC
  To: Herbert Xu, David S. Miller, linux-crypto; +Cc: linux-kernel

Unrolling the LOAD and BLEND loops improves performance by ~8% on x86
without increasing code size significantly.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 lib/crypto/sha256.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 9f0b71d41ea0..a3db88d10523 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -66,12 +66,28 @@ static void sha256_transform(u32 *state, const u8 *input, u32 *W)
 	int i;
 
 	/* load the input */
-	for (i = 0; i < 16; i++)
-		LOAD_OP(i, W, input);
+	for (i = 0; i < 16; i += 8) {
+		LOAD_OP(i + 0, W, input);
+		LOAD_OP(i + 1, W, input);
+		LOAD_OP(i + 2, W, input);
+		LOAD_OP(i + 3, W, input);
+		LOAD_OP(i + 4, W, input);
+		LOAD_OP(i + 5, W, input);
+		LOAD_OP(i + 6, W, input);
+		LOAD_OP(i + 7, W, input);
+	}
 
 	/* now blend */
-	for (i = 16; i < 64; i++)
-		BLEND_OP(i, W);
+	for (i = 16; i < 64; i += 8) {
+		BLEND_OP(i + 0, W);
+		BLEND_OP(i + 1, W);
+		BLEND_OP(i + 2, W);
+		BLEND_OP(i + 3, W);
+		BLEND_OP(i + 4, W);
+		BLEND_OP(i + 5, W);
+		BLEND_OP(i + 6, W);
+		BLEND_OP(i + 7, W);
+	}
 
 	/* load the state into our registers */
 	a = state[0];  b = state[1];  c = state[2];  d = state[3];
-- 
2.26.2


* RE: [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
From: David Laight @ 2020-10-20  7:41 UTC
  To: 'Arvind Sankar', Herbert Xu, David S. Miller, linux-crypto
  Cc: linux-kernel

From: Arvind Sankar
> Sent: 19 October 2020 16:30
> To: Herbert Xu <herbert@gondor.apana.org.au>; David S. Miller <davem@davemloft.net>; linux-crypto@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Subject: [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
> 
> This reduces code size substantially (on x86_64 with gcc-10 the size of
> sha256_update() goes from 7593 bytes to 1952 bytes including the new
> SHA256_K array), and on x86 is slightly faster than the full unroll.

The speed will depend on exactly which CPU type is used.
It is even possible that the 'not unrolled at all' loop
(with all the extra register moves) is faster on some x86-64 CPUs.

> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
> ---
>  lib/crypto/sha256.c | 164 ++++++++------------------------------------
>  1 file changed, 28 insertions(+), 136 deletions(-)
> 
> diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
> index c6bfeacc5b81..9f0b71d41ea0 100644
> --- a/lib/crypto/sha256.c
> +++ b/lib/crypto/sha256.c
> @@ -18,6 +18,17 @@
>  #include <crypto/sha.h>
>  #include <asm/unaligned.h>
...
> 
> +#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) do {		\
> +	u32 t1, t2;						\
> +	t1 = h + e1(e) + Ch(e, f, g) + SHA256_K[i] + W[i];	\
> +	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;	\

Split to 3 lines.

If you can put SHA256_K[] and W[] into a struct then the
compiler can use the same register to address into both
arrays (using an offset of 64*4 for the second one).
(ie keep the two arrays, not an array of struct).
This should reduce the register pressure slightly.
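
A sketch of the layout being suggested (struct and field names are
illustrative only):

	struct sha256_tmp {
		u32 W[64];	/* message schedule */
		u32 K[64];	/* copy of SHA256_K, at offset 64*4 */
	};

A single base register holding the struct's address then reaches both
arrays through constant offsets (tmp->W[i] and tmp->K[i]).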

	David



* Re: [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
From: Arvind Sankar @ 2020-10-20 14:07 UTC
  To: David Laight
  Cc: 'Arvind Sankar',
	Herbert Xu, David S. Miller, linux-crypto, linux-kernel

On Tue, Oct 20, 2020 at 07:41:33AM +0000, David Laight wrote:
> From: Arvind Sankar
> > Sent: 19 October 2020 16:30
> > To: Herbert Xu <herbert@gondor.apana.org.au>; David S. Miller <davem@davemloft.net>; linux-crypto@vger.kernel.org
> > Cc: linux-kernel@vger.kernel.org
> > Subject: [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
> > 
> > This reduces code size substantially (on x86_64 with gcc-10 the size of
> > sha256_update() goes from 7593 bytes to 1952 bytes including the new
> > SHA256_K array), and on x86 is slightly faster than the full unroll.
> 
> The speed will depend on exactly which CPU type is used.
> It is even possible that the 'not unrolled at all' loop
> (with all the extra register moves) is faster on some x86-64 CPUs.

Yes, I should have mentioned this was tested on a Broadwell Xeon; at
least on that processor, no unrolling is a measurable performance loss.
But the hope is that an 8x unroll is generally enough that 64x is
unlikely to speed it up further, and so has no advantage over 8x.

> 
> > 
> > Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
> > ---
> >  lib/crypto/sha256.c | 164 ++++++++------------------------------------
> >  1 file changed, 28 insertions(+), 136 deletions(-)
> > 
> > diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
> > index c6bfeacc5b81..9f0b71d41ea0 100644
> > --- a/lib/crypto/sha256.c
> > +++ b/lib/crypto/sha256.c
> > @@ -18,6 +18,17 @@
> >  #include <crypto/sha.h>
> >  #include <asm/unaligned.h>
> ...
> > 
> > +#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) do {		\
> > +	u32 t1, t2;						\
> > +	t1 = h + e1(e) + Ch(e, f, g) + SHA256_K[i] + W[i];	\
> > +	t2 = e0(a) + Maj(a, b, c);    d += t1;    h = t1 + t2;	\
> 
> Split to 3 lines.

This was the way the code was before, but I can reformat it, sure.

> 
> If you can put SHA256_K[] and W[] into a struct then the
> compiler can use the same register to address into both
> arrays (using an offset of 64*4 for the second one).
> (ie keep the two arrays, not an array of struct).
> This should reduce the register pressure slightly.

I can try that; I could copy the data in sha256_update() so the copy
is amortized over the whole input.

> 
> 	David
> 

* RE: [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
From: David Laight @ 2020-10-20 14:55 UTC
  To: 'Arvind Sankar'
  Cc: Herbert Xu, David S. Miller, linux-crypto, linux-kernel

From: Arvind Sankar
> Sent: 20 October 2020 15:07
> To: David Laight <David.Laight@ACULAB.COM>
> 
> On Tue, Oct 20, 2020 at 07:41:33AM +0000, David Laight wrote:
> > From: Arvind Sankar
> > > Sent: 19 October 2020 16:30
> > > To: Herbert Xu <herbert@gondor.apana.org.au>; David S. Miller <davem@davemloft.net>; linux-crypto@vger.kernel.org
> > > Cc: linux-kernel@vger.kernel.org
> > > Subject: [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
> > >
> > > This reduces code size substantially (on x86_64 with gcc-10 the size of
> > > sha256_update() goes from 7593 bytes to 1952 bytes including the new
> > > SHA256_K array), and on x86 is slightly faster than the full unroll.
> >
> > The speed will depend on exactly which CPU type is used.
> > It is even possible that the 'not unrolled at all' loop
> > (with all the extra register moves) is faster on some x86-64 CPUs.
> 
> Yes, I should have mentioned this was tested on a Broadwell Xeon; at
> least on that processor, no unrolling is a measurable performance loss.
> But the hope is that an 8x unroll is generally enough that 64x is
> unlikely to speed it up further, and so has no advantage over 8x.

(I've just looked at the actual code, not just the patch.)

Yes, I doubt the 64x unroll was ever a significant gain.
Unrolling completely requires a load of register moves/renames;
probably too many to be 'zero cost'.

With modern CPUs you can often get the loop control instructions
'for free', so unrolling just kills the I-cache.
Some CPUs have loop buffers for decoded instructions;
unroll beyond that size and you suddenly get the decoder costs
hitting you again.

...
> > If you can put SHA256_K[] and W[] into a struct then the
> > compiler can use the same register to address into both
> > arrays (using an offset of 64*4 for the second one).
> > (ie keep the two arrays, not an array of struct).
> > This should reduce the register pressure slightly.
> 
> I can try that; I could copy the data in sha256_update() so the copy
> is amortized over the whole input.

Having looked more closely, the extra copy needed is probably
bigger than any saving.

What that code needs is some special 3-input instructions :-)
It would work a lot better written in VHDL for an FPGA.

	David


* Re: [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
From: Arvind Sankar @ 2020-10-20 19:45 UTC
  To: David Laight
  Cc: 'Arvind Sankar',
	Herbert Xu, David S. Miller, linux-crypto, linux-kernel

On Tue, Oct 20, 2020 at 02:55:47PM +0000, David Laight wrote:
> From: Arvind Sankar
> > Sent: 20 October 2020 15:07
> > To: David Laight <David.Laight@ACULAB.COM>
> > 
> > On Tue, Oct 20, 2020 at 07:41:33AM +0000, David Laight wrote:
> > > From: Arvind Sankar
> > > > Sent: 19 October 2020 16:30
> > > > To: Herbert Xu <herbert@gondor.apana.org.au>; David S. Miller <davem@davemloft.net>; linux-crypto@vger.kernel.org
> > > > Cc: linux-kernel@vger.kernel.org
> > > > Subject: [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64
> > > >
> > > > This reduces code size substantially (on x86_64 with gcc-10 the size of
> > > > sha256_update() goes from 7593 bytes to 1952 bytes including the new
> > > > SHA256_K array), and on x86 is slightly faster than the full unroll.
> > >
> > > The speed will depend on exactly which CPU type is used.
> > > It is even possible that the 'not unrolled at all' loop
> > > (with all the extra register moves) is faster on some x86-64 CPUs.
> > 
> > Yes, I should have mentioned this was tested on a Broadwell Xeon; at
> > least on that processor, no unrolling is a measurable performance loss.
> > But the hope is that an 8x unroll is generally enough that 64x is
> > unlikely to speed it up further, and so has no advantage over 8x.
> 
> (I've just looked at the actual code, not just the patch.)
> 
> Yes, I doubt the 64x unroll was ever a significant gain.
> Unrolling completely requires a load of register moves/renames;
> probably too many to be 'zero cost'.
> 
> With modern CPUs you can often get the loop control instructions
> 'for free', so unrolling just kills the I-cache.
> Some CPUs have loop buffers for decoded instructions;
> unroll beyond that size and you suddenly get the decoder costs
> hitting you again.
> 
> ...
> > > If you can put SHA256_K[] and W[] into a struct then the
> > > compiler can use the same register to address into both
> > > arrays (using an offset of 64*4 for the second one).
> > > (ie keep the two arrays, not an array of struct).
> > > This should reduce the register pressure slightly.
> > 
> > I can try that; I could copy the data in sha256_update() so the copy
> > is amortized over the whole input.
> 
> Having looked more closely, the extra copy needed is probably
> bigger than any saving.
> 

On x86-64 it doesn't make much difference, but it speeds up x86-32 by
around 10% (on long inputs).

Thread overview: 10 messages
2020-10-19 15:30 [PATCH 0/5] crypto: lib/sha256 - cleanup/optimization Arvind Sankar
2020-10-19 15:30 ` [PATCH 1/5] crypto: Use memzero_explicit() for clearing state Arvind Sankar
2020-10-19 15:30 ` [PATCH 2/5] crypto: lib/sha256 - Don't clear temporary variables Arvind Sankar
2020-10-19 15:30 ` [PATCH 3/5] crypto: lib/sha256 - Clear W[] in sha256_update() instead of sha256_transform() Arvind Sankar
2020-10-19 15:30 ` [PATCH 4/5] crypto: lib/sha256 - Unroll SHA256 loop 8 times instead of 64 Arvind Sankar
2020-10-20  7:41   ` David Laight
2020-10-20 14:07     ` Arvind Sankar
2020-10-20 14:55       ` David Laight
2020-10-20 19:45         ` Arvind Sankar
2020-10-19 15:30 ` [PATCH 5/5] crypto: lib/sha256 - Unroll LOAD and BLEND loops Arvind Sankar
