From: Dave Watson <davejwatson@fb.com>
To: Herbert Xu <herbert@gondor.apana.org.au>,
	Junaid Shahid <junaids@google.com>,
	Steffen Klassert <steffen.klassert@secunet.com>,
	<linux-crypto@vger.kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>,
	Hannes Frederic Sowa <hannes@stressinduktion.org>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Sabrina Dubroca <sd@queasysnail.net>,
	<linux-kernel@vger.kernel.org>,
	Stephan Mueller <smueller@chronox.de>,
	Ilya Lesokhin <ilyal@mellanox.com>
Subject: [PATCH v2 12/14] x86/crypto: aesni: Add fast path for > 16 byte update
Date: Wed, 14 Feb 2018 09:40:31 -0800
Message-ID: <20180214174031.GA62186@davejwatson-mba>
In-Reply-To: <cover.1518628278.git.davejwatson@fb.com>

We can fast-path any < 16 byte read if the full message is > 16 bytes:
read the last 16 bytes of the source and shift the data over by the
appropriate amount.  Usually we are reading > 16 bytes, so in the
average case this should be faster than the READ_PARTIAL_BLOCK macro
introduced in b20209c91e2.
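
The fast path replaces the byte-by-byte READ_PARTIAL_BLOCK copy with a
single unaligned 16-byte load that ends exactly at the end of the
partial block (only safe because the message is at least 16 bytes
long), followed by a PSHUFB whose shuffle mask is taken at
SHIFT_MASK+16-r13 and which shifts the r13 valid bytes down to the low
end of the register.  Purely as illustration (hypothetical names, not
the kernel code), the same idea in C; here the unused high bytes are
simply zeroed, while the real code masks them separately later:

  #include <stdint.h>
  #include <string.h>

  /*
   * total   = full message length, assumed >= 16
   * partial = length of the trailing partial block (total % 16, non-zero)
   * out     = result block: partial data in the low bytes, zeros above
   */
  static void read_trailing_block(const uint8_t *src, size_t total,
                                  size_t partial, uint8_t out[16])
  {
          uint8_t tmp[16];

          /* one 16-byte load ending at the last byte of the message */
          memcpy(tmp, src + total - 16, 16);

          /* "shift right" by 16 - partial bytes: keep the valid bytes,
           * aligned at offset 0, and zero the remainder */
          memset(out, 0, 16);
          memcpy(out, tmp + (16 - partial), partial);
  }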

Signed-off-by: Dave Watson <davejwatson@fb.com>
---
 arch/x86/crypto/aesni-intel_asm.S | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index 398bd2237f..b941952 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -355,12 +355,37 @@ _zero_cipher_left_\@:
 	ENCRYPT_SINGLE_BLOCK	%xmm0, %xmm1        # Encrypt(K, Yn)
 	movdqu %xmm0, PBlockEncKey(%arg2)
 
+	cmp	$16, %arg5
+	jge _large_enough_update_\@
+
 	lea (%arg4,%r11,1), %r10
 	mov %r13, %r12
 	READ_PARTIAL_BLOCK %r10 %r12 %xmm2 %xmm1
+	jmp _data_read_\@
+
+_large_enough_update_\@:
+	sub	$16, %r11
+	add	%r13, %r11
+
+	# receive the last <16 Byte block
+	movdqu	(%arg4, %r11, 1), %xmm1
 
+	sub	%r13, %r11
+	add	$16, %r11
+
+	lea	SHIFT_MASK+16(%rip), %r12
+	# adjust the shuffle mask pointer to be able to shift 16-r13 bytes
+	# (r13 is the number of bytes in plaintext mod 16)
+	sub	%r13, %r12
+	# get the appropriate shuffle mask
+	movdqu	(%r12), %xmm2
+	# shift right 16-r13 bytes
+	PSHUFB_XMM  %xmm2, %xmm1
+
+_data_read_\@:
 	lea ALL_F+16(%rip), %r12
 	sub %r13, %r12
+
 .ifc \operation, dec
 	movdqa  %xmm1, %xmm2
 .endif
-- 
2.9.5

Thread overview: 14+ messages
     [not found] <cover.1518628278.git.davejwatson@fb.com>
2018-02-14 17:38 ` [PATCH v2 01/14] x86/crypto: aesni: Merge INITIAL_BLOCKS_ENC/DEC Dave Watson
2018-02-14 17:38 ` [PATCH v2 02/14] x86/crypto: aesni: Macro-ify func save/restore Dave Watson
2018-02-14 17:38 ` [PATCH v2 03/14] x86/crypto: aesni: Add GCM_INIT macro Dave Watson
2018-02-14 17:38 ` [PATCH v2 04/14] x86/crypto: aesni: Add GCM_COMPLETE macro Dave Watson
2018-02-14 17:39 ` [PATCH v2 05/14] x86/crypto: aesni: Merge encode and decode to GCM_ENC_DEC macro Dave Watson
2018-02-14 17:39 ` [PATCH v2 06/14] x86/crypto: aesni: Introduce gcm_context_data Dave Watson
2018-02-14 17:39 ` [PATCH v2 07/14] x86/crypto: aesni: Split AAD hash calculation to separate macro Dave Watson
2018-02-14 17:39 ` [PATCH v2 08/14] x86/crypto: aesni: Fill in new context data structures Dave Watson
2018-02-14 17:39 ` [PATCH v2 09/14] x86/crypto: aesni: Move ghash_mul to GCM_COMPLETE Dave Watson
2018-02-14 17:40 ` [PATCH v2 10/14] x86/crypto: aesni: Move HashKey computation from stack to gcm_context Dave Watson
2018-02-14 17:40 ` [PATCH v2 11/14] x86/crypto: aesni: Introduce partial block macro Dave Watson
2018-02-14 17:40 ` [PATCH v2 12/14] x86/crypto: aesni: Add fast path for > 16 byte update Dave Watson [this message]
2018-02-14 17:40 ` [PATCH v2 13/14] x86/crypto: aesni: Introduce scatter/gather asm function stubs Dave Watson
2018-02-14 17:40 ` [PATCH v2 14/14] x86/crypto: aesni: Update aesni-intel_glue to use scatter/gather Dave Watson
