* [PATCH v2 0/5] crypto: x86 AES-CBC encryption with multibuffer
       [not found] <cover.1446158797.git.tim.c.chen@linux.intel.com>
@ 2015-10-29 22:20 ` Tim Chen
  2015-10-29 22:21 ` [PATCH v2 1/5] crypto: Multi-buffer encryption infrastructure support Tim Chen
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Tim Chen @ 2015-10-29 22:20 UTC (permalink / raw)
  To: Herbert Xu, H. Peter Anvin, David S. Miller, Stephan Mueller
  Cc: Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Tim Chen, Jussi Kivilinna, Stephan Mueller,
	linux-crypto, linux-kernel


In this patch series, we introduce AES CBC encryption that is
parallelized on x86_64 CPUs using XMM registers. The multi-buffer
technique encrypts 8 data streams in parallel with SIMD instructions.
Decryption is handled as in the existing AES-NI Intel CBC
implementation, which can already parallelize decryption even for a
single data stream.

Please see the multi-buffer whitepaper for details of the technique:
http://www.intel.com/content/www/us/en/communications/communications-ia-multi-buffer-paper.html

It is important that any driver using this algorithm does so only for
scenarios where there are many data streams that can keep the data
lanes filled most of the time.  It should not be used when mostly a
single data stream is expected.  Otherwise we may incur extra delays
when there are frequent gaps in the data lanes, causing us to wait for
incoming data to fill the lanes before initiating encryption, and we
may have to wait for flush operations to commence when no new data
arrives within the wait time.  However, we keep this extra delay to a
minimum by opportunistically flushing the unfinished jobs when the
crypto daemon is the only active task running on a CPU.

By using this technique, we saw a throughput increase of up to 5.8x
under optimal conditions, when fully loaded encryption jobs keep all
the data lanes filled.

Change Log:
v2
1. Update cpu feature check to make sure SSE is supported
2. Fix up unloading of aes-cbc-mb module to properly free memory

Tim Chen (5):
  crypto: Multi-buffer encryption infrastructure support
  crypto: AES CBC multi-buffer data structures
  crypto: AES CBC multi-buffer scheduler
  crypto: AES CBC by8 encryption
  crypto: AES CBC multi-buffer glue code

 arch/x86/crypto/Makefile                           |   1 +
 arch/x86/crypto/aes-cbc-mb/Makefile                |  22 +
 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S        | 774 ++++++++++++++++++++
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c            | 812 +++++++++++++++++++++
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h        |  96 +++
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h        | 131 ++++
 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c       | 145 ++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S     | 270 +++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S | 222 ++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S     | 416 +++++++++++
 arch/x86/crypto/aes-cbc-mb/reg_sizes.S             | 125 ++++
 crypto/Kconfig                                     |  16 +
 crypto/blkcipher.c                                 |  29 +-
 crypto/mcryptd.c                                   | 256 ++++++-
 crypto/scatterwalk.c                               |   7 +
 include/crypto/algapi.h                            |   1 +
 include/crypto/mcryptd.h                           |  36 +
 include/crypto/scatterwalk.h                       |   6 +
 include/linux/crypto.h                             |   1 +
 19 files changed, 3361 insertions(+), 5 deletions(-)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/Makefile
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/reg_sizes.S

-- 
1.7.11.7


* [PATCH v2 1/5] crypto: Multi-buffer encryption infrastructure support
       [not found] <cover.1446158797.git.tim.c.chen@linux.intel.com>
  2015-10-29 22:20 ` [PATCH v2 0/5] crypto: x86 AES-CBC encryption with multibuffer Tim Chen
@ 2015-10-29 22:21 ` Tim Chen
  2015-11-17 13:06   ` Herbert Xu
  2015-10-29 22:21 ` [PATCH v2 2/5] crypto: AES CBC multi-buffer data structures Tim Chen
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Tim Chen @ 2015-10-29 22:21 UTC (permalink / raw)
  To: Herbert Xu, H. Peter Anvin, David S. Miller, Stephan Mueller
  Cc: Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Tim Chen, Jussi Kivilinna, Stephan Mueller,
	linux-crypto, linux-kernel


In this patch, the infrastructure needed to support the multi-buffer
encryption implementation is added:

a) Enhance the mcryptd daemon to support blkcipher requests.

b) Update the configuration to include multi-buffer encryption build
support.

c) Add crypto scatterwalk support that can sleep during the encryption
operation, as we may have half-finished buffers for jobs in data lanes,
waiting for additional jobs to arrive and fill the empty lanes before
encryption resumes.  Therefore, we enhance the crypto walk with an
option to map data buffers non-atomically.  This option is used only by
algorithms run from the crypto daemon, which knows it is safe to do so
because it saves and restores FPU state in the correct context.
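
As an illustration, here is a minimal sketch of how an algorithm
running from the crypto daemon could opt in to the non-atomic
(sleeping) scatterwalk added by this patch.  The
CRYPTO_TFM_SGWALK_MAY_SLEEP flag and the blkcipher walk helpers are the
interfaces this patch touches; the example_mb_encrypt() wrapper itself
is hypothetical and not part of the series:

#include <crypto/algapi.h>
#include <linux/crypto.h>

/*
 * Hypothetical caller, for illustration only -- not part of this series.
 * It opts in to the sleeping scatterwalk by setting the new
 * CRYPTO_TFM_SGWALK_MAY_SLEEP flag before starting the walk, which is
 * only safe when running from the crypto daemon (correct FPU context).
 */
static int example_mb_encrypt(struct blkcipher_desc *desc,
			      struct scatterlist *dst,
			      struct scatterlist *src,
			      unsigned int nbytes)
{
	struct blkcipher_walk walk;
	int err;

	blkcipher_walk_init(&walk, dst, src, nbytes);

	/* request kmap() instead of kmap_atomic() for the walk buffers */
	desc->flags |= CRYPTO_TFM_SGWALK_MAY_SLEEP;

	err = blkcipher_walk_virt(desc, &walk);
	while (!err && walk.nbytes) {
		/* hand walk.src.virt.addr / walk.dst.virt.addr to a data lane here */
		err = blkcipher_walk_done(desc, &walk, 0);
	}
	return err;
}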

For an introduction to the multi-buffer implementation, please see
http://www.intel.com/content/www/us/en/communications/communications-ia-multi-buffer-paper.html

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 crypto/Kconfig               |  16 +++
 crypto/blkcipher.c           |  29 ++++-
 crypto/mcryptd.c             | 256 ++++++++++++++++++++++++++++++++++++++++++-
 crypto/scatterwalk.c         |   7 ++
 include/crypto/algapi.h      |   1 +
 include/crypto/mcryptd.h     |  36 ++++++
 include/crypto/scatterwalk.h |   6 +
 include/linux/crypto.h       |   1 +
 8 files changed, 347 insertions(+), 5 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 7240821..6b51084 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -888,6 +888,22 @@ config CRYPTO_AES_NI_INTEL
 	  ECB, CBC, LRW, PCBC, XTS. The 64 bit version has additional
 	  acceleration for CTR.
 
+config CRYPTO_AES_CBC_MB
+	tristate "AES CBC algorithm (x86_64 Multi-Buffer, Experimental)"
+	depends on X86 && 64BIT
+	select CRYPTO_ABLK_HELPER
+	select CRYPTO_MCRYPTD
+	help
+	  AES CBC encryption implemented using multi-buffer technique.
+	  This algorithm computes on multiple data lanes concurrently with
+	  SIMD instructions for better throughput.  It should only be
+	  used when there is significant work to generate many separate
+	  crypto requests that keep all the data lanes filled to get
+	  the performance benefit.  If the data lanes are unfilled, a
+	  flush operation will be initiated after some delay to process
+	  the existing crypto jobs, adding some extra latency at low
+	  load case.
+
 config CRYPTO_AES_SPARC64
 	tristate "AES cipher algorithms (SPARC64)"
 	depends on SPARC64
diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
index 11b9814..9fd4028 100644
--- a/crypto/blkcipher.c
+++ b/crypto/blkcipher.c
@@ -35,6 +35,9 @@ enum {
 	BLKCIPHER_WALK_SLOW = 1 << 1,
 	BLKCIPHER_WALK_COPY = 1 << 2,
 	BLKCIPHER_WALK_DIFF = 1 << 3,
+	/* deal with scenarios where we can sleep during sg walk */
+	/* when we process part of a request */
+	BLKCIPHER_WALK_MAY_SLEEP = 1 << 4,
 };
 
 static int blkcipher_walk_next(struct blkcipher_desc *desc,
@@ -44,22 +47,38 @@ static int blkcipher_walk_first(struct blkcipher_desc *desc,
 
 static inline void blkcipher_map_src(struct blkcipher_walk *walk)
 {
-	walk->src.virt.addr = scatterwalk_map(&walk->in);
+	/* add support for asynchronous requests which need no atomic map */
+	if (walk->flags & BLKCIPHER_WALK_MAY_SLEEP)
+		walk->src.virt.addr = scatterwalk_map_nonatomic(&walk->in);
+	else
+		walk->src.virt.addr = scatterwalk_map(&walk->in);
 }
 
 static inline void blkcipher_map_dst(struct blkcipher_walk *walk)
 {
-	walk->dst.virt.addr = scatterwalk_map(&walk->out);
+	/* add support for asynchronous requests which need no atomic map */
+	if (walk->flags & BLKCIPHER_WALK_MAY_SLEEP)
+		walk->dst.virt.addr = scatterwalk_map_nonatomic(&walk->out);
+	else
+		walk->dst.virt.addr = scatterwalk_map(&walk->out);
 }
 
 static inline void blkcipher_unmap_src(struct blkcipher_walk *walk)
 {
-	scatterwalk_unmap(walk->src.virt.addr);
+	/* add support for asynchronous requests which need no atomic map */
+	if (walk->flags & BLKCIPHER_WALK_MAY_SLEEP)
+		scatterwalk_unmap_nonatomic(walk->src.virt.addr);
+	else
+		scatterwalk_unmap(walk->src.virt.addr);
 }
 
 static inline void blkcipher_unmap_dst(struct blkcipher_walk *walk)
 {
-	scatterwalk_unmap(walk->dst.virt.addr);
+	/* add support for asynchronous requests which need no atomic map */
+	if (walk->flags & BLKCIPHER_WALK_MAY_SLEEP)
+		scatterwalk_unmap_nonatomic(walk->dst.virt.addr);
+	else
+		scatterwalk_unmap(walk->dst.virt.addr);
 }
 
 /* Get a spot of the specified length that does not straddle a page.
@@ -299,6 +318,8 @@ static inline int blkcipher_copy_iv(struct blkcipher_walk *walk)
 int blkcipher_walk_virt(struct blkcipher_desc *desc,
 			struct blkcipher_walk *walk)
 {
+	if (desc->flags & CRYPTO_TFM_SGWALK_MAY_SLEEP)
+		walk->flags |= BLKCIPHER_WALK_MAY_SLEEP;
 	walk->flags &= ~BLKCIPHER_WALK_PHYS;
 	walk->walk_blocksize = crypto_blkcipher_blocksize(desc->tfm);
 	walk->cipher_blocksize = walk->walk_blocksize;
diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
index fe5b495a..01f747c 100644
--- a/crypto/mcryptd.c
+++ b/crypto/mcryptd.c
@@ -116,8 +116,28 @@ static int mcryptd_enqueue_request(struct mcryptd_queue *queue,
 	return err;
 }
 
+static int mcryptd_enqueue_blkcipher_request(struct mcryptd_queue *queue,
+				  struct crypto_async_request *request,
+				  struct mcryptd_blkcipher_request_ctx *rctx)
+{
+	int cpu, err;
+	struct mcryptd_cpu_queue *cpu_queue;
+
+	cpu = get_cpu();
+	cpu_queue = this_cpu_ptr(queue->cpu_queue);
+	rctx->tag.cpu = cpu;
+
+	err = crypto_enqueue_request(&cpu_queue->queue, request);
+	pr_debug("enqueue request: cpu %d cpu_queue %p request %p\n",
+		 cpu, cpu_queue, request);
+	queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
+	put_cpu();
+
+	return err;
+}
+
 /*
- * Try to opportunisticlly flush the partially completed jobs if
+ * Try to opportunistically flush the partially completed jobs if
  * crypto daemon is the only task running.
  */
 static void mcryptd_opportunistic_flush(void)
@@ -225,6 +245,130 @@ static inline struct mcryptd_queue *mcryptd_get_queue(struct crypto_tfm *tfm)
 	return ictx->queue;
 }
 
+static int mcryptd_blkcipher_setkey(struct crypto_ablkcipher *parent,
+				   const u8 *key, unsigned int keylen)
+{
+	struct mcryptd_blkcipher_ctx *ctx = crypto_ablkcipher_ctx(parent);
+	struct crypto_blkcipher *child = ctx->child;
+	int err;
+
+	crypto_blkcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_blkcipher_set_flags(child, crypto_ablkcipher_get_flags(parent) &
+					  CRYPTO_TFM_REQ_MASK);
+	err = crypto_blkcipher_setkey(child, key, keylen);
+	crypto_ablkcipher_set_flags(parent, crypto_blkcipher_get_flags(child) &
+					    CRYPTO_TFM_RES_MASK);
+	return err;
+}
+
+static void mcryptd_blkcipher_crypt(struct ablkcipher_request *req,
+				   struct crypto_blkcipher *child,
+				   int err,
+				   int (*crypt)(struct blkcipher_desc *desc,
+						struct scatterlist *dst,
+						struct scatterlist *src,
+						unsigned int len))
+{
+	struct mcryptd_blkcipher_request_ctx *rctx;
+	struct blkcipher_desc desc;
+
+	rctx = ablkcipher_request_ctx(req);
+
+	if (unlikely(err == -EINPROGRESS))
+		goto out;
+
+	/* set up the blkcipher request to work on */
+	desc.tfm = child;
+	desc.info = req->info;
+	desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+	rctx->desc = desc;
+
+	/*
+	 * pass addr of descriptor stored in the request context
+	 * so that the callee can get to the request context
+	 */
+	err = crypt(&rctx->desc, req->dst, req->src, req->nbytes);
+	if (err) {
+		req->base.complete = rctx->complete;
+		goto out;
+	}
+	return;
+
+out:
+	local_bh_disable();
+	rctx->complete(&req->base, err);
+	local_bh_enable();
+
+}
+
+static void mcryptd_blkcipher_encrypt(struct crypto_async_request *req, int err)
+{
+	struct mcryptd_blkcipher_ctx *ctx = crypto_tfm_ctx(req->tfm);
+	struct crypto_blkcipher *child = ctx->child;
+
+	mcryptd_blkcipher_crypt(ablkcipher_request_cast(req), child, err,
+			       crypto_blkcipher_crt(child)->encrypt);
+}
+
+static void mcryptd_blkcipher_decrypt(struct crypto_async_request *req, int err)
+{
+	struct mcryptd_blkcipher_ctx *ctx = crypto_tfm_ctx(req->tfm);
+	struct crypto_blkcipher *child = ctx->child;
+
+	mcryptd_blkcipher_crypt(ablkcipher_request_cast(req), child, err,
+			       crypto_blkcipher_crt(child)->decrypt);
+}
+
+static int mcryptd_blkcipher_enqueue(struct ablkcipher_request *req,
+				    crypto_completion_t complete)
+{
+	struct mcryptd_blkcipher_request_ctx *rctx =
+			ablkcipher_request_ctx(req);
+	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
+	struct mcryptd_queue *queue;
+
+	queue = mcryptd_get_queue(crypto_ablkcipher_tfm(tfm));
+	rctx->complete = req->base.complete;
+	req->base.complete = complete;
+
+	return mcryptd_enqueue_blkcipher_request(queue, &req->base, rctx);
+}
+
+static int mcryptd_blkcipher_encrypt_enqueue(struct ablkcipher_request *req)
+{
+	return mcryptd_blkcipher_enqueue(req, mcryptd_blkcipher_encrypt);
+}
+
+static int mcryptd_blkcipher_decrypt_enqueue(struct ablkcipher_request *req)
+{
+	return mcryptd_blkcipher_enqueue(req, mcryptd_blkcipher_decrypt);
+}
+
+static int mcryptd_blkcipher_init_tfm(struct crypto_tfm *tfm)
+{
+	struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+	struct mcryptd_instance_ctx *ictx = crypto_instance_ctx(inst);
+	struct crypto_spawn *spawn = &ictx->spawn;
+	struct mcryptd_blkcipher_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct crypto_blkcipher *cipher;
+
+	cipher = crypto_spawn_blkcipher(spawn);
+	if (IS_ERR(cipher))
+		return PTR_ERR(cipher);
+
+	ctx->child = cipher;
+	tfm->crt_ablkcipher.reqsize =
+		sizeof(struct mcryptd_blkcipher_request_ctx);
+	return 0;
+}
+
+static void mcryptd_blkcipher_exit_tfm(struct crypto_tfm *tfm)
+{
+	struct mcryptd_blkcipher_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_blkcipher(ctx->child);
+}
+
 static void *mcryptd_alloc_instance(struct crypto_alg *alg, unsigned int head,
 				   unsigned int tail)
 {
@@ -272,6 +416,70 @@ static inline void mcryptd_check_internal(struct rtattr **tb, u32 *type,
 		*mask |= CRYPTO_ALG_INTERNAL;
 }
 
+static int mcryptd_create_blkcipher(struct crypto_template *tmpl,
+				   struct rtattr **tb,
+				   struct mcryptd_queue *queue)
+{
+	struct mcryptd_instance_ctx *ctx;
+	struct crypto_instance *inst;
+	struct crypto_alg *alg;
+	u32 type = CRYPTO_ALG_TYPE_BLKCIPHER;
+	u32 mask = CRYPTO_ALG_TYPE_MASK;
+	int err;
+
+	mcryptd_check_internal(tb, &type, &mask);
+
+	alg = crypto_get_attr_alg(tb, type, mask);
+	if (IS_ERR(alg))
+		return PTR_ERR(alg);
+
+	pr_debug("crypto: mcryptd crypto alg: %s\n", alg->cra_name);
+	inst = mcryptd_alloc_instance(alg, 0, sizeof(*ctx));
+	err = PTR_ERR(inst);
+	if (IS_ERR(inst))
+		goto out_put_alg;
+
+	ctx = crypto_instance_ctx(inst);
+	ctx->queue = queue;
+
+	err = crypto_init_spawn(&ctx->spawn, alg, inst,
+				CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
+	if (err)
+		goto out_free_inst;
+
+	type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC;
+	if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
+		type |= CRYPTO_ALG_INTERNAL;
+	inst->alg.cra_flags = type;
+	inst->alg.cra_type = &crypto_ablkcipher_type;
+
+	inst->alg.cra_ablkcipher.ivsize = alg->cra_blkcipher.ivsize;
+	inst->alg.cra_ablkcipher.min_keysize = alg->cra_blkcipher.min_keysize;
+	inst->alg.cra_ablkcipher.max_keysize = alg->cra_blkcipher.max_keysize;
+
+	inst->alg.cra_ablkcipher.geniv = alg->cra_blkcipher.geniv;
+
+	inst->alg.cra_ctxsize = sizeof(struct mcryptd_blkcipher_ctx);
+
+	inst->alg.cra_init = mcryptd_blkcipher_init_tfm;
+	inst->alg.cra_exit = mcryptd_blkcipher_exit_tfm;
+
+	inst->alg.cra_ablkcipher.setkey = mcryptd_blkcipher_setkey;
+	inst->alg.cra_ablkcipher.encrypt = mcryptd_blkcipher_encrypt_enqueue;
+	inst->alg.cra_ablkcipher.decrypt = mcryptd_blkcipher_decrypt_enqueue;
+
+	err = crypto_register_instance(tmpl, inst);
+	if (err) {
+		crypto_drop_spawn(&ctx->spawn);
+out_free_inst:
+		kfree(inst);
+	}
+
+out_put_alg:
+	crypto_mod_put(alg);
+	return err;
+}
+
 static int mcryptd_hash_init_tfm(struct crypto_tfm *tfm)
 {
 	struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
@@ -563,6 +771,8 @@ static int mcryptd_create(struct crypto_template *tmpl, struct rtattr **tb)
 		return PTR_ERR(algt);
 
 	switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
+	case CRYPTO_ALG_TYPE_BLKCIPHER:
+		return mcryptd_create_blkcipher(tmpl, tb, &mqueue);
 	case CRYPTO_ALG_TYPE_DIGEST:
 		return mcryptd_create_hash(tmpl, tb, &mqueue);
 	break;
@@ -577,6 +787,10 @@ static void mcryptd_free(struct crypto_instance *inst)
 	struct hashd_instance_ctx *hctx = crypto_instance_ctx(inst);
 
 	switch (inst->alg.cra_flags & CRYPTO_ALG_TYPE_MASK) {
+	case CRYPTO_ALG_TYPE_BLKCIPHER:
+		crypto_drop_spawn(&ctx->spawn);
+		kfree(inst);
+		return;
 	case CRYPTO_ALG_TYPE_AHASH:
 		crypto_drop_shash(&hctx->spawn);
 		kfree(ahash_instance(inst));
@@ -594,6 +808,46 @@ static struct crypto_template mcryptd_tmpl = {
 	.module = THIS_MODULE,
 };
 
+struct mcryptd_ablkcipher *mcryptd_alloc_ablkcipher(const char *alg_name,
+						  u32 type, u32 mask)
+{
+	char cryptd_alg_name[CRYPTO_MAX_ALG_NAME];
+	struct crypto_tfm *tfm;
+
+	if (snprintf(cryptd_alg_name, CRYPTO_MAX_ALG_NAME,
+		     "mcryptd(%s)", alg_name) >= CRYPTO_MAX_ALG_NAME)
+		return ERR_PTR(-EINVAL);
+	type &= ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV);
+	type |= CRYPTO_ALG_TYPE_BLKCIPHER;
+	mask &= ~CRYPTO_ALG_TYPE_MASK;
+	mask |= (CRYPTO_ALG_GENIV | CRYPTO_ALG_TYPE_BLKCIPHER_MASK);
+	tfm = crypto_alloc_base(cryptd_alg_name, type, mask);
+	if (IS_ERR(tfm))
+		return ERR_CAST(tfm);
+	if (tfm->__crt_alg->cra_module != THIS_MODULE) {
+		crypto_free_tfm(tfm);
+		return ERR_PTR(-EINVAL);
+	}
+
+	return __mcryptd_ablkcipher_cast(__crypto_ablkcipher_cast(tfm));
+}
+EXPORT_SYMBOL_GPL(mcryptd_alloc_ablkcipher);
+
+struct crypto_blkcipher *mcryptd_ablkcipher_child(
+		struct mcryptd_ablkcipher *tfm)
+{
+	struct mcryptd_blkcipher_ctx *ctx = crypto_ablkcipher_ctx(&tfm->base);
+
+	return ctx->child;
+}
+EXPORT_SYMBOL_GPL(mcryptd_ablkcipher_child);
+
+void mcryptd_free_ablkcipher(struct mcryptd_ablkcipher *tfm)
+{
+	crypto_free_ablkcipher(&tfm->base);
+}
+EXPORT_SYMBOL_GPL(mcryptd_free_ablkcipher);
+
 struct mcryptd_ahash *mcryptd_alloc_ahash(const char *alg_name,
 					u32 type, u32 mask)
 {
diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index ea5815c..edb9ac7 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -47,6 +47,13 @@ void *scatterwalk_map(struct scatter_walk *walk)
 }
 EXPORT_SYMBOL_GPL(scatterwalk_map);
 
+void *scatterwalk_map_nonatomic(struct scatter_walk *walk)
+{
+	return kmap(scatterwalk_page(walk)) +
+	       offset_in_page(walk->offset);
+}
+EXPORT_SYMBOL_GPL(scatterwalk_map_nonatomic);
+
 static void scatterwalk_pagedone(struct scatter_walk *walk, int out,
 				 unsigned int more)
 {
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index c9fe145..4c5633f 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -301,6 +301,7 @@ static inline void blkcipher_walk_init(struct blkcipher_walk *walk,
 	walk->in.sg = src;
 	walk->out.sg = dst;
 	walk->total = nbytes;
+	walk->flags = 0;
 }
 
 static inline void ablkcipher_walk_init(struct ablkcipher_walk *walk,
diff --git a/include/crypto/mcryptd.h b/include/crypto/mcryptd.h
index c23ee1f..29f85bd 100644
--- a/include/crypto/mcryptd.h
+++ b/include/crypto/mcryptd.h
@@ -13,6 +13,7 @@
 #include <linux/crypto.h>
 #include <linux/kernel.h>
 #include <crypto/hash.h>
+#include <crypto/b128ops.h>
 
 struct mcryptd_ahash {
 	struct crypto_ahash base;
@@ -95,6 +96,41 @@ struct mcryptd_alg_state {
 	unsigned long (*flusher)(struct mcryptd_alg_cstate *cstate);
 };
 
+struct mcryptd_ablkcipher {
+	struct crypto_ablkcipher base;
+};
+
+static inline struct mcryptd_ablkcipher *__mcryptd_ablkcipher_cast(
+	struct crypto_ablkcipher *tfm)
+{
+	return (struct mcryptd_ablkcipher *)tfm;
+}
+
+/* alg_name should be algorithm to be cryptd-ed */
+struct mcryptd_ablkcipher *mcryptd_alloc_ablkcipher(const char *alg_name,
+						  u32 type, u32 mask);
+struct crypto_blkcipher *mcryptd_ablkcipher_child(
+	struct mcryptd_ablkcipher *tfm);
+void mcryptd_free_ablkcipher(struct mcryptd_ablkcipher *tfm);
+
+struct mcryptd_blkcipher_ctx {
+	struct crypto_blkcipher *child;
+	struct mcryptd_alg_state *alg_state;
+};
+
+struct mcryptd_blkcipher_request_ctx {
+	struct list_head waiter;
+	crypto_completion_t complete;
+	struct mcryptd_tag tag;
+	struct blkcipher_walk walk;
+	u8 flag;
+	int nbytes;
+	int error;
+	struct blkcipher_desc desc;
+	void *job;
+	u128 seq_iv;	/* running iv of a sequence */
+};
+
 /* return delay in jiffies from current time */
 static inline unsigned long get_delay(unsigned long t)
 {
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 35f99b6..9e66ecf 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -83,10 +83,16 @@ static inline void scatterwalk_unmap(void *vaddr)
 	kunmap_atomic(vaddr);
 }
 
+static inline void scatterwalk_unmap_nonatomic(void *vaddr)
+{
+	kunmap(vaddr);
+}
+
 void scatterwalk_start(struct scatter_walk *walk, struct scatterlist *sg);
 void scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
 			    size_t nbytes, int out);
 void *scatterwalk_map(struct scatter_walk *walk);
+void *scatterwalk_map_nonatomic(struct scatter_walk *walk);
 void scatterwalk_done(struct scatter_walk *walk, int out, int more);
 
 void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index e71cb70..deea374 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -115,6 +115,7 @@
 #define CRYPTO_TFM_RES_BAD_KEY_SCHED 	0x00400000
 #define CRYPTO_TFM_RES_BAD_BLOCK_LEN 	0x00800000
 #define CRYPTO_TFM_RES_BAD_FLAGS 	0x01000000
+#define CRYPTO_TFM_SGWALK_MAY_SLEEP	0x02000000
 
 /*
  * Miscellaneous stuff.
-- 
1.7.11.7


* [PATCH v2 2/5] crypto: AES CBC multi-buffer data structures
       [not found] <cover.1446158797.git.tim.c.chen@linux.intel.com>
  2015-10-29 22:20 ` [PATCH v2 0/5] crypto: x86 AES-CBC encryption with multibuffer Tim Chen
  2015-10-29 22:21 ` [PATCH v2 1/5] crypto: Multi-buffer encryption infrastructure support Tim Chen
@ 2015-10-29 22:21 ` Tim Chen
  2015-10-29 22:21 ` [PATCH v2 3/5] crypto: AES CBC multi-buffer scheduler Tim Chen
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Tim Chen @ 2015-10-29 22:21 UTC (permalink / raw)
  To: Herbert Xu, H. Peter Anvin, David S. Miller, Stephan Mueller
  Cc: Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Tim Chen, Jussi Kivilinna, Stephan Mueller,
	linux-crypto, linux-kernel


This patch introduces the data structures and function prototypes
needed for AES CBC encryption using the multi-buffer technique.
Included are the structures for the multi-buffer AES CBC job, the job
scheduler in C, and the data structure definitions in x86 assembly
code.

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h    |  96 +++++++++
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h    | 131 ++++++++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S | 270 +++++++++++++++++++++++++
 arch/x86/crypto/aes-cbc-mb/reg_sizes.S         | 125 ++++++++++++
 4 files changed, 622 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/reg_sizes.S

diff --git a/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
new file mode 100644
index 0000000..5493f83
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
@@ -0,0 +1,96 @@
+/*
+ *	Header file for multi buffer AES CBC algorithm manager
+ *	that deals with 8 buffers at a time
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ifndef __AES_CBC_MB_CTX_H
+#define __AES_CBC_MB_CTX_H
+
+
+#include <linux/types.h>
+
+#include "aes_cbc_mb_mgr.h"
+
+#define CBC_ENCRYPT	0x01
+#define CBC_DECRYPT	0x02
+#define CBC_START	0x04
+#define CBC_DONE	0x08
+
+#define CBC_CTX_STS_IDLE       0x00
+#define CBC_CTX_STS_PROCESSING 0x01
+#define CBC_CTX_STS_LAST       0x02
+#define CBC_CTX_STS_COMPLETE   0x04
+
+enum cbc_ctx_error {
+	CBC_CTX_ERROR_NONE               =  0,
+	CBC_CTX_ERROR_INVALID_FLAGS      = -1,
+	CBC_CTX_ERROR_ALREADY_PROCESSING = -2,
+	CBC_CTX_ERROR_ALREADY_COMPLETED  = -3,
+};
+
+#define cbc_ctx_init(ctx, nbytes, op) \
+	do { \
+		(ctx)->flag = (op) | CBC_START; \
+		(ctx)->nbytes = nbytes; \
+	} while (0)
+
+/* AESNI routines to perform cbc decrypt and key expansion */
+
+asmlinkage void aesni_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out,
+			      const u8 *in, unsigned int len, u8 *iv);
+asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
+			     unsigned int key_len);
+
+#endif /* __AES_CBC_MB_CTX_H */
diff --git a/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
new file mode 100644
index 0000000..0def82e
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
@@ -0,0 +1,131 @@
+/*
+ *	Header file for multi buffer AES CBC algorithm manager
+ *	that deals with 8 buffers at a time
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ifndef __AES_CBC_MB_MGR_H
+#define __AES_CBC_MB_MGR_H
+
+
+#include <linux/types.h>
+#include <linux/printk.h>
+#include <crypto/aes.h>
+#include <crypto/b128ops.h>
+
+#define MAX_AES_JOBS		128
+
+enum job_sts {
+	STS_UNKNOWN = 0,
+	STS_BEING_PROCESSED = 1,
+	STS_COMPLETED = 2,
+	STS_INTERNAL_ERROR = 3,
+	STS_ERROR = 4
+};
+
+/* AES CBC multi buffer in order job structure */
+
+struct job_aes_cbc {
+	u8	*plaintext;	/* pointer to plaintext */
+	u8	*ciphertext;	/* pointer to ciphertext */
+	u128	iv;		/* initialization vector */
+	u128	*keys;		/* pointer to keys */
+	u32	len;		/* length in bytes, must be multiple of 16 */
+	enum	job_sts status;	/* status enumeration */
+	void	*user_data;	/* pointer to user data */
+	u32	key_len;	/* key length */
+};
+
+struct aes_cbc_args_x8 {
+	u8	*arg_in[8];	/* array of 8 pointers to in text */
+	u8	*arg_out[8];	/* array of 8 pointers to out text */
+	u128	*arg_keys[8];	/* array of 8 pointers to keys */
+	u128	arg_iv[8] __aligned(16);	/* array of 8 128-bit IVs */
+};
+
+struct aes_cbc_mb_mgr_inorder_x8 {
+	struct aes_cbc_args_x8 args;
+	u16 lens[8] __aligned(16);
+	u64 unused_lanes; /* each nibble is index (0...7) of unused lanes */
+	/* nibble 8 is set to F as a flag */
+	struct job_aes_cbc *job_in_lane[8];
+	/* In-order components */
+	u32 earliest_job; /* byte offset, -1 if none */
+	u32 next_job;      /* byte offset */
+	struct job_aes_cbc jobs[MAX_AES_JOBS];
+};
+
+/* define AES CBC multi buffer manager function proto */
+struct job_aes_cbc *aes_cbc_submit_job_inorder_128x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_submit_job_inorder_192x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_submit_job_inorder_256x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_flush_job_inorder_x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_get_next_job_inorder_x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_get_completed_job_inorder_x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_init_mb_mgr_inorder_x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_submit_job_ooo_x8(struct aes_cbc_mb_mgr_inorder_x8 *state,
+		struct job_aes_cbc *job);
+void aes_cbc_flush_job_ooo_x8(struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_flush_job_ooo_128x8(struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_flush_job_ooo_192x8(struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_flush_job_ooo_256x8(struct aes_cbc_mb_mgr_inorder_x8 *state);
+
+#endif /* __AES_CBC_MB_MGR_H */
diff --git a/arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S b/arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
new file mode 100644
index 0000000..0e72179
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
@@ -0,0 +1,270 @@
+/*
+ *	Header file: Data structure: AES CBC multibuffer optimization
+ *				(x86_64)
+ *
+ * This is the generic header file defining data structures for
+ * AES multi-buffer CBC implementation. This file is to be included
+ * by the AES CBC multibuffer scheduler.
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+/* This is the AES CBC BY8 implementation */
+#define	NBY	8
+
+#define AES_KEYSIZE_128 16
+#define AES_KEYSIZE_192 24
+#define AES_KEYSIZE_256 32
+
+#ifndef _AES_CBC_MB_MGR_DATASTRUCT_ASM_
+#define _AES_CBC_MB_MGR_DATASTRUCT_ASM_
+
+/* Macros for defining data structures */
+
+/* Usage example */
+
+/*
+ * Alternate "struct-like" syntax:
+ *	STRUCT job_aes2
+ *	RES_Q	.plaintext,	1
+ *	RES_Q	.ciphertext,	1
+ *	RES_DQ	.IV,		1
+ *	RES_B	.nested,	_JOB_AES_SIZE, _JOB_AES_ALIGN
+ *	RES_U	.union,		size1, align1, \
+ *				size2, align2, \
+ *				...
+ *	ENDSTRUCT
+ *	; Following only needed if nesting
+ *	%assign job_aes2_size	_FIELD_OFFSET
+ *	%assign job_aes2_align	_STRUCT_ALIGN
+ *
+ * RES_* macros take a name, a count and an optional alignment.
+ * The count is in terms of the base size of the macro, and the
+ * default alignment is the base size.
+ * The macros are:
+ * Macro    Base size
+ * RES_B	    1
+ * RES_W	    2
+ * RES_D     4
+ * RES_Q     8
+ * RES_DQ   16
+ * RES_Y    32
+ * RES_Z    64
+ *
+ * RES_U defines a union. Its arguments are a name and two or more
+ * pairs of "size, alignment"
+ *
+ * The two assigns are only needed if this structure is being nested
+ * within another. Even if the assigns are not done, one can still use
+ * STRUCT_NAME_size as the size of the structure.
+ *
+ * Note that for nesting, you still need to assign to STRUCT_NAME_size.
+ *
+ * The differences between this and using "struct" directly are that each
+ * type is implicitly aligned to its natural length (although this can be
+ * over-ridden with an explicit third parameter), and that the structure
+ * is padded at the end to its overall alignment.
+ *
+ */
+
+/* START_FIELDS */
+
+.macro START_FIELDS
+_FIELD_OFFSET	=	0
+_STRUCT_ALIGN	=	0
+.endm
+
+/* FIELD name size align */
+.macro FIELD name, size, align
+
+	_FIELD_OFFSET = (_FIELD_OFFSET + (\align) - 1) & (~ ((\align)-1))
+	\name = _FIELD_OFFSET
+	_FIELD_OFFSET = _FIELD_OFFSET + (\size)
+	.if (\align > _STRUCT_ALIGN)
+		_STRUCT_ALIGN = \align
+	.endif
+.endm
+
+/* END_FIELDS */
+
+.macro END_FIELDS
+	_FIELD_OFFSET = \
+	(_FIELD_OFFSET + _STRUCT_ALIGN-1) & (~ (_STRUCT_ALIGN-1))
+.endm
+
+.macro STRUCT p1
+	START_FIELDS
+	.struct \p1
+.endm
+
+.macro ENDSTRUCT
+	tmp = _FIELD_OFFSET
+	END_FIELDS
+	tmp = (_FIELD_OFFSET - %%tmp)
+	.if (tmp > 0)
+		.lcomm	tmp
+.endif
+.endstruc
+.endm
+
+/* RES_int name size align */
+.macro RES_int p1, p2, p3
+	name = \p1
+	size = \p2
+	align = \p3
+
+	_FIELD_OFFSET = (_FIELD_OFFSET + (align) - 1) & (~ ((align)-1))
+	.align align
+	.lcomm name size
+	_FIELD_OFFSET = _FIELD_OFFSET + (size)
+	.if (align > _STRUCT_ALIGN)
+		_STRUCT_ALIGN = align
+	.endif
+.endm
+
+/* macro RES_B name, size [, align] */
+.macro RES_B _name, _size, _align=1
+	RES_int _name, _size, _align
+.endm
+
+/* macro RES_W name, size [, align] */
+.macro RES_W _name, _size, _align=2
+	RES_int _name, 2*(_size), _align
+.endm
+
+/* macro RES_D name, size [, align] */
+.macro RES_D _name, _size, _align=4
+	RES_int _name, 4*(_size), _align
+.endm
+
+/* macro RES_Q name, size [, align] */
+.macro RES_Q _name, _size, _align=8
+	RES_int _name, 8*(_size), _align
+.endm
+
+/* macro RES_DQ name, size [, align] */
+.macro RES_DQ _name, _size, _align=16
+	RES_int _name, 16*(_size), _align
+.endm
+
+/* macro RES_Y name, size [, align] */
+.macro RES_Y _name, _size, _align=32
+	RES_int _name, 32*(_size), _align
+.endm
+
+/*; macro RES_Z name, size [, align] */
+.macro RES_Z _name, _size, _align=64
+	RES_int _name, 64*(_size), _align
+.endm
+#endif /* _AES_CBC_MB_MGR_DATASTRUCT_ASM_ */
+
+/* Define JOB_AES structure */
+
+START_FIELDS	/* JOB_AES */
+/*	name		size	align	*/
+FIELD	_plaintext,	8,	8	/* pointer to plaintext */
+FIELD	_ciphertext,	8,	8	/* pointer to ciphertext */
+FIELD	_IV,		16,	8	/* IV */
+FIELD	_keys,		8,	8	/* pointer to keys */
+FIELD	_len,		4,	4	/* length in bytes */
+FIELD	_status,	4,	4	/* status enumeration */
+FIELD	_user_data,	8,	8	/* pointer to user data */
+FIELD	_key_len,	4,	8	/* key length */
+
+END_FIELDS
+
+_JOB_AES_size	=	_FIELD_OFFSET
+_JOB_AES_align	=	_STRUCT_ALIGN
+
+START_FIELDS	/* AES_ARGS_XX */
+/*	name		size	align	*/
+FIELD	_arg_in,	8*NBY, 8	/* [] of NBY pointers to in text */
+FIELD	_arg_out,	8*NBY, 8	/* [] of NBY pointers to out text */
+FIELD	_arg_keys,	8*NBY, 8	/* [] of NBY pointers to keys */
+FIELD	_arg_IV,	16*NBY, 16	/* [] of NBY 128-bit IV's */
+
+END_FIELDS
+
+_AES_ARGS_SIZE	=	_FIELD_OFFSET
+_AES_ARGS_ALIGN	=	_STRUCT_ALIGN
+
+START_FIELDS	/* MB_MGR_AES_INORDER_XX */
+/*	name		size	align				*/
+FIELD	_args,		_AES_ARGS_SIZE, _AES_ARGS_ALIGN
+FIELD	_lens,		16,	16	/* lengths */
+FIELD	_unused_lanes,	8,	8
+FIELD	_job_in_lane,	8*NBY,	8	/* [] of NBY pointers to jobs */
+
+FIELD	_earliest_job,	4,	4	/* -1 if none */
+FIELD	_next_job,	4,	4
+FIELD	_jobs,		_JOB_AES_size,	_JOB_AES_align	/*> 8, start of array */
+
+END_FIELDS
+
+/* size is not accurate because the arrays are not represented */
+
+_MB_MGR_AES_INORDER_align	=	_STRUCT_ALIGN
+
+_args_in	=	_args+_arg_in
+_args_out	=	_args+_arg_out
+_args_keys	=	_args+_arg_keys
+_args_IV	=	_args+_arg_IV
+
+
+#define STS_UNKNOWN		0
+#define STS_BEING_PROCESSED	1
+#define STS_COMPLETED		2
+#define STS_INTERNAL_ERROR	3
+#define STS_ERROR		4
+
+#define MAX_AES_JOBS		128
diff --git a/arch/x86/crypto/aes-cbc-mb/reg_sizes.S b/arch/x86/crypto/aes-cbc-mb/reg_sizes.S
new file mode 100644
index 0000000..0a5c850
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/reg_sizes.S
@@ -0,0 +1,125 @@
+/*
+ *	Header file AES CBC multibuffer SSE optimization (x86_64)
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+/* define d and w variants for registers */
+
+#define	raxd	eax
+#define raxw	ax
+#define raxb	al
+
+#define	rbxd	ebx
+#define rbxw	bx
+#define rbxb	bl
+
+#define	rcxd	ecx
+#define rcxw	cx
+#define rcxb	cl
+
+#define	rdxd	edx
+#define rdxw	dx
+#define rdxb	dl
+
+#define	rsid	esi
+#define rsiw	si
+#define rsib	sil
+
+#define	rdid	edi
+#define rdiw	di
+#define rdib	dil
+
+#define	rbpd	ebp
+#define rbpw	bp
+#define rbpb	bpl
+
+#define ymm0x	%xmm0
+#define ymm1x	%xmm1
+#define ymm2x	%xmm2
+#define ymm3x	%xmm3
+#define ymm4x	%xmm4
+#define ymm5x	%xmm5
+#define ymm6x	%xmm6
+#define ymm7x	%xmm7
+#define ymm8x	%xmm8
+#define ymm9x	%xmm9
+#define ymm10x	%xmm10
+#define ymm11x	%xmm11
+#define ymm12x	%xmm12
+#define ymm13x	%xmm13
+#define ymm14x	%xmm14
+#define ymm15x	%xmm15
+
+#define CONCAT(a,b)	a##b
+#define DWORD(reg)	CONCAT(reg, d)
+#define WORD(reg)	CONCAT(reg, w)
+#define BYTE(reg)	CONCAT(reg, b)
+
+#define XWORD(reg)	CONCAT(reg,x)
+
+/* common macros */
+
+/* Generate a label to go to */
+.macro LABEL prefix, num
+\prefix\num\():
+.endm
+
+/*
+ * cond_jump ins, name, suffix
+ * ins - conditional jump instruction to execute
+ * name,suffix - concatenate to form the label to go to
+ */
+.macro cond_jump ins, name, suffix
+	\ins	\name\suffix
+.endm
-- 
1.7.11.7


* [PATCH v2 3/5] crypto: AES CBC multi-buffer scheduler
       [not found] <cover.1446158797.git.tim.c.chen@linux.intel.com>
                   ` (2 preceding siblings ...)
  2015-10-29 22:21 ` [PATCH v2 2/5] crypto: AES CBC multi-buffer data structures Tim Chen
@ 2015-10-29 22:21 ` Tim Chen
  2015-10-29 22:21 ` [PATCH v2 4/5] crypto: AES CBC by8 encryption Tim Chen
  2015-10-29 22:21 ` [PATCH v2 5/5] crypto: AES CBC multi-buffer glue code Tim Chen
  5 siblings, 0 replies; 15+ messages in thread
From: Tim Chen @ 2015-10-29 22:21 UTC (permalink / raw)
  To: Herbert Xu, H. Peter Anvin, David S. Miller, Stephan Mueller
  Cc: Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Tim Chen, Jussi Kivilinna, Stephan Mueller,
	linux-crypto, linux-kernel


This patch implements an in-order scheduler for encrypting multiple
buffers in parallel, supporting AES CBC encryption with key sizes of
128, 192 and 256 bits. It uses 8 data lanes by taking advantage of
SIMD instructions with XMM registers.

The multi-buffer manager and scheduler are mostly written in assembly,
and the initialization support is written in C. The AES CBC
multi-buffer crypto driver interfaces with the multi-buffer manager and
scheduler to support AES CBC encryption in parallel. The scheduler
supports job submission, job flushing, and job retrieval after
completion.

The basic flow of usage of the CBC multibuffer scheduler is as follows:

- The caller allocates an aes_cbc_mb_mgr_inorder_x8 object
and initializes it once by calling aes_cbc_init_mb_mgr_inorder_x8().

- The aes_cbc_mb_mgr_inorder_x8 structure has an array of JOB_AES
objects. Allocation and scheduling of JOB_AES objects are managed
by the multibuffer scheduler support routines. The caller allocates
a JOB_AES using aes_cbc_get_next_job_inorder_x8().

- The returned JOB_AES must be filled in with parameters for CBC
encryption (eg: plaintext buffer, ciphertext buffer, key, iv, etc) and
submitted to the manager object using aes_cbc_submit_job_inorder_xx().

- If the oldest JOB_AES is completed during a call to
aes_cbc_submit_job_inorder_x8(), it is returned. Otherwise,
NULL is returned.

- A call to aes_cbc_flush_job_inorder_x8() always returns the
oldest job, unless the multibuffer manager is empty of jobs.

- A call to aes_cbc_get_completed_job_inorder_x8() returns
a completed job. This routine is useful to process completed
jobs instead of waiting for the flusher to engage.

- When a job is returned from submit or flush, the caller extracts
the useful data and returns it to the multibuffer manager implicitly
by the next call to aes_cbc_get_next_job_xx().

Jobs are always returned from the submit or flush routines in the order
they were submitted (hence "inorder"). A job allocated using
aes_cbc_get_next_job_inorder_x8() must be filled in and submitted before
another call. A job returned by aes_cbc_submit_job_inorder_x8() or
aes_cbc_flush_job_inorder_x8() is 'deallocated' upon the next call to
get a job structure. Calls to get_next_job() cannot fail. If all jobs
are allocated after a call to get_next_job(), the subsequent call to
submit always returns the oldest job in a completed state.
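
A simplified sketch of this flow, using the scheduler API above, is
shown below.  It is illustrative only (not code from this series):
error handling, key expansion and FPU context handling are omitted, and
a 128-bit key is assumed:

#include "aes_cbc_mb_mgr.h"

static struct aes_cbc_mb_mgr_inorder_x8 mgr;

static void example_setup(void)
{
	aes_cbc_init_mb_mgr_inorder_x8(&mgr);	/* once, before any jobs */
}

/* Queue one buffer; may return an older job that just completed. */
static struct job_aes_cbc *example_encrypt_one(u8 *in, u8 *out,
					       u128 *expanded_keys,
					       u128 iv, u32 len)
{
	struct job_aes_cbc *job = aes_cbc_get_next_job_inorder_x8(&mgr);

	job->plaintext  = in;
	job->ciphertext = out;
	job->keys       = expanded_keys;
	job->iv         = iv;
	job->len        = len;			/* multiple of 16 bytes */
	job->key_len    = AES_KEYSIZE_128;

	return aes_cbc_submit_job_inorder_128x8(&mgr);
}

/* Drain any unfinished lanes, e.g. when no more data is expected. */
static void example_drain(void)
{
	struct job_aes_cbc *job;

	while ((job = aes_cbc_flush_job_inorder_x8(&mgr)))
		;	/* consume completed job results here */
}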

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c       | 145 +++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S | 222 +++++++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S     | 416 +++++++++++++++++++++
 3 files changed, 783 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S

diff --git a/arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c b/arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
new file mode 100644
index 0000000..7a7f8a1
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
@@ -0,0 +1,145 @@
+/*
+ * Initialization code for multi buffer AES CBC algorithm
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#include "aes_cbc_mb_mgr.h"
+
+void aes_cbc_init_mb_mgr_inorder_x8(struct aes_cbc_mb_mgr_inorder_x8 *state)
+{
+	/* Init "out of order" components */
+	state->unused_lanes = 0xF76543210;
+	state->job_in_lane[0] = NULL;
+	state->job_in_lane[1] = NULL;
+	state->job_in_lane[2] = NULL;
+	state->job_in_lane[3] = NULL;
+	state->job_in_lane[4] = NULL;
+	state->job_in_lane[5] = NULL;
+	state->job_in_lane[6] = NULL;
+	state->job_in_lane[7] = NULL;
+
+	/* Init "in order" components */
+	state->next_job = 0;
+	state->earliest_job = -1;
+
+}
+
+#define JOBS(offset) ((struct job_aes_cbc *)(((u64)state->jobs)+offset))
+
+struct job_aes_cbc *
+aes_cbc_get_next_job_inorder_x8(struct aes_cbc_mb_mgr_inorder_x8 *state)
+{
+	return JOBS(state->next_job);
+}
+
+struct job_aes_cbc *
+aes_cbc_flush_job_inorder_x8(struct aes_cbc_mb_mgr_inorder_x8 *state)
+{
+	struct job_aes_cbc *job;
+
+	/* earliest_job is unsigned; compare against -1 (not < 0) for empty */
+	if (state->earliest_job == -1)
+		return NULL; /* empty */
+
+	job = JOBS(state->earliest_job);
+	while (job->status != STS_COMPLETED) {
+		switch (job->key_len) {
+		case AES_KEYSIZE_128:
+			aes_cbc_flush_job_ooo_128x8(state);
+			break;
+		case AES_KEYSIZE_192:
+			aes_cbc_flush_job_ooo_192x8(state);
+			break;
+		case AES_KEYSIZE_256:
+			aes_cbc_flush_job_ooo_256x8(state);
+			break;
+		default:
+			break;
+		}
+	}
+
+	/* advance earliest job */
+	state->earliest_job += sizeof(struct job_aes_cbc);
+	if (state->earliest_job == MAX_AES_JOBS * sizeof(struct job_aes_cbc))
+		state->earliest_job = 0;
+
+	if (state->earliest_job == state->next_job)
+		state->earliest_job = -1;
+
+	return job;
+}
+
+struct job_aes_cbc *
+aes_cbc_get_completed_job_inorder_x8(struct aes_cbc_mb_mgr_inorder_x8 *state)
+{
+	struct job_aes_cbc *job;
+
+	if (state->earliest_job == -1)
+		return NULL; /* empty */
+
+	job = JOBS(state->earliest_job);
+	if (job->status != STS_COMPLETED)
+		return NULL;
+
+	state->earliest_job += sizeof(struct job_aes_cbc);
+
+	if (state->earliest_job == MAX_AES_JOBS * sizeof(struct job_aes_cbc))
+		state->earliest_job = 0;
+	if (state->earliest_job == state->next_job)
+		state->earliest_job = -1;
+
+	/* we have a completed job */
+	return job;
+}
diff --git a/arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S b/arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
new file mode 100644
index 0000000..cdd66cb
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
@@ -0,0 +1,222 @@
+/*
+ *	AES CBC by8 multibuffer inorder scheduler optimization (x86_64)
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#include <linux/linkage.h>
+#include "mb_mgr_datastruct.S"
+#include "reg_sizes.S"
+
+#define JUMP
+
+#define arg1	%rdi
+#define arg2	%rsi
+#define state	arg1
+
+/* virtual registers used by submit_job_aes_inorder_x8 */
+#define next_job	%rdx
+#define earliest_job	%rcx
+#define zero		%r8
+#define returned_job	%rax	/* register that returns a value from func */
+
+.extern aes_cbc_submit_job_ooo_128x8
+.extern aes_cbc_submit_job_ooo_192x8
+.extern aes_cbc_submit_job_ooo_256x8
+.extern aes_cbc_flush_job_ooo_128x8
+.extern aes_cbc_flush_job_ooo_192x8
+.extern aes_cbc_flush_job_ooo_256x8
+.extern aes_cbc_flush_job_ooo_x8
+
+/*
+ * struct job_aes* aes_cbc_submit_job_inorder_x8(
+ *		struct aes_cbc_mb_mgr_inorder_x8 *state)
+ */
+
+.macro aes_cbc_submit_job_inorder_x8 key_len
+
+	sub	$8, %rsp	/* align stack for next calls */
+
+	mov	_next_job(state), DWORD(next_job)
+	lea	_jobs(state, next_job), arg2
+
+	.if \key_len == AES_KEYSIZE_128
+		call	aes_cbc_submit_job_ooo_128x8
+	.elseif \key_len == AES_KEYSIZE_192
+		call	aes_cbc_submit_job_ooo_192x8
+	.elseif \key_len == AES_KEYSIZE_256
+		call	aes_cbc_submit_job_ooo_256x8
+	.endif
+
+	mov	_earliest_job(state), DWORD(earliest_job)
+	cmp	$0, DWORD(earliest_job)
+	jl	.Lstate_was_empty\key_len
+
+	/* we have a valid earliest_job */
+
+	/* advance next_job */
+	mov	_next_job(state), DWORD(next_job)
+	add	$_JOB_AES_size, next_job
+#ifdef JUMP
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), next_job
+	jne	.Lskip1\key_len
+	xor	next_job, next_job
+.Lskip1\key_len:
+#else
+	xor	zero,zero
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), next_job
+	cmove	zero, next_job
+#endif
+	mov	DWORD(next_job), _next_job(state)
+
+	lea	_jobs(state, earliest_job), returned_job
+	cmp	next_job, earliest_job
+	je	.Lfull\key_len
+
+	/* not full */
+	cmpl	$STS_COMPLETED, _status(returned_job)
+	jne	.Lreturn_null\key_len
+
+	/* advance earliest_job */
+	add	$_JOB_AES_size, earliest_job
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), earliest_job
+#ifdef JUMP
+	jne	.Lskip2\key_len
+	xor	earliest_job, earliest_job
+.Lskip2\key_len:
+#else
+	cmove	zero, earliest_job
+#endif
+
+	add	$8, %rsp
+	mov	DWORD(earliest_job), _earliest_job(state)
+	ret
+
+.Lreturn_null\key_len:
+	add	$8, %rsp
+	xor	returned_job, returned_job
+	ret
+
+.Lfull\key_len:
+	cmpl	$STS_COMPLETED, _status(returned_job)
+	je	.Lcompleted\key_len
+	mov	earliest_job, (%rsp)
+.Lflush_loop\key_len:
+	.if \key_len == AES_KEYSIZE_128
+		call	aes_cbc_flush_job_ooo_128x8
+	.elseif \key_len == AES_KEYSIZE_192
+		call	aes_cbc_flush_job_ooo_192x8
+	.elseif \key_len == AES_KEYSIZE_256
+		call	aes_cbc_flush_job_ooo_256x8
+	.endif
+	/* state is still valid */
+	mov	(%rsp), earliest_job
+	cmpl	$STS_COMPLETED, _status(returned_job)
+	jne	.Lflush_loop\key_len
+	xor	zero,zero
+.Lcompleted\key_len:
+	/* advance earliest_job */
+	add	$_JOB_AES_size, earliest_job
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), earliest_job
+#ifdef JUMP
+	jne	.Lskip3\key_len
+	xor	earliest_job, earliest_job
+.Lskip3\key_len:
+#else
+	cmove	zero, earliest_job
+#endif
+
+	add	$8, %rsp
+	mov	DWORD(earliest_job), _earliest_job(state)
+	ret
+
+.Lstate_was_empty\key_len:
+	mov	_next_job(state), DWORD(next_job)
+	mov	DWORD(next_job), _earliest_job(state)
+
+	/* advance next_job */
+	add	$_JOB_AES_size, next_job
+#ifdef JUMP
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), next_job
+	jne	.Lskip4\key_len
+	xor	next_job, next_job
+.Lskip4\key_len:
+#else
+	xor	zero,zero
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), next_job
+	cmove	zero, next_job
+#endif
+	mov	DWORD(next_job), _next_job(state)
+
+	add	$8, %rsp
+	xor	returned_job, returned_job
+	ret
+.endm
+
+ENTRY(aes_cbc_submit_job_inorder_128x8)
+
+	aes_cbc_submit_job_inorder_x8 AES_KEYSIZE_128
+
+ENDPROC(aes_cbc_submit_job_inorder_128x8)
+
+ENTRY(aes_cbc_submit_job_inorder_192x8)
+
+	aes_cbc_submit_job_inorder_x8 AES_KEYSIZE_192
+
+ENDPROC(aes_cbc_submit_job_inorder_192x8)
+
+ENTRY(aes_cbc_submit_job_inorder_256x8)
+
+	aes_cbc_submit_job_inorder_x8 AES_KEYSIZE_256
+
+ENDPROC(aes_cbc_submit_job_inorder_256x8)
diff --git a/arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S b/arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S
new file mode 100644
index 0000000..8adcaf0
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S
@@ -0,0 +1,416 @@
+/*
+ *	AES CBC by8 multibuffer out-of-order scheduler optimization (x86_64)
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#include <linux/linkage.h>
+#include "mb_mgr_datastruct.S"
+#include "reg_sizes.S"
+
+#define arg1	%rdi
+#define arg2	%rsi
+#define state	arg1
+#define job	arg2
+
+/* virtual registers used by aes_cbc_submit_job_ooo_x8 */
+#define unused_lanes	%rax
+#define lane		%rdx
+#define	tmp1		%rcx
+#define	tmp2		%r8
+#define	tmp3		%r9
+#define len		tmp1
+
+#define good_lane	lane
+
+/* virtual registers used by aes_cbc_submit_job_inorder_x8 */
+#define new_job		%rdx
+#define earliest_job	%rcx
+#define minus1		%r8
+#define returned_job	%rax	/* register that returns a value from func */
+
+.section .rodata
+.align 16
+len_masks:
+	.octa 0x0000000000000000000000000000FFFF
+	.octa 0x000000000000000000000000FFFF0000
+	.octa 0x00000000000000000000FFFF00000000
+	.octa 0x0000000000000000FFFF000000000000
+	.octa 0x000000000000FFFF0000000000000000
+	.octa 0x00000000FFFF00000000000000000000
+	.octa 0x0000FFFF000000000000000000000000
+	.octa 0xFFFF0000000000000000000000000000
+
+dupw:
+	.octa 0x01000100010001000100010001000100
+
+one:	.quad  1
+two:	.quad  2
+three:	.quad  3
+four:	.quad  4
+five:	.quad  5
+six:	.quad  6
+seven:	.quad  7
+
+.text
+
+.extern aes_cbc_enc_128_x8
+.extern aes_cbc_enc_192_x8
+.extern aes_cbc_enc_256_x8
+
+/* arg1/state remains intact after call */
+/*
+ * void aes_cbc_submit_job_ooo_128x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state,
+ *	struct job_aes_cbc *job);
+ * void aes_cbc_submit_job_ooo_192x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state,
+ *	struct job_aes_cbc *job);
+ * void aes_cbc_submit_job_ooo_256x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state,
+ *	struct job_aes_cbc *job);
+ */
+
+.global aes_cbc_submit_job_ooo_128x8
+.global aes_cbc_submit_job_ooo_192x8
+.global aes_cbc_submit_job_ooo_256x8
+
+
+.macro aes_cbc_submit_job_ooo_x8 key_len
+
+	mov	_unused_lanes(state), unused_lanes
+	mov	unused_lanes, lane
+	and	$0xF, lane
+	shr	$4, unused_lanes
+
+	/* state->job_in_lane[lane] = job; */
+	mov	job, _job_in_lane(state, lane, 8)
+
+	/* state->lens[lane] = job->len / AES_BLOCK_SIZE; */
+	mov	_len(job), DWORD(len)
+	shr	$4, len
+	mov	WORD(len), _lens(state, lane, 2)
+
+	mov	_plaintext(job), tmp1
+	mov	_ciphertext(job), tmp2
+	mov	_keys(job), tmp3
+	movdqu	_IV(job), %xmm2
+	mov	tmp1, _args_in(state, lane, 8)
+	mov	tmp2, _args_out(state , lane, 8)
+	mov	tmp3, _args_keys(state, lane, 8)
+	shl	$4, lane
+	movdqa	%xmm2, _args_IV(state , lane)
+
+	movl	$STS_BEING_PROCESSED, _status(job)
+
+	mov	unused_lanes, _unused_lanes(state)
+	cmp	$0xF, unused_lanes
+	jne	.Lnot_enough_jobs\key_len
+
+	movdqa	_lens(state), %xmm0
+	phminposuw	%xmm0, %xmm1
+	/*
+	 * xmm1{15:0} = min value
+	 * xmm1{18:16} = index of min
+	 */
+
+	/*
+	 * arg1 = rdi = state = args (and it is not clobbered by routine)
+	 * arg2 = rsi = min len
+	 */
+	movd	%xmm1, arg2
+	and	$0xFFFF, arg2
+
+	/* subtract min len from lengths */
+	pshufb	dupw(%rip), %xmm1	/* duplicate words across all lanes */
+	psubw	%xmm1, %xmm0
+	movdqa	%xmm0, _lens(state)
+
+	/* need to align stack */
+	sub	$8, %rsp
+	.if \key_len == AES_KEYSIZE_128
+		call aes_cbc_enc_128_x8
+	.elseif \key_len == AES_KEYSIZE_192
+		call aes_cbc_enc_192_x8
+	.elseif \key_len == AES_KEYSIZE_256
+		call aes_cbc_enc_256_x8
+	.endif
+	add	$8, %rsp
+	/* arg1/state is still intact */
+
+	/* process completed jobs */
+	movdqa	_lens(state), %xmm0
+	phminposuw	%xmm0, %xmm1
+	/*
+	 * xmm1{15:0} = min value
+	 * xmm1{18:16} = index of min
+	 */
+
+	/*
+	 * at this point at least one len should be 0
+	 * so min value should be 0
+	 * and the index is the index of that lane [0...7]
+	 */
+	lea	len_masks(%rip), tmp3
+	mov	_unused_lanes(state), unused_lanes
+	movd	%xmm1, lane
+.Lcontinue_loop\key_len:
+	/* assert((lane & 0xFFFF) == 0) */
+	shr	$16, lane	/* lane is now index */
+	mov	_job_in_lane(state, lane, 8), job
+	movl	$STS_COMPLETED, _status(job)
+	movq	$0, _job_in_lane(state ,lane, 8)
+	shl	$4, unused_lanes
+	or	lane, unused_lanes
+	shl	$4, lane
+	movdqa	_args_IV(state,lane), %xmm2
+	movdqu	%xmm2, _IV(job)
+	por	(tmp3, lane), %xmm0
+
+	phminposuw	%xmm0, %xmm1
+	movd	%xmm1, lane
+	/* see if bits 15:0 are zero */
+	test	$0xFFFF, lane
+	jz	.Lcontinue_loop\key_len
+
+	/* done; save registers */
+	mov	unused_lanes, _unused_lanes(state)
+	/* don't need to save xmm0/lens */
+
+.Lnot_enough_jobs\key_len:
+	ret
+.endm
+
+ENTRY(aes_cbc_submit_job_ooo_128x8)
+
+	aes_cbc_submit_job_ooo_x8 AES_KEYSIZE_128
+
+ENDPROC(aes_cbc_submit_job_ooo_128x8)
+
+ENTRY(aes_cbc_submit_job_ooo_192x8)
+
+	aes_cbc_submit_job_ooo_x8 AES_KEYSIZE_192
+
+ENDPROC(aes_cbc_submit_job_ooo_192x8)
+
+ENTRY(aes_cbc_submit_job_ooo_256x8)
+
+	aes_cbc_submit_job_ooo_x8 AES_KEYSIZE_256
+
+ENDPROC(aes_cbc_submit_job_ooo_256x8)
+
+
+/* arg1/state remains intact after call */
+/*
+ * void aes_cbc_flush_job_ooo_128x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state)
+ * void aes_cbc_flush_job_ooo_192x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state)
+ * void aes_cbc_flush_job_ooo_256x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state)
+ */
+.global aes_cbc_flush_job_ooo_128x8
+.global aes_cbc_flush_job_ooo_192x8
+.global aes_cbc_flush_job_ooo_256x8
+
+.macro aes_cbc_flush_job_ooo_x8 key_len
+
+	mov	_unused_lanes(state), unused_lanes
+
+	/* if bit (32+3) is set, then all lanes are empty */
+	bt	$(32+3), unused_lanes
+	jc	.Lreturn\key_len
+
+	/* find a lane with a non-null job */
+	xor	good_lane, good_lane
+	cmpq	$0, (_job_in_lane+8*1)(state)
+	cmovne	one(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*2)(state)
+	cmovne	two(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*3)(state)
+	cmovne	three(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*4)(state)
+	cmovne	four(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*5)(state)
+	cmovne	five(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*6)(state)
+	cmovne	six(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*7)(state)
+	cmovne	seven(%rip), good_lane
+
+	/* copy good_lane to empty lanes */
+	mov	_args_in(state, good_lane, 8), tmp1
+	mov	_args_out(state, good_lane, 8), tmp2
+	mov	_args_keys(state, good_lane, 8), tmp3
+	shl	$4, good_lane
+	movdqa	_args_IV(state, good_lane), %xmm2
+
+	movdqa	_lens(state), %xmm0
+
+	I = 0
+
+.altmacro
+	.rept 8
+		cmpq	$0, (_job_in_lane + 8*I)(state)
+		.if \key_len == AES_KEYSIZE_128
+			cond_jump jne, .Lskip128_,%I
+		.elseif \key_len == AES_KEYSIZE_192
+			cond_jump jne, .Lskip192_,%I
+		.elseif \key_len == AES_KEYSIZE_256
+			cond_jump jne, .Lskip256_,%I
+		.endif
+		mov	tmp1, (_args_in + 8*I)(state)
+		mov	tmp2, (_args_out + 8*I)(state)
+		mov	tmp3, (_args_keys + 8*I)(state)
+		movdqa	%xmm2, (_args_IV + 16*I)(state)
+		por	(len_masks + 16*I)(%rip), %xmm0
+		.if \key_len == AES_KEYSIZE_128
+			LABEL .Lskip128_,%I
+		.elseif \key_len == AES_KEYSIZE_192
+			LABEL .Lskip192_,%I
+		.elseif \key_len == AES_KEYSIZE_256
+			LABEL .Lskip256_,%I
+		.endif
+		I = (I+1)
+	.endr
+.noaltmacro
+
+	phminposuw	%xmm0, %xmm1
+	/*
+	 * xmm1{15:0} = min value
+	 * xmm1{18:16} = index of min
+	 */
+
+	/*
+	 * arg1 = rdi = state = args (and it is not clobbered by routine)
+	 * arg2 = rsi = min len
+	 */
+	movd	%xmm1, arg2
+	and	$0xFFFF, arg2
+
+	/* subtract min len from lengths */
+	pshufb	dupw(%rip), %xmm1	/* duplicate words across all lanes */
+	psubw	%xmm1, %xmm0
+	movdqa	%xmm0, _lens(state)
+
+	/* need to align stack */
+	sub	$8, %rsp
+	.if \key_len == AES_KEYSIZE_128
+		call aes_cbc_enc_128_x8
+	.elseif \key_len == AES_KEYSIZE_192
+		call aes_cbc_enc_192_x8
+	.elseif \key_len == AES_KEYSIZE_256
+		call aes_cbc_enc_256_x8
+	.endif
+	add	$8, %rsp
+	/* arg1/state is still intact */
+
+	/* process completed jobs */
+	movdqa	_lens(state), %xmm0
+	phminposuw	%xmm0, %xmm1
+	/*
+	 * xmm1{15:0} = min value
+	 * xmm1{18:16} = index of min
+	 */
+
+	/*
+	 * at this point at least one len should be 0, so min value should be 0
+	 * and the index is the index of that lane [0...7]
+	 */
+	lea	len_masks(%rip), tmp3
+	mov	_unused_lanes(state), unused_lanes
+	movd	%xmm1, lane
+.Lcontinue_loop2\key_len:
+	/* assert((lane & 0xFFFF) == 0) */
+	shr	$16, lane	/* lane is now index */
+	mov	_job_in_lane(state, lane, 8), job
+	movl	$STS_COMPLETED, _status(job)
+	movq	$0, _job_in_lane(state, lane, 8)
+	shl	$4, unused_lanes
+	or	lane, unused_lanes
+	shl	$4, lane
+	movdqa	_args_IV(state, lane), %xmm2
+	movdqu	%xmm2, _IV(job)
+	por	(tmp3, lane), %xmm0
+
+	phminposuw	%xmm0, %xmm1
+	movd	%xmm1, lane
+	/* see if bits 15:0 are zero */
+	test	$0xFFFF, lane
+	jz	.Lcontinue_loop2\key_len
+
+	/* done; save registers */
+	mov	unused_lanes, _unused_lanes(state)
+	/* don't need to save xmm0/lens */
+.Lreturn\key_len:
+	ret
+.endm
+
+ENTRY(aes_cbc_flush_job_ooo_128x8)
+
+	aes_cbc_flush_job_ooo_x8 AES_KEYSIZE_128
+
+ENDPROC(aes_cbc_flush_job_ooo_128x8)
+
+ENTRY(aes_cbc_flush_job_ooo_192x8)
+
+	aes_cbc_flush_job_ooo_x8 AES_KEYSIZE_192
+
+ENDPROC(aes_cbc_flush_job_ooo_192x8)
+
+ENTRY(aes_cbc_flush_job_ooo_256x8)
+
+	aes_cbc_flush_job_ooo_x8 AES_KEYSIZE_256
+
+ENDPROC(aes_cbc_flush_job_ooo_256x8)
-- 
1.7.11.7


* [PATCH v2 4/5] crypto: AES CBC by8 encryption
       [not found] <cover.1446158797.git.tim.c.chen@linux.intel.com>
                   ` (3 preceding siblings ...)
  2015-10-29 22:21 ` [PATCH v2 3/5] crypto: AES CBC multi-buffer scheduler Tim Chen
@ 2015-10-29 22:21 ` Tim Chen
  2015-10-29 22:21 ` [PATCH v2 5/5] crypto: AES CBC multi-buffer glue code Tim Chen
  5 siblings, 0 replies; 15+ messages in thread
From: Tim Chen @ 2015-10-29 22:21 UTC (permalink / raw)
  To: Herbert Xu, H. Peter Anvin, David S.Miller, Stephan Mueller
  Cc: Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Tim Chen, Jussi Kivilinna, Stephan Mueller,
	linux-crypto, linux-kernel


This patch introduces the assembly routine to do a by8 AES CBC encryption
in support of the AES CBC multi-buffer implementation.

Encryption of 8 data streams with the same key size is done simultaneously.
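
For reference, each lane performs standard CBC chaining.  A rough scalar
sketch of a single lane follows (not part of the patch; xor_block() and
aes_encrypt_rounds() are hypothetical placeholders for what the assembly
does with PXOR and the 10/12/14-round AESENC sequence):

	/*
	 * Conceptual view of one lane; the assembly interleaves all
	 * 8 lanes across XMM registers instead of looping per lane.
	 */
	for (blk = 0; blk < nblocks; blk++) {
		/* XOR the next plaintext block with the running IV */
		xor_block(tmp, in + 16 * blk, iv);
		/* placeholder for the AESENC round sequence */
		aes_encrypt_rounds(key_schedule, tmp);
		memcpy(out + 16 * blk, tmp, 16);
		iv = out + 16 * blk;	/* ciphertext becomes the next IV */
	}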

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S | 774 ++++++++++++++++++++++++++++
 1 file changed, 774 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S

diff --git a/arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S b/arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S
new file mode 100644
index 0000000..eaffc28
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S
@@ -0,0 +1,774 @@
+/*
+ *	AES CBC by8 multibuffer optimization (x86_64)
+ *	This file implements 128/192/256 bit AES CBC encryption
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#include <linux/linkage.h>
+
+/* stack size needs to be an odd multiple of 8 for alignment */
+
+#define AES_KEYSIZE_128        16
+#define AES_KEYSIZE_192        24
+#define AES_KEYSIZE_256        32
+
+#define XMM_SAVE_SIZE	16*10
+#define GPR_SAVE_SIZE	8*9
+#define STACK_SIZE	(XMM_SAVE_SIZE + GPR_SAVE_SIZE)
+
+#define GPR_SAVE_REG	%rsp
+#define GPR_SAVE_AREA	%rsp + XMM_SAVE_SIZE
+#define LEN_AREA_OFFSET	XMM_SAVE_SIZE + 8*8
+#define LEN_AREA_REG	%rsp
+#define LEN_AREA	%rsp + XMM_SAVE_SIZE + 8*8
+
+#define IN_OFFSET	0
+#define OUT_OFFSET	8*8
+#define KEYS_OFFSET	16*8
+#define IV_OFFSET	24*8
+
+
+#define IDX	%rax
+#define TMP	%rbx
+#define ARG	%rdi
+#define LEN	%rsi
+
+#define KEYS0	%r14
+#define KEYS1	%r15
+#define KEYS2	%rbp
+#define KEYS3	%rdx
+#define KEYS4	%rcx
+#define KEYS5	%r8
+#define KEYS6	%r9
+#define KEYS7	%r10
+
+#define IN0	%r11
+#define IN2	%r12
+#define IN4	%r13
+#define IN6	LEN
+
+#define XDATA0	%xmm0
+#define XDATA1	%xmm1
+#define XDATA2	%xmm2
+#define XDATA3	%xmm3
+#define XDATA4	%xmm4
+#define XDATA5	%xmm5
+#define XDATA6	%xmm6
+#define XDATA7	%xmm7
+
+#define XKEY0_3	%xmm8
+#define XKEY1_4	%xmm9
+#define XKEY2_5	%xmm10
+#define XKEY3_6	%xmm11
+#define XKEY4_7	%xmm12
+#define XKEY5_8	%xmm13
+#define XKEY6_9	%xmm14
+#define XTMP	%xmm15
+
+#define	MOVDQ movdqu /* assume buffers not aligned */
+#define CONCAT(a, b)	a##b
+#define INPUT_REG_SUFX	1	/* IN */
+#define XDATA_REG_SUFX	2	/* XDAT */
+#define KEY_REG_SUFX	3	/* KEY */
+#define XMM_REG_SUFX	4	/* XMM */
+
+/*
+ * To avoid positional parameter errors while compiling,
+ * three registers need to be passed
+ */
+.text
+
+.macro pxor2 x, y, z
+	MOVDQ	(\x,\y), XTMP
+	pxor	XTMP, \z
+.endm
+
+.macro inreg n
+	.if (\n == 0)
+		reg_IN = IN0
+	.elseif (\n == 2)
+		reg_IN = IN2
+	.elseif (\n == 4)
+		reg_IN = IN4
+	.elseif (\n == 6)
+		reg_IN = IN6
+	.else
+		error "inreg: incorrect register number"
+	.endif
+.endm
+.macro xdatareg n
+	.if (\n == 0)
+		reg_XDAT = XDATA0
+	.elseif (\n == 1)
+		reg_XDAT = XDATA1
+	.elseif (\n == 2)
+		reg_XDAT = XDATA2
+	.elseif (\n == 3)
+		reg_XDAT = XDATA3
+	.elseif (\n == 4)
+		reg_XDAT = XDATA4
+	.elseif (\n == 5)
+		reg_XDAT = XDATA5
+	.elseif (\n == 6)
+		reg_XDAT = XDATA6
+	.elseif (\n == 7)
+		reg_XDAT = XDATA7
+	.endif
+.endm
+.macro xkeyreg n
+	.if (\n == 0)
+		reg_KEY = KEYS0
+	.elseif (\n == 1)
+		reg_KEY = KEYS1
+	.elseif (\n == 2)
+		reg_KEY = KEYS2
+	.elseif (\n == 3)
+		reg_KEY = KEYS3
+	.elseif (\n == 4)
+		reg_KEY = KEYS4
+	.elseif (\n == 5)
+		reg_KEY = KEYS5
+	.elseif (\n == 6)
+		reg_KEY = KEYS6
+	.elseif (\n == 7)
+		reg_KEY = KEYS7
+	.endif
+.endm
+.macro xmmreg n
+	.if (\n >= 0) && (\n < 16)
+		/* Valid register number */
+		reg_XMM = %xmm\n
+	.else
+		error "xmmreg: incorrect register number"
+	.endif
+.endm
+
+/*
+ * suffix - register suffix
+ * set up the register name using the loop index I
+ */
+.macro define_reg suffix
+.altmacro
+	.if (\suffix == INPUT_REG_SUFX)
+		inreg  %I
+	.elseif (\suffix == XDATA_REG_SUFX)
+		xdatareg  %I
+	.elseif (\suffix == KEY_REG_SUFX)
+		xkeyreg  %I
+	.elseif (\suffix == XMM_REG_SUFX)
+		xmmreg  %I
+	.else
+		error "define_reg: unknown register suffix"
+	.endif
+.noaltmacro
+.endm
+
+/*
+ * aes_cbc_enc_x8 key_len
+ * macro to encode data for 128bit, 192bit and 256bit keys
+ */
+
+.macro aes_cbc_enc_x8 key_len
+
+	sub	$STACK_SIZE, %rsp
+
+	mov	%rbx, (XMM_SAVE_SIZE + 8*0)(GPR_SAVE_REG)
+	mov	%rbp, (XMM_SAVE_SIZE + 8*3)(GPR_SAVE_REG)
+	mov	%r12, (XMM_SAVE_SIZE + 8*4)(GPR_SAVE_REG)
+	mov	%r13, (XMM_SAVE_SIZE + 8*5)(GPR_SAVE_REG)
+	mov	%r14, (XMM_SAVE_SIZE + 8*6)(GPR_SAVE_REG)
+	mov	%r15, (XMM_SAVE_SIZE + 8*7)(GPR_SAVE_REG)
+
+	mov	$16, IDX
+	shl	$4, LEN	/* LEN = LEN * 16 */
+	/* LEN is now in terms of bytes */
+	mov	LEN, (LEN_AREA_OFFSET)(LEN_AREA_REG)
+
+	/* Run through storing arguments in IN0,2,4,6 */
+	I = 0
+	.rept 4
+		define_reg	INPUT_REG_SUFX
+		mov	(IN_OFFSET + 8*I)(ARG), reg_IN
+		I = (I + 2)
+	.endr
+
+	/* load 1 .. 8 blocks of plain text into XDATA0..XDATA7 */
+	I = 0
+	.rept 4
+		mov		(IN_OFFSET + 8*(I+1))(ARG), TMP
+		define_reg	INPUT_REG_SUFX
+		define_reg	XDATA_REG_SUFX
+		/* load first block of plain text */
+		MOVDQ		(reg_IN), reg_XDAT
+		I = (I + 1)
+		define_reg	XDATA_REG_SUFX
+		/* load next block of plain text */
+		MOVDQ		(TMP), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* Run through XDATA0 .. XDATA7 to perform plaintext XOR IV */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		pxor	(IV_OFFSET + 16*I)(ARG), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	I = 0
+	.rept 8
+		define_reg	KEY_REG_SUFX
+		mov	(KEYS_OFFSET + 8*I)(ARG), reg_KEY
+		I = (I + 1)
+	.endr
+
+	I = 0
+	/* 0..7 ARK */
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		pxor	16*0(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	I = 0
+		/* 1. ENC */
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*1)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	movdqa		16*3(KEYS0), XKEY0_3	/* load round 3 key */
+
+	I = 0
+		/* 2. ENC */
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*2)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	movdqa		16*4(KEYS1), XKEY1_4	/* load round 4 key */
+
+		/* 3. ENC */
+	aesenc		XKEY0_3, XDATA0
+	I = 1
+	.rept 7
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*3)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/*
+	 * FIXME:
+	 * why can't we reorder encrypt DATA0..DATA7 and load 5th round?
+	 */
+	aesenc		(16*4)(KEYS0), XDATA0		/* 4. ENC */
+	movdqa		16*5(KEYS2), XKEY2_5		/* load round 5 key */
+	aesenc		XKEY1_4, XDATA1			/* 4. ENC */
+
+	I = 2
+	.rept 6
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*4)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	aesenc		(16*5)(KEYS0), XDATA0		/* 5. ENC */
+	aesenc		(16*5)(KEYS1), XDATA1		/* 5. ENC */
+	movdqa		16*6(KEYS3), XKEY3_6		/* load round 6 key */
+	aesenc		XKEY2_5, XDATA2			/* 5. ENC */
+	I = 3
+	.rept 5
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*5)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	aesenc		(16*6)(KEYS0), XDATA0		/* 6. ENC */
+	aesenc		(16*6)(KEYS1), XDATA1		/* 6. ENC */
+	aesenc		(16*6)(KEYS2), XDATA2		/* 6. ENC */
+	movdqa		16*7(KEYS4), XKEY4_7		/* load round 7 key */
+	aesenc		XKEY3_6, XDATA3			/* 6. ENC */
+
+	I = 4
+	.rept 4
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*6)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	I = 0
+	.rept 4
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*7)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	movdqa		16*8(KEYS5), XKEY5_8		/* load round 8 key */
+	aesenc		XKEY4_7, XDATA4			/* 7. ENC */
+	I = 5
+	.rept 3
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*7)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	I = 0
+	.rept 5
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*8)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	movdqa		16*9(KEYS6), XKEY6_9		/* load round 9 key */
+	aesenc		XKEY5_8, XDATA5			/* 8. ENC */
+	aesenc		16*8(KEYS6), XDATA6		/* 8. ENC */
+	aesenc		16*8(KEYS7), XDATA7		/* 8. ENC */
+
+	I = 0
+	.rept 6
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*9)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	mov		(OUT_OFFSET + 8*0)(ARG), TMP
+	aesenc		XKEY6_9, XDATA6			/* 9. ENC */
+	aesenc		16*9(KEYS7), XDATA7		/* 9. ENC */
+
+		/* 10. ENC (last for 128bit keys) */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		.if (\key_len == AES_KEYSIZE_128)
+			aesenclast	(16*10)(reg_KEY), reg_XDAT
+		.else
+			aesenc	(16*10)(reg_KEY), reg_XDAT
+		.endif
+		I = (I + 1)
+	.endr
+
+	.if (\key_len != AES_KEYSIZE_128)
+		/* 11. ENC */
+		I = 0
+		.rept 8
+			define_reg	XDATA_REG_SUFX
+			define_reg	KEY_REG_SUFX
+			aesenc	(16*11)(reg_KEY), reg_XDAT
+			I = (I + 1)
+		.endr
+
+		/* 12. ENC (last for 192bit key) */
+		I = 0
+		.rept 8
+			define_reg	XDATA_REG_SUFX
+			define_reg	KEY_REG_SUFX
+			.if (\key_len == AES_KEYSIZE_192)
+				aesenclast	(16*12)(reg_KEY), reg_XDAT
+			.else
+				aesenc		(16*12)(reg_KEY), reg_XDAT
+			.endif
+			I = (I + 1)
+		.endr
+
+		/* for 256bit, two more rounds */
+		.if \key_len == AES_KEYSIZE_256
+
+			/* 13. ENC */
+			I = 0
+			.rept 8
+				define_reg	XDATA_REG_SUFX
+				define_reg	KEY_REG_SUFX
+				aesenc	(16*13)(reg_KEY), reg_XDAT
+				I = (I + 1)
+			.endr
+
+			/* 14. ENC last encode for 256bit key */
+			I = 0
+			.rept 8
+				define_reg	XDATA_REG_SUFX
+				define_reg	KEY_REG_SUFX
+				aesenclast	(16*14)(reg_KEY), reg_XDAT
+				I = (I + 1)
+			.endr
+		.endif
+
+	.endif
+
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		MOVDQ	reg_XDAT, (TMP)	/* write back ciphertext */
+		I = (I + 1)
+		.if (I < 8)
+			mov	(OUT_OFFSET + 8*I)(ARG), TMP
+		.endif
+	.endr
+
+	cmp		IDX, LEN_AREA_OFFSET(LEN_AREA_REG)
+	je		.Ldone\key_len
+
+.Lmain_loop\key_len:
+	mov		(IN_OFFSET + 8*1)(ARG), TMP
+	pxor2		IN0, IDX, XDATA0	/* next block of plain text */
+	pxor2		TMP, IDX, XDATA1	/* next block of plain text */
+
+	mov		(IN_OFFSET + 8*3)(ARG), TMP
+	pxor2		IN2, IDX, XDATA2	/* next block of plain text */
+	pxor2		TMP, IDX, XDATA3	/* next block of plain text */
+
+	mov		(IN_OFFSET + 8*5)(ARG), TMP
+	pxor2		IN4, IDX, XDATA4	/* next block of plain text */
+	pxor2		TMP, IDX, XDATA5	/* next block of plain text */
+
+	mov		(IN_OFFSET + 8*7)(ARG), TMP
+	pxor2		IN6, IDX, XDATA6	/* next block of plain text */
+	pxor2		TMP, IDX, XDATA7	/* next block of plain text */
+
+	/* 0. ARK */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		pxor	16*0(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 1. ENC */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*1(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 2. ENC */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*2(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 3. ENC */
+	aesenc		XKEY0_3, XDATA0		/* 3. ENC */
+	I = 1
+	.rept 7
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*3(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 4. ENC */
+	aesenc		16*4(KEYS0), XDATA0	/* 4. ENC */
+	aesenc		XKEY1_4, XDATA1		/* 4. ENC */
+	I = 2
+	.rept 6
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*4(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 5. ENC */
+	aesenc		16*5(KEYS0), XDATA0	/* 5. ENC */
+	aesenc		16*5(KEYS1), XDATA1	/* 5. ENC */
+	aesenc		XKEY2_5, XDATA2		/* 5. ENC */
+
+	I = 3
+	.rept 5
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*5(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 6. ENC */
+	I = 0
+	.rept 3
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*6(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	aesenc		XKEY3_6, XDATA3		/* 6. ENC */
+	I = 4
+	.rept 4
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*6(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 7. ENC */
+	I = 0
+	.rept 4
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*7(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	aesenc		XKEY4_7, XDATA4		/* 7. ENC */
+	I = 5
+	.rept 3
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*7(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 8. ENC */
+	I = 0
+	.rept 5
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*8(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	aesenc		XKEY5_8, XDATA5		/* 8. ENC */
+	aesenc		16*8(KEYS6), XDATA6	/* 8. ENC */
+	aesenc		16*8(KEYS7), XDATA7	/* 8. ENC */
+
+	/* 9. ENC */
+	I = 0
+	.rept 6
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*9(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	mov		(OUT_OFFSET + 8*0)(ARG), TMP
+	aesenc		XKEY6_9, XDATA6		/* 9. ENC */
+	aesenc		16*9(KEYS7), XDATA7	/* 9. ENC */
+
+	/* 10. ENC (last for 128 bit key) */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		.if (\key_len == AES_KEYSIZE_128)
+			aesenclast	16*10(reg_KEY), reg_XDAT
+		.else
+			aesenc	16*10(reg_KEY), reg_XDAT
+		.endif
+		I = (I + 1)
+	.endr
+
+	.if (\key_len != AES_KEYSIZE_128)
+		/* 11. ENC */
+		I = 0
+		.rept 8
+			define_reg	XDATA_REG_SUFX
+			define_reg	KEY_REG_SUFX
+			aesenc	16*11(reg_KEY), reg_XDAT
+			I = (I + 1)
+		.endr
+
+		/* 12. last ENC for 192bit key */
+		I = 0
+		.rept 8
+			define_reg	XDATA_REG_SUFX
+			define_reg	KEY_REG_SUFX
+			.if (\key_len == AES_KEYSIZE_192)
+				aesenclast	16*12(reg_KEY), reg_XDAT
+			.else
+				aesenc		16*12(reg_KEY), reg_XDAT
+			.endif
+			I = (I + 1)
+		.endr
+
+		.if \key_len == AES_KEYSIZE_256
+			/* for 256bit, two more rounds */
+			/* 13. ENC */
+			I = 0
+			.rept 8
+				define_reg	XDATA_REG_SUFX
+				define_reg	KEY_REG_SUFX
+				aesenc	16*13(reg_KEY), reg_XDAT
+				I = (I + 1)
+			.endr
+
+			/* 14. last ENC for 256bit key */
+			I = 0
+			.rept 8
+				define_reg	XDATA_REG_SUFX
+				define_reg	KEY_REG_SUFX
+				aesenclast	16*14(reg_KEY), reg_XDAT
+				I = (I + 1)
+			.endr
+		.endif
+	.endif
+
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		/* write back cipher text */
+		MOVDQ		reg_XDAT, (TMP , IDX)
+		I = (I + 1)
+		.if (I < 8)
+			mov		(OUT_OFFSET + 8*I)(ARG), TMP
+		.endif
+	.endr
+
+	add	$16, IDX
+	cmp	IDX, LEN_AREA_OFFSET(LEN_AREA_REG)
+	jne	.Lmain_loop\key_len
+
+.Ldone\key_len:
+	/* update IV */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		movdqa	reg_XDAT, (IV_OFFSET + 16*I)(ARG)
+		I = (I + 1)
+	.endr
+
+	/* update IN and OUT */
+	movd	LEN_AREA_OFFSET(LEN_AREA_REG), %xmm0
+	pshufd	$0x44, %xmm0, %xmm0
+
+	I = 1
+	.rept 4
+		define_reg	XMM_REG_SUFX
+		movdqa	(IN_OFFSET + 16*(I-1))(ARG), reg_XMM
+		I = (I + 1)
+	.endr
+
+	paddq	%xmm0, %xmm1
+	paddq	%xmm0, %xmm2
+	paddq	%xmm0, %xmm3
+	paddq	%xmm0, %xmm4
+
+	I = 5
+	.rept 4
+		define_reg	XMM_REG_SUFX
+		movdqa	(OUT_OFFSET + 16*(I-5))(ARG), reg_XMM
+		I = (I + 1)
+	.endr
+
+	I = 1
+	.rept 4
+		define_reg	XMM_REG_SUFX
+		movdqa	reg_XMM, (IN_OFFSET + 16*(I-1))(ARG)
+		I = (I + 1)
+	.endr
+
+	paddq	%xmm0, %xmm5
+	paddq	%xmm0, %xmm6
+	paddq	%xmm0, %xmm7
+	paddq	%xmm0, %xmm8
+
+	I = 5
+	.rept 4
+		define_reg	XMM_REG_SUFX
+		movdqa	reg_XMM, (OUT_OFFSET + 16*(I-5))(ARG)
+		I = (I + 1)
+	.endr
+
+	mov	(XMM_SAVE_SIZE + 8*0)(GPR_SAVE_REG), %rbx
+	mov	(XMM_SAVE_SIZE + 8*3)(GPR_SAVE_REG), %rbp
+	mov	(XMM_SAVE_SIZE + 8*4)(GPR_SAVE_REG), %r12
+	mov	(XMM_SAVE_SIZE + 8*5)(GPR_SAVE_REG), %r13
+	mov	(XMM_SAVE_SIZE + 8*6)(GPR_SAVE_REG), %r14
+	mov	(XMM_SAVE_SIZE + 8*7)(GPR_SAVE_REG), %r15
+
+	add	$STACK_SIZE, %rsp
+
+	ret
+.endm
+
+/*
+ * AES CBC encryption routine supporting 128/192/256 bit keys
+ *
+ * void aes_cbc_enc_128_x8(struct aes_cbc_args_x8 *args, u64 len);
+ * arg 1: rdi : addr of aes_cbc_args_x8 structure
+ * arg 2: rsi : len (in units of 16-byte blocks)
+ * void aes_cbc_enc_192_x8(struct aes_cbc_args_x8 *args, u64 len);
+ * arg 1: rdi : addr of aes_cbc_args_x8 structure
+ * arg 2: rsi : len (in units of 16-byte blocks)
+ * void aes_cbc_enc_256_x8(struct aes_cbc_args_x8 *args, u64 len);
+ * arg 1: rdi : addr of aes_cbc_args_x8 structure
+ * arg 2: rsi : len (in units of 16-byte blocks)
+ */
+
+ENTRY(aes_cbc_enc_128_x8)
+
+	aes_cbc_enc_x8 AES_KEYSIZE_128
+
+ENDPROC(aes_cbc_enc_128_x8)
+
+ENTRY(aes_cbc_enc_192_x8)
+
+	aes_cbc_enc_x8 AES_KEYSIZE_192
+
+ENDPROC(aes_cbc_enc_192_x8)
+
+ENTRY(aes_cbc_enc_256_x8)
+
+	aes_cbc_enc_x8 AES_KEYSIZE_256
+
+ENDPROC(aes_cbc_enc_256_x8)
-- 
1.7.11.7


* [PATCH v2 5/5] crypto: AES CBC multi-buffer glue code
       [not found] <cover.1446158797.git.tim.c.chen@linux.intel.com>
                   ` (4 preceding siblings ...)
  2015-10-29 22:21 ` [PATCH v2 4/5] crypto: AES CBC by8 encryption Tim Chen
@ 2015-10-29 22:21 ` Tim Chen
  5 siblings, 0 replies; 15+ messages in thread
From: Tim Chen @ 2015-10-29 22:21 UTC (permalink / raw)
  To: Herbert Xu, H. Peter Anvin, David S.Miller, Stephan Mueller
  Cc: Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Tim Chen, Jussi Kivilinna, Stephan Mueller,
	linux-crypto, linux-kernel


This patch introduces the multi-buffer job manager which is responsible
for submitting scatter-gather buffers from several AES CBC jobs
to the multi-buffer algorithm. The glue code interfaces with the
underlying algorithm that handles 8 data streams of AES CBC encryption
in parallel. AES key expansion and CBC decryption requests are performed
in a manner similar to the existing AESNI Intel glue driver.

The outline of the algorithm for AES CBC encryption requests is
sketched below:

Any driver requesting the crypto service will place an async crypto
request on the workqueue.  The multi-buffer crypto daemon will pull
AES CBC encryption requests from the work queue and put each request
in an empty data lane for multi-buffer crypto computation.  When all
the empty lanes are filled, computation will commence on the jobs in
parallel and the job with the shortest remaining buffer will get
completed and be returned.  To prevent a prolonged stall when no new
jobs arrive, we will flush the queued jobs after a maximum allowable
delay has elapsed.
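
In code, the submit path boils down to the following (condensed from
mb_aes_cbc_encrypt() later in this patch; error paths and the cpu
sanity check are omitted):

	cbc_ctx_init(rctx, nbytes, CBC_ENCRYPT);
	cbc_mb_add_list(rctx, cstate);		/* tag arrival, arm flusher */
	blkcipher_walk_init(&rctx->walk, dst, src, nbytes);
	blkcipher_walk_virt(desc, &rctx->walk);

	kernel_fpu_begin();
	ret_rctx = aes_cbc_ctx_mgr_submit(key_mgr, rctx);  /* fill a lane */
	kernel_fpu_end();

	if (!ret_rctx) {
		/* job queued, lanes not full yet, nothing completed */
		notify_callback(rctx, cstate, -EINPROGRESS);
	} else {
		/* some job completed; push its remaining fragments */
		ret = cbc_encrypt_finish(&ret_rctx, cstate, false);
		cbc_complete_job(ret_rctx, cstate, ret);
	}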

To accommodate the fragmented nature of scatter-gather buffers, we keep
submitting the next buffer fragment of a job for multi-buffer
computation until the job is completed and no more buffer fragments
remain.  At that time we pull a new job to fill the now empty data slot.
We also check with the multi-buffer scheduler for other completed jobs,
to prevent extraneous delay in returning any completed job.
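
The multi-part handling is condensed below from cbc_encrypt_finish()
later in this patch (flush handling and error checks trimmed):

	while (!(rctx->flag & CBC_DONE)) {
		nbytes = rctx->walk.nbytes & (AES_BLOCK_SIZE - 1);
		blkcipher_walk_done(&rctx->desc, &rctx->walk, nbytes);
		if (!rctx->walk.nbytes) {
			rctx->flag = CBC_DONE;	/* request fully encrypted */
			break;
		}
		rctx->seq_iv = job->iv;		/* carry IV to next fragment */
		kernel_fpu_begin();
		rctx = aes_cbc_ctx_mgr_submit(key_mgr, rctx);
		kernel_fpu_end();
		if (!rctx)
			break;			/* nothing else completed yet */
	}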

This multi-buffer algorithm should be used for cases where we get at
least 8 streams of crypto jobs submitted at a reasonably high rate.
For a low crypto job submission rate and a low number of data streams,
this algorithm will not be beneficial.  The reason is that at a low
rate we end up flushing jobs before the data lanes fill, instead of
processing them with all the data lanes full.  We then miss the benefit
of parallel computation while adding delay to the processing of each
crypto job.  Some tuning of the maximum latency parameter may be needed
to get the best performance.
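
In this implementation the maximum latency knob is the FLUSH_INTERVAL
constant in aes_cbc_mb.c:

	#define FLUSH_INTERVAL 500 /* in usec */

Requests whose arrival tag is older than this are flushed by
cbc_mb_flusher() even if the data lanes are not full.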

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/crypto/Makefile                |   1 +
 arch/x86/crypto/aes-cbc-mb/Makefile     |  22 +
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c | 812 ++++++++++++++++++++++++++++++++
 3 files changed, 835 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/Makefile
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index b9b912a..000db49 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -33,6 +33,7 @@ obj-$(CONFIG_CRYPTO_CRC32_PCLMUL) += crc32-pclmul.o
 obj-$(CONFIG_CRYPTO_SHA256_SSSE3) += sha256-ssse3.o
 obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
 obj-$(CONFIG_CRYPTO_CRCT10DIF_PCLMUL) += crct10dif-pclmul.o
+obj-$(CONFIG_CRYPTO_AES_CBC_MB) += aes-cbc-mb/
 obj-$(CONFIG_CRYPTO_POLY1305_X86_64) += poly1305-x86_64.o
 
 # These modules require assembler to support AVX.
diff --git a/arch/x86/crypto/aes-cbc-mb/Makefile b/arch/x86/crypto/aes-cbc-mb/Makefile
new file mode 100644
index 0000000..b642bd8
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/Makefile
@@ -0,0 +1,22 @@
+#
+# Arch-specific CryptoAPI modules.
+#
+
+avx_supported := $(call as-instr,vpxor %xmm0$(comma)%xmm0$(comma)%xmm0,yes,no)
+
+# we need decryption and key expansion routine symbols
+# if either AES_NI_INTEL or AES_CBC_MB is a module
+
+ifeq ($(CONFIG_CRYPTO_AES_NI_INTEL),m)
+	dec_support := ../aesni-intel_asm.o
+endif
+ifeq ($(CONFIG_CRYPTO_AES_CBC_MB),m)
+	dec_support := ../aesni-intel_asm.o
+endif
+
+ifeq ($(avx_supported),yes)
+	obj-$(CONFIG_CRYPTO_AES_CBC_MB) += aes-cbc-mb.o
+	aes-cbc-mb-y := $(dec_support) aes_cbc_mb.o aes_mb_mgr_init.o \
+			mb_mgr_inorder_x8_asm.o mb_mgr_ooo_x8_asm.o \
+			aes_cbc_enc_x8.o
+endif
diff --git a/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c
new file mode 100644
index 0000000..6a03712
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c
@@ -0,0 +1,812 @@
+/*
+ *	Multi buffer AES CBC algorithm glue code
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ */
+
+#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
+
+#include <linux/hardirq.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+#include <linux/err.h>
+#include <crypto/algapi.h>
+#include <crypto/aes.h>
+#include <crypto/internal/hash.h>
+#include <crypto/mcryptd.h>
+#include <crypto/crypto_wq.h>
+#include <crypto/ctr.h>
+#include <crypto/b128ops.h>
+#include <crypto/lrw.h>
+#include <crypto/xts.h>
+#include <asm/cpu_device_id.h>
+#include <asm/fpu/api.h>
+#include <asm/crypto/aes.h>
+#include <crypto/ablk_helper.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/aead.h>
+#include <linux/workqueue.h>
+#include <linux/spinlock.h>
+#ifdef CONFIG_X86_64
+#include <asm/crypto/glue_helper.h>
+#endif
+#include <asm/simd.h>
+
+#include "aes_cbc_mb_ctx.h"
+
+#define AESNI_ALIGN	(16)
+#define AES_BLOCK_MASK	(~(AES_BLOCK_SIZE-1))
+#define FLUSH_INTERVAL 500 /* in usec */
+
+static struct mcryptd_alg_state cbc_mb_alg_state;
+
+struct aes_cbc_mb_ctx {
+	struct mcryptd_ablkcipher *mcryptd_tfm;
+};
+
+static inline struct aes_cbc_mb_mgr_inorder_x8
+	*get_key_mgr(void *mgr, u32 key_len)
+{
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+
+	key_mgr = (struct aes_cbc_mb_mgr_inorder_x8 *) mgr;
+	/* valid keysize is guaranteed to be one of 128/192/256 */
+	switch (key_len) {
+	case AES_KEYSIZE_256:
+		return key_mgr+2;
+	case AES_KEYSIZE_192:
+		return key_mgr+1;
+	case AES_KEYSIZE_128:
+	default:
+		return key_mgr;
+	}
+}
+
+/* support code from arch/x86/crypto/aesni-intel_glue.c */
+static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
+{
+	unsigned long addr = (unsigned long)raw_ctx;
+	unsigned long align = AESNI_ALIGN;
+	struct crypto_aes_ctx *ret_ctx;
+
+	if (align <= crypto_tfm_ctx_alignment())
+		align = 1;
+	ret_ctx = (struct crypto_aes_ctx *)ALIGN(addr, align);
+	return ret_ctx;
+}
+
+static struct job_aes_cbc *aes_cbc_job_mgr_submit(
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr, u32 key_len)
+{
+	/* valid keysize is guaranteed to be one of 128/192/256 */
+	switch (key_len) {
+	case AES_KEYSIZE_256:
+		return aes_cbc_submit_job_inorder_256x8(key_mgr);
+	case AES_KEYSIZE_192:
+		return aes_cbc_submit_job_inorder_192x8(key_mgr);
+	case AES_KEYSIZE_128:
+	default:
+		return aes_cbc_submit_job_inorder_128x8(key_mgr);
+	}
+}
+
+static inline struct ablkcipher_request *cast_mcryptd_ctx_to_req(
+	struct mcryptd_blkcipher_request_ctx *ctx)
+{
+	return container_of((void *) ctx, struct ablkcipher_request, __ctx);
+}
+
+/*
+ * Interface functions to the synchronous algorithm with access
+ * to the underlying multibuffer AES CBC implementation
+ */
+
+/* Map the error in request context appropriately */
+
+static struct mcryptd_blkcipher_request_ctx *process_job_sts(
+		struct job_aes_cbc *job)
+{
+	struct mcryptd_blkcipher_request_ctx *ret_rctx;
+
+	ret_rctx = (struct mcryptd_blkcipher_request_ctx *)job->user_data;
+
+	switch (job->status) {
+	default:
+	case STS_COMPLETED:
+		ret_rctx->error = CBC_CTX_ERROR_NONE;
+		break;
+	case STS_BEING_PROCESSED:
+		ret_rctx->error = -EINPROGRESS;
+		break;
+	case STS_INTERNAL_ERROR:
+	case STS_ERROR:
+	case STS_UNKNOWN:
+		/* mark it done with error */
+		ret_rctx->flag = CBC_DONE;
+		ret_rctx->error = -EIO;
+		break;
+	}
+	return ret_rctx;
+}
+
+static struct mcryptd_blkcipher_request_ctx
+	*aes_cbc_ctx_mgr_flush(struct aes_cbc_mb_mgr_inorder_x8 *key_mgr)
+{
+	struct job_aes_cbc *job;
+
+	job = aes_cbc_flush_job_inorder_x8(key_mgr);
+	if (job)
+		return process_job_sts(job);
+	return NULL;
+}
+
+static struct mcryptd_blkcipher_request_ctx *aes_cbc_ctx_mgr_submit(
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr,
+	struct mcryptd_blkcipher_request_ctx *rctx
+	)
+{
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct job_aes_cbc *job;
+
+	mb_key_ctx = aes_ctx(crypto_blkcipher_ctx(rctx->desc.tfm));
+
+	/* get job, fill the details and submit */
+	job = aes_cbc_get_next_job_inorder_x8(key_mgr);
+	job->plaintext = rctx->walk.src.virt.addr;
+	job->ciphertext = rctx->walk.dst.virt.addr;
+	if (rctx->flag & CBC_START) {
+		/* fresh sequence, copy iv from walk buffer initially */
+		memcpy(&job->iv, rctx->walk.iv, rctx->walk.ivsize);
+		rctx->flag &= ~CBC_START;
+	} else {
+		/* For a multi-part sequence, set up the updated IV */
+		job->iv = rctx->seq_iv;
+	}
+
+	job->keys = (u128 *)mb_key_ctx->key_enc;
+	/* set up updated length from the walk buffers */
+	job->len = rctx->walk.nbytes & AES_BLOCK_MASK;
+	/* stow away the req_ctx so we can later check */
+	job->user_data = (void *)rctx;
+	job->key_len = mb_key_ctx->key_length;
+	rctx->job = job;
+	rctx->error = CBC_CTX_ERROR_NONE;
+	job = aes_cbc_job_mgr_submit(key_mgr, mb_key_ctx->key_length);
+	if (job) {
+		/* we already have the request context stashed in job */
+		return process_job_sts(job);
+	}
+	return NULL;
+}
+
+static int cbc_encrypt_finish(struct mcryptd_blkcipher_request_ctx **ret_rctx,
+			      struct mcryptd_alg_cstate *cstate,
+			      bool flush)
+{
+	struct mcryptd_blkcipher_request_ctx *rctx = *ret_rctx;
+	struct job_aes_cbc *job;
+	int err = 0;
+	unsigned int nbytes;
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+
+
+	mb_key_ctx = aes_ctx(crypto_blkcipher_ctx(rctx->desc.tfm));
+	key_mgr = get_key_mgr(cstate->mgr, mb_key_ctx->key_length);
+
+	/*
+	 * Some low-level mb job is done. Keep going till done.
+	 * This loop may process multiple multi part requests
+	 */
+	while (!(rctx->flag & CBC_DONE)) {
+		/* update bytes and check for more work */
+		nbytes = rctx->walk.nbytes & (AES_BLOCK_SIZE - 1);
+		err = blkcipher_walk_done(&rctx->desc, &rctx->walk, nbytes);
+		if (err) {
+			/* done with error */
+			rctx->flag = CBC_DONE;
+			rctx->error = err;
+			goto out;
+		}
+		nbytes = rctx->walk.nbytes;
+		if (!nbytes) {
+			/* done with successful encryption */
+			rctx->flag = CBC_DONE;
+			goto out;
+		}
+		/*
+		 * This is a multi-part job and there is more work to do.
+		 * From the completed job, copy the running sequence of IV
+		 * and start the next one in sequence.
+		 */
+		job = (struct job_aes_cbc *)rctx->job;
+		rctx->seq_iv = job->iv;	/* copy the running sequence of iv */
+		kernel_fpu_begin();
+		rctx = aes_cbc_ctx_mgr_submit(key_mgr, rctx);
+		if (!rctx) {
+			/* multi part job submitted, no completed job. */
+			if (flush)
+				rctx = aes_cbc_ctx_mgr_flush(key_mgr);
+		}
+		kernel_fpu_end();
+		if (!rctx) {
+			/* no completions yet to process further */
+			break;
+		}
+		/* some job finished when we submitted multi part job. */
+		if (rctx->error) {
+			/*
+			 * some request completed with error
+			 * bail out of chain processing
+			 */
+			err = rctx->error;
+			break;
+		}
+		/* we have a valid request context to process further */
+	}
+	/* encrypted text is expected to be in out buffer already */
+out:
+	/* We came out multi-part processing for some request */
+	*ret_rctx = rctx;
+	return err;
+}
+
+/* notify the caller of progress ; request still stays in queue */
+
+static void notify_callback(struct mcryptd_blkcipher_request_ctx *rctx,
+			    struct mcryptd_alg_cstate *cstate,
+			    int err)
+{
+	struct ablkcipher_request *req = cast_mcryptd_ctx_to_req(rctx);
+
+	if (irqs_disabled())
+		rctx->complete(&req->base, err);
+	else {
+		local_bh_disable();
+		rctx->complete(&req->base, err);
+		local_bh_enable();
+	}
+}
+
+/* A request that completed is dequeued and the caller is notified */
+
+static void completion_callback(struct mcryptd_blkcipher_request_ctx *rctx,
+			    struct mcryptd_alg_cstate *cstate,
+			    int err)
+{
+	struct ablkcipher_request *req = cast_mcryptd_ctx_to_req(rctx);
+
+	/* remove from work list and invoke completion callback */
+	spin_lock(&cstate->work_lock);
+	list_del(&rctx->waiter);
+	spin_unlock(&cstate->work_lock);
+
+	if (irqs_disabled())
+		rctx->complete(&req->base, err);
+	else {
+		local_bh_disable();
+		rctx->complete(&req->base, err);
+		local_bh_enable();
+	}
+}
+
+/* complete a blkcipher request and process any further completions */
+
+static void cbc_complete_job(struct mcryptd_blkcipher_request_ctx *rctx,
+			    struct mcryptd_alg_cstate *cstate,
+			    int err)
+{
+	struct job_aes_cbc *job;
+	int ret;
+	struct mcryptd_blkcipher_request_ctx *sctx;
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+
+	completion_callback(rctx, cstate, err);
+
+	mb_key_ctx = aes_ctx(crypto_blkcipher_ctx(rctx->desc.tfm));
+	key_mgr = get_key_mgr(cstate->mgr, mb_key_ctx->key_length);
+
+	/* check for more completed jobs and process */
+	while ((job = aes_cbc_get_completed_job_inorder_x8(key_mgr)) != NULL) {
+		sctx = process_job_sts(job);
+		if (WARN_ON(sctx == NULL))
+			return;
+		ret = sctx->error;
+		if (!ret) {
+			/* further process it */
+			ret = cbc_encrypt_finish(&sctx, cstate, false);
+		}
+		if (sctx)
+			completion_callback(sctx, cstate, ret);
+	}
+}
+
+/* Add request to the waiter list. It stays in queue until completion */
+
+static void cbc_mb_add_list(struct mcryptd_blkcipher_request_ctx *rctx,
+				struct mcryptd_alg_cstate *cstate)
+{
+	unsigned long next_flush;
+	unsigned long delay = usecs_to_jiffies(FLUSH_INTERVAL);
+
+	/* initialize tag */
+	rctx->tag.arrival = jiffies;    /* tag the arrival time */
+	rctx->tag.seq_num = cstate->next_seq_num++;
+	next_flush = rctx->tag.arrival + delay;
+	rctx->tag.expire = next_flush;
+
+	spin_lock(&cstate->work_lock);
+	list_add_tail(&rctx->waiter, &cstate->work_list);
+	spin_unlock(&cstate->work_lock);
+
+	mcryptd_arm_flusher(cstate, delay);
+}
+
+static int mb_aes_cbc_encrypt(struct blkcipher_desc *desc,
+		       struct scatterlist *dst, struct scatterlist *src,
+		       unsigned int nbytes)
+{
+	struct mcryptd_blkcipher_request_ctx *rctx =
+		container_of(desc, struct mcryptd_blkcipher_request_ctx, desc);
+	struct mcryptd_blkcipher_request_ctx *ret_rctx;
+	struct mcryptd_alg_cstate *cstate =
+				this_cpu_ptr(cbc_mb_alg_state.alg_cstate);
+	int err;
+	int ret = 0;
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+
+	desc->flags |= CRYPTO_TFM_SGWALK_MAY_SLEEP;
+	mb_key_ctx = aes_ctx(crypto_blkcipher_ctx(rctx->desc.tfm));
+	key_mgr = get_key_mgr(cstate->mgr, mb_key_ctx->key_length);
+
+	/* sanity check */
+	if (rctx->tag.cpu != smp_processor_id()) {
+		/* job not on list yet */
+		pr_err("mcryptd error: cpu clash\n");
+		notify_callback(rctx, cstate, -EINVAL);
+		return 0;
+	}
+
+	/* a new job, initialize the cbc context and add to worklist */
+	cbc_ctx_init(rctx, nbytes, CBC_ENCRYPT);
+	cbc_mb_add_list(rctx, cstate);
+
+	blkcipher_walk_init(&rctx->walk, dst, src, nbytes);
+	err = blkcipher_walk_virt(desc, &rctx->walk);
+	if (err || !rctx->walk.nbytes) {
+		/* terminate this request */
+		completion_callback(rctx, cstate, (!err) ? -EINVAL : err);
+		return 0;
+	}
+	/* submit job */
+	kernel_fpu_begin();
+	ret_rctx = aes_cbc_ctx_mgr_submit(key_mgr, rctx);
+	kernel_fpu_end();
+
+	if (!ret_rctx) {
+		/* we submitted a job, but none completed */
+		/* just notify the caller */
+		notify_callback(rctx, cstate, -EINPROGRESS);
+		return 0;
+	}
+	/* some job completed */
+	if (ret_rctx->error) {
+		/* some job finished with error */
+		cbc_complete_job(ret_rctx, cstate, ret_rctx->error);
+		return 0;
+	}
+	/* some job finished without error, process it */
+	ret = cbc_encrypt_finish(&ret_rctx, cstate, false);
+	if (!ret_rctx) {
+		/* No completed job yet, notify caller */
+		notify_callback(rctx, cstate, -EINPROGRESS);
+		return 0;
+	}
+
+	/* complete the job */
+	cbc_complete_job(ret_rctx, cstate, ret);
+	return 0;
+}
+
+static int mb_aes_cbc_decrypt(struct blkcipher_desc *desc,
+			      struct scatterlist *dst, struct scatterlist *src,
+			      unsigned int nbytes)
+{
+	struct crypto_aes_ctx *aesni_ctx;
+	struct mcryptd_blkcipher_request_ctx *rctx =
+		container_of(desc, struct mcryptd_blkcipher_request_ctx, desc);
+	struct ablkcipher_request *req;
+	struct blkcipher_walk walk;
+	bool is_mcryptd_req;
+	int err;
+
+	/* note here whether it is mcryptd req */
+	is_mcryptd_req = desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP;
+	req = cast_mcryptd_ctx_to_req(rctx);
+	aesni_ctx = aes_ctx(crypto_blkcipher_ctx(desc->tfm));
+
+	blkcipher_walk_init(&walk, dst, src, nbytes);
+	err = blkcipher_walk_virt(desc, &walk);
+	desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	kernel_fpu_begin();
+	while ((nbytes = walk.nbytes)) {
+		aesni_cbc_dec(aesni_ctx, walk.dst.virt.addr, walk.src.virt.addr,
+			      nbytes & AES_BLOCK_MASK, walk.iv);
+		nbytes &= AES_BLOCK_SIZE - 1;
+		err = blkcipher_walk_done(desc, &walk, nbytes);
+	}
+	kernel_fpu_end();
+	if (!is_mcryptd_req) {
+		/* synchronous request */
+		return err;
+	}
+
+	/* from mcryptd, we need to callback */
+	if (irqs_disabled())
+		rctx->complete(&req->base, err);
+	else {
+		local_bh_disable();
+		rctx->complete(&req->base, err);
+		local_bh_enable();
+	}
+	/* The completion status is via callback */
+	return 0;
+}
+
+/* use the same common code in aesni to expand key */
+
+static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
+			      const u8 *in_key, unsigned int key_len)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
+	u32 *flags = &tfm->crt_flags;
+	int err;
+
+	if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
+	    key_len != AES_KEYSIZE_256) {
+		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+		return -EINVAL;
+	}
+
+	if (!irq_fpu_usable())
+		err = crypto_aes_expand_key(ctx, in_key, key_len);
+	else {
+		kernel_fpu_begin();
+		err = aesni_set_key(ctx, in_key, key_len);
+		kernel_fpu_end();
+	}
+
+	return err;
+}
+
+static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+		       unsigned int key_len)
+{
+	return aes_set_key_common(tfm, crypto_tfm_ctx(tfm), in_key, key_len);
+}
+
+/* Interface functions to the asynchronous algorithm */
+
+static int cbc_mb_async_ablk_decrypt(struct ablkcipher_request *req)
+{
+	struct ablkcipher_request *mcryptd_req = ablkcipher_request_ctx(req);
+	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
+	struct aes_cbc_mb_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+	struct mcryptd_ablkcipher *mcryptd_tfm = ctx->mcryptd_tfm;
+	struct crypto_blkcipher *child_tfm;
+	struct blkcipher_desc desc;
+	int err;
+
+	if (!may_use_simd()) {
+		/* request sent via mcryptd, completes asynchronously */
+		memcpy(mcryptd_req, req, sizeof(*req));
+		ablkcipher_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
+		return crypto_ablkcipher_decrypt(mcryptd_req);
+	}
+
+	/*
+	 * Package decryption descriptor in mcryptd req structure
+	 * and let the child tfm execute the decryption
+	 * without incurring the cost of mcryptd layers.
+	 * Request completes synchronously.
+	 */
+
+	child_tfm = mcryptd_ablkcipher_child(ctx->mcryptd_tfm);
+	desc.tfm = child_tfm;
+	desc.info = req->info;
+	desc.flags = 0;
+
+	err = crypto_blkcipher_crt(child_tfm)->decrypt(
+		&desc, req->dst, req->src, req->nbytes);
+	return err;
+}
+
+static int cbc_mb_async_ablk_encrypt(struct ablkcipher_request *req)
+{
+	struct ablkcipher_request *mcryptd_req = ablkcipher_request_ctx(req);
+	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
+	struct aes_cbc_mb_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+	struct mcryptd_ablkcipher *mcryptd_tfm = ctx->mcryptd_tfm;
+
+	/* get the async driver ctx and tfm to package the request */
+	memcpy(mcryptd_req, req, sizeof(*req));
+	ablkcipher_request_set_tfm(mcryptd_req, &mcryptd_tfm->base);
+
+	return crypto_ablkcipher_encrypt(mcryptd_req);
+
+}
+
+/*
+ * Synchronous interface to the underlying mb implementation
+ * Although this interfaces with the underlying mb implementation, the
+ * CRYPTO_ALG_ASYNC flag is still passed so that memory allocated during
+ * the scatter-gather walk uses kmap instead of kmap_atomic, which can be
+ * problematic when processing threads are scheduled out.
+ */
+
+static struct crypto_alg aes_cbc_mb_alg = {
+	.cra_name		= "__cbc-aes-aesni-mb",
+	.cra_driver_name	= "__driver-cbc-aes-aesni-mb",
+	.cra_priority		= 100,
+	.cra_flags		= CRYPTO_ALG_TYPE_BLKCIPHER | CRYPTO_ALG_ASYNC
+					| CRYPTO_ALG_INTERNAL,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct crypto_aes_ctx) +
+				  AESNI_ALIGN - 1,
+	.cra_alignmask		= 0,
+	.cra_type		= &crypto_blkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_list		= LIST_HEAD_INIT(aes_cbc_mb_alg.cra_list),
+	.cra_u = {
+		.blkcipher = {
+			.min_keysize	= AES_MIN_KEY_SIZE,
+			.max_keysize	= AES_MAX_KEY_SIZE,
+			.ivsize		= AES_BLOCK_SIZE,
+			.setkey		= aes_set_key,
+			.encrypt	= mb_aes_cbc_encrypt,
+			.decrypt	= mb_aes_cbc_decrypt
+		},
+	},
+};
+
+static int ablk_cbc_async_init_tfm(struct crypto_tfm *tfm)
+{
+	struct mcryptd_ablkcipher *mcryptd_tfm;
+	struct aes_cbc_mb_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct mcryptd_blkcipher_ctx *mctx;
+
+	mcryptd_tfm = mcryptd_alloc_ablkcipher(
+				"__driver-cbc-aes-aesni-mb",
+				CRYPTO_ALG_INTERNAL, CRYPTO_ALG_INTERNAL);
+	if (IS_ERR(mcryptd_tfm))
+		return PTR_ERR(mcryptd_tfm);
+	mctx = crypto_ablkcipher_ctx(&mcryptd_tfm->base);
+	mctx->alg_state = &cbc_mb_alg_state;
+	ctx->mcryptd_tfm = mcryptd_tfm;
+	tfm->crt_ablkcipher.reqsize = sizeof(struct ablkcipher_request) +
+		crypto_ablkcipher_reqsize(&mcryptd_tfm->base);
+
+	return 0;
+}
+
+static void ablk_cbc_async_exit_tfm(struct crypto_tfm *tfm)
+{
+	struct aes_cbc_mb_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	mcryptd_free_ablkcipher(ctx->mcryptd_tfm);
+}
+
+/* Asynchronous interface to CBC multibuffer implementation */
+
+static struct crypto_alg aes_cbc_mb_async_alg = {
+	.cra_name		= "cbc(aes)",
+	.cra_driver_name	= "cbc-aes-aesni-mb",
+	.cra_priority		= 450,
+	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct aes_cbc_mb_ctx) +
+				  AESNI_ALIGN - 1,
+	.cra_alignmask		= 0,
+	.cra_type		= &crypto_ablkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= ablk_cbc_async_init_tfm,
+	.cra_exit		= ablk_cbc_async_exit_tfm,
+	.cra_list		= LIST_HEAD_INIT(aes_cbc_mb_async_alg.cra_list),
+	.cra_u = {
+		.ablkcipher = {
+			.min_keysize	= AES_MIN_KEY_SIZE,
+			.max_keysize	= AES_MAX_KEY_SIZE,
+			.ivsize		= AES_BLOCK_SIZE,
+			.setkey		= ablk_set_key,
+			.encrypt	= cbc_mb_async_ablk_encrypt,
+			.decrypt	= cbc_mb_async_ablk_decrypt,
+		},
+	},
+};
+
+/*
+ * When no new jobs arrive, the multibuffer queue may stall.  To prevent
+ * a prolonged stall, the flusher is invoked when either of the following
+ * conditions holds:
+ * a) partially completed multi-part crypto jobs have waited longer than
+ *    the maximum allowable delay
+ * b) the queue of crypto jobs is exhausted and the cpu has no other
+ *    tasks, so it would otherwise go idle
+ */
+unsigned long cbc_mb_flusher(struct mcryptd_alg_cstate *cstate)
+{
+	struct mcryptd_blkcipher_request_ctx *rctx;
+	unsigned long cur_time;
+	unsigned long next_flush = 0;
+
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+
+	cur_time = jiffies;
+
+	while (!list_empty(&cstate->work_list)) {
+		rctx = list_entry(cstate->work_list.next,
+				struct mcryptd_blkcipher_request_ctx, waiter);
+		if (time_before(cur_time, rctx->tag.expire))
+			break;
+
+		mb_key_ctx = aes_ctx(crypto_blkcipher_ctx(rctx->desc.tfm));
+		key_mgr = get_key_mgr(cstate->mgr, mb_key_ctx->key_length);
+
+		kernel_fpu_begin();
+		rctx = aes_cbc_ctx_mgr_flush(key_mgr);
+		kernel_fpu_end();
+		if (!rctx) {
+			pr_err("cbc_mb_flusher: nothing got flushed\n");
+			break;
+		}
+		cbc_encrypt_finish(&rctx, cstate, true);
+		if (rctx)
+			cbc_complete_job(rctx, cstate, rctx->error);
+	}
+
+	if (!list_empty(&cstate->work_list)) {
+		rctx = list_entry(cstate->work_list.next,
+				struct mcryptd_blkcipher_request_ctx, waiter);
+		/* re-arm the flusher for the next pending request's expiry */
+		next_flush = rctx->tag.expire;
+		mcryptd_arm_flusher(cstate, get_delay(next_flush));
+	}
+	return next_flush;
+}
+
+static int __init aes_cbc_mb_mod_init(void)
+{
+	int cpu, i;
+	int err;
+	struct mcryptd_alg_cstate *cpu_state;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+
+	/* check for dependent cpu features */
+	if (!boot_cpu_has(X86_FEATURE_AES)) {
+		pr_err("aes_cbc_mb_mod_init: no aes support\n");
+		err = -ENODEV;
+		goto err1;
+	}
+
+	if (!boot_cpu_has(X86_FEATURE_XMM)) {
+		pr_err("aes_cbc_mb_mod_init: no xmm support\n");
+		err = -ENODEV;
+		goto err1;
+	}
+
+	/* initialize multibuffer structures */
+
+	cbc_mb_alg_state.alg_cstate = alloc_percpu(struct mcryptd_alg_cstate);
+	if (!cbc_mb_alg_state.alg_cstate) {
+		pr_err("aes_cbc_mb_mod_init: insufficient memory\n");
+		err = -ENOMEM;
+		goto err1;
+	}
+
+	for_each_possible_cpu(cpu) {
+		cpu_state = per_cpu_ptr(cbc_mb_alg_state.alg_cstate, cpu);
+		cpu_state->next_flush = 0;
+		cpu_state->next_seq_num = 0;
+		cpu_state->flusher_engaged = false;
+		INIT_DELAYED_WORK(&cpu_state->flush, mcryptd_flusher);
+		cpu_state->cpu = cpu;
+		cpu_state->alg_state = &cbc_mb_alg_state;
+		cpu_state->mgr =
+			kzalloc(3 * sizeof(struct aes_cbc_mb_mgr_inorder_x8),
+				GFP_KERNEL);
+		if (!cpu_state->mgr) {
+			err = -ENOMEM;
+			goto err2;
+		}
+		key_mgr = (struct aes_cbc_mb_mgr_inorder_x8 *) cpu_state->mgr;
+		/* initialize manager state for 128, 192 and 256 bit keys */
+		for (i = 0; i < 3; ++i) {
+			aes_cbc_init_mb_mgr_inorder_x8(key_mgr);
+			++key_mgr;
+		}
+		INIT_LIST_HEAD(&cpu_state->work_list);
+		spin_lock_init(&cpu_state->work_lock);
+	}
+	cbc_mb_alg_state.flusher = &cbc_mb_flusher;
+
+	/* register the synchronous mb algo */
+	err = crypto_register_alg(&aes_cbc_mb_alg);
+	if (err)
+		goto err3;
+
+	/* register the asynchronous mb algo */
+	err = crypto_register_alg(&aes_cbc_mb_async_alg);
+	if (err) {
+		/* unregister synchronous mb algo */
+		crypto_unregister_alg(&aes_cbc_mb_alg);
+		goto err3;
+	}
+
+	pr_info("x86 CBC multibuffer crypto module initialized successfully\n");
+	return 0; /* module init success */
+
+	/* error in algo registration */
+err3:
+	for_each_possible_cpu(cpu) {
+		cpu_state = per_cpu_ptr(cbc_mb_alg_state.alg_cstate, cpu);
+		kfree(cpu_state->mgr);
+	}
+err2:
+	free_percpu(cbc_mb_alg_state.alg_cstate);
+err1:
+	return err;
+}
+
+static void __exit aes_cbc_mb_mod_fini(void)
+{
+	int cpu;
+	struct mcryptd_alg_cstate *cpu_state;
+
+	crypto_unregister_alg(&aes_cbc_mb_alg);
+	crypto_unregister_alg(&aes_cbc_mb_async_alg);
+
+	for_each_possible_cpu(cpu) {
+		cpu_state = per_cpu_ptr(cbc_mb_alg_state.alg_cstate, cpu);
+		kfree(cpu_state->mgr);
+	}
+	free_percpu(cbc_mb_alg_state.alg_cstate);
+}
+
+module_init(aes_cbc_mb_mod_init);
+module_exit(aes_cbc_mb_mod_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("AES CBC Algorithm, multi buffer accelerated");
+MODULE_AUTHOR("Tim Chen <tim.c.chen@linux.intel.com>");
+
+MODULE_ALIAS("aes-cbc-mb");
+MODULE_ALIAS_CRYPTO("cbc-aes-aesni-mb");
-- 
1.7.11.7

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-10-29 22:21 ` [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support Tim Chen
@ 2015-11-17 13:06   ` Herbert Xu
  2015-11-17 22:59     ` Tim Chen
  0 siblings, 1 reply; 15+ messages in thread
From: Herbert Xu @ 2015-11-17 13:06 UTC (permalink / raw)
  To: Tim Chen
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Thu, Oct 29, 2015 at 03:21:03PM -0700, Tim Chen wrote:
> 
> c) Add support to crypto scatterwalk support that can sleep during
> encryption operation, as we may have buffers for jobs in data lanes
> that are half-finished, waiting for additional jobs to come to fill
> empty lanes before we start the encryption again.  Therefore, we need to
> enhance crypto walk with the option to map data buffers non-atomically.
> This is done by algorithms run from crypto daemon who knows it is safe
> to do so as it can save and restore FPU state in correct context.

What about the existing ablkcipher scatterwalk helpers?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-11-17 13:06   ` Herbert Xu
@ 2015-11-17 22:59     ` Tim Chen
  2015-11-18  0:07       ` Herbert Xu
  0 siblings, 1 reply; 15+ messages in thread
From: Tim Chen @ 2015-11-17 22:59 UTC (permalink / raw)
  To: Herbert Xu
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Tue, 2015-11-17 at 21:06 +0800, Herbert Xu wrote:
> On Thu, Oct 29, 2015 at 03:21:03PM -0700, Tim Chen wrote:
> > 
> > c) Add support to crypto scatterwalk support that can sleep during
> > encryption operation, as we may have buffers for jobs in data lanes
> > that are half-finished, waiting for additional jobs to come to fill
> > empty lanes before we start the encryption again.  Therefore, we need to
> > enhance crypto walk with the option to map data buffers non-atomically.
> > This is done by algorithms run from crypto daemon who knows it is safe
> > to do so as it can save and restore FPU state in correct context.
> 
> What about the existing ablkcipher scatterwalk helpers?
> 
> Cheers,

I suppose blkcipher was originally used because we
were under the impression that there is less buffer copying
and fewer intermediate buffer allocations
with the blkcipher walk.

But looking at the blkcipher walk and ablkcipher
walk code more carefully now, I am not sure that's really true:
it seems like ablkcipher keeps all intermediate buffers till the
end and copies them to the destination in one shot, while blkcipher
does that at every chunk of the walk.  The advantage of blkcipher is
that you don't have as many outstanding buffers in a list.  If there's
really not much speed difference, I can try to use ablkcipher.
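
Roughly, the per-chunk behavior I mean shows up to a blkcipher_walk
caller like this (just a sketch of the loop shape, not code from the
patch, and the cipher call is a placeholder):

	struct blkcipher_walk walk;
	int err;

	blkcipher_walk_init(&walk, dst, src, nbytes);
	err = blkcipher_walk_virt(desc, &walk);
	while ((nbytes = walk.nbytes)) {
		/* cipher walk.src.virt.addr -> walk.dst.virt.addr here */
		nbytes &= AES_BLOCK_SIZE - 1;
		/* blkcipher_walk_done() writes back any bounce buffer
		 * used for this chunk right away, chunk by chunk */
		err = blkcipher_walk_done(desc, &walk, nbytes);
	}

whereas the ablkcipher walk queues those bounce buffers on a list and
only writes them all back when the walk completes.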

Herbert, would you prefer me to use ablkcipher scatter walk instead,
assuming the overhead of both walks is about the same?

Thanks.

Tim

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-11-17 22:59     ` Tim Chen
@ 2015-11-18  0:07       ` Herbert Xu
  2015-11-18  0:30         ` Tim Chen
  0 siblings, 1 reply; 15+ messages in thread
From: Herbert Xu @ 2015-11-18  0:07 UTC (permalink / raw)
  To: Tim Chen
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Tue, Nov 17, 2015 at 02:59:29PM -0800, Tim Chen wrote:
>
> Herbert, would you prefer me to use ablkcipher scatter walk instead,
> assuming the overhead of both walks is about the same?

Well since you are going to potentially sleep in the middle of
an operation I'd think ablkcipher is required, no?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-11-18  0:07       ` Herbert Xu
@ 2015-11-18  0:30         ` Tim Chen
  2015-11-18  5:06           ` Herbert Xu
  0 siblings, 1 reply; 15+ messages in thread
From: Tim Chen @ 2015-11-18  0:30 UTC (permalink / raw)
  To: Herbert Xu
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Wed, 2015-11-18 at 08:07 +0800, Herbert Xu wrote:
> On Tue, Nov 17, 2015 at 02:59:29PM -0800, Tim Chen wrote:
> >
> > Herbert, would you prefer me to use ablkcipher scatter walk instead,
> > assuming the overhead of both walks is about the same?
> 
> Well since you are going to potentially sleep in the middle of
> an operation I'd think ablkcipher is required, no?

We're using blkcipher walk in the implementation. 
As long as we use kmap instead of kmap_atomic,
it allows us to sleep in the middle of the walk.
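
Conceptually the mapping step just picks between the sleepable and the
atomic kmap depending on whether the walk is allowed to sleep; roughly
like the sketch below (the helper name is made up for illustration, the
actual patch plumbs this through the blkcipher/scatterwalk flags):

	/* illustration only: choose a mapping that permits sleeping */
	static void *walk_map_page(struct page *page, bool may_sleep)
	{
		return may_sleep ? kmap(page) : kmap_atomic(page);
	}

kmap() may block, which is only safe here because this path always runs
from the mcryptd daemon in process context.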

Tim

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-11-18  0:30         ` Tim Chen
@ 2015-11-18  5:06           ` Herbert Xu
  2015-11-18 15:58             ` Tim Chen
  0 siblings, 1 reply; 15+ messages in thread
From: Herbert Xu @ 2015-11-18  5:06 UTC (permalink / raw)
  To: Tim Chen
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Tue, Nov 17, 2015 at 04:30:14PM -0800, Tim Chen wrote:
> On Wed, 2015-11-18 at 08:07 +0800, Herbert Xu wrote:
> > On Tue, Nov 17, 2015 at 02:59:29PM -0800, Tim Chen wrote:
> > >
> > > Herbert, would you prefer me to use ablkcipher scatter walk instead,
> > > assuming the overhead of both walks is about the same?
> > 
> > Well since you are going to potentially sleep in the middle of
> > an operation I'd think ablkcipher is required, no?
> 
> We're using blkcipher walk in the implementation. 
> As long as we use kmap instead of kmap_atomic,
> it allows us to sleep in the middle of the walk.

What if you were called from an atomic context, such as IPsec?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-11-18  5:06           ` Herbert Xu
@ 2015-11-18 15:58             ` Tim Chen
  2015-11-19  0:12               ` Herbert Xu
  0 siblings, 1 reply; 15+ messages in thread
From: Tim Chen @ 2015-11-18 15:58 UTC (permalink / raw)
  To: Herbert Xu
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Wed, 2015-11-18 at 13:06 +0800, Herbert Xu wrote:
> On Tue, Nov 17, 2015 at 04:30:14PM -0800, Tim Chen wrote:
> > On Wed, 2015-11-18 at 08:07 +0800, Herbert Xu wrote:
> > > On Tue, Nov 17, 2015 at 02:59:29PM -0800, Tim Chen wrote:
> > > >
> > > > Herbert, would you prefer me to use ablkcipher scatter walk instead,
> > > > assuming the overhead of both walks is about the same?
> > > 
> > > Well since you are going to potentially sleep in the middle of
> > > an operation I'd think ablkcipher is required, no?
> > 
> > We're using blkcipher walk in the implementation. 
> > As long as we use kmap instead of kmap_atomic,
> > it allows us to sleep in the middle of the walk.
> 
> What if you were called from an atomic context, such as IPsec?
> 

IPsec will invoke this multi-buffer encrypt with an async request.
The work is done in the crypto daemon, so it wouldn't be in atomic
context.  But anyway, I'm okay with switching to the ablkcipher walk,
as long as it doesn't incur too much more overhead than the blkcipher
walk.
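
For reference, an async user like IPsec reaches this code roughly as in
the sketch below (illustrative caller, not taken from the patch; the
names are made up):

	/* illustrative async caller: on -EINPROGRESS the real work is
	 * finished later from the mcryptd daemon's process context and
	 * my_complete() is invoked when it is done */
	static void my_complete(struct crypto_async_request *areq, int err)
	{
		/* encryption finished (or failed) in the crypto daemon */
	}

	static int my_encrypt(struct ablkcipher_request *req,
			      struct scatterlist *src, struct scatterlist *dst,
			      unsigned int nbytes, u8 *iv)
	{
		ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
						my_complete, NULL);
		ablkcipher_request_set_crypt(req, src, dst, nbytes, iv);
		return crypto_ablkcipher_encrypt(req); /* usually -EINPROGRESS */
	}

so by the time the multi-buffer code touches the data we are in the
crypto daemon's process context, not in the caller's softirq context.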

Tim

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-11-18 15:58             ` Tim Chen
@ 2015-11-19  0:12               ` Herbert Xu
  2015-11-19  2:39                 ` Tim Chen
  0 siblings, 1 reply; 15+ messages in thread
From: Herbert Xu @ 2015-11-19  0:12 UTC (permalink / raw)
  To: Tim Chen
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Wed, Nov 18, 2015 at 07:58:56AM -0800, Tim Chen wrote:
>
> IPsec will invoke this multi-buffer encrypt with an async request.
> The work is done in the crypto daemon, so it wouldn't be in atomic
> context.  But anyway, I'm okay with switching to the ablkcipher walk,
> as long as it doesn't incur too much more overhead than the blkcipher
> walk.

What if some other user called the blkcipher interface in an atomic
context? You can't guarantee that your algorithm is only picked up
through the ablkcipher interface, unless of course you do something
like __driver-ctr-aes-aesni.

Hmm I was just looking at the sha_mb code and something doesn't
look right.  For instance, can sha1_mb_update ever return
-EINPROGRESS? This would be wrong as it's registered as an shash
algorithm.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-11-19  0:12               ` Herbert Xu
@ 2015-11-19  2:39                 ` Tim Chen
  2015-11-19  9:39                   ` Herbert Xu
  0 siblings, 1 reply; 15+ messages in thread
From: Tim Chen @ 2015-11-19  2:39 UTC (permalink / raw)
  To: Herbert Xu
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Thu, 2015-11-19 at 08:12 +0800, Herbert Xu wrote:
> On Wed, Nov 18, 2015 at 07:58:56AM -0800, Tim Chen wrote:
> >
> > IPsec will invoke this multi-buffer encrypt with an async request.
> > The work is done in the crypto daemon, so it wouldn't be in atomic
> > context.  But anyway, I'm okay with switching to the ablkcipher walk,
> > as long as it doesn't incur too much more overhead than the blkcipher
> > walk.
> 
> What if some other user called the blkcipher interface in an atomic
> context? You can't guarantee that your algorithm is only picked up
> through the ablkcipher interface, unless of course you do something
> like __driver-ctr-aes-aesni.

The __cbc-aes-aesni-mb algorithm is marked as an internal algorithm
with the CRYPTO_ALG_INTERNAL flag, so it should not be picked up by
other users of the crypto API and should only be invoked from mcryptd.
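
In other words, a regular "cbc(aes)" allocation never sees the internal
variant; only mcryptd asks for it explicitly.  A sketch of the two
lookups (using the same names as the glue code):

	struct crypto_ablkcipher *tfm;
	struct mcryptd_ablkcipher *mcryptd_tfm;

	/* normal users: CRYPTO_ALG_INTERNAL algorithms are masked out
	 * of the lookup, so this resolves to a non-internal cbc(aes)
	 * implementation (e.g. the outer cbc-aes-aesni-mb here) */
	tfm = crypto_alloc_ablkcipher("cbc(aes)", 0, 0);

	/* only mcryptd requests the internal algorithm by driver name
	 * with the CRYPTO_ALG_INTERNAL type/mask set */
	mcryptd_tfm = mcryptd_alloc_ablkcipher("__driver-cbc-aes-aesni-mb",
					       CRYPTO_ALG_INTERNAL,
					       CRYPTO_ALG_INTERNAL);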

Anyway, I've updated the aes_cbc_mb code with the ablkcipher helper.
So I will be posting the new series with ablkcipher walk
after testing is done.

> 
> Hmm I was just looking at the sha_mb code and something doesn't
> look right.  For instance, can sha1_mb_update ever return
> -EINPROGRESS? This would be wrong as it's registered as an shash
> algorithm.

The __sha1-mb algorithm works in tandem with the outer layer of the
mcryptd async algorithm. It does the completion for the outer
async algorithm.  So as far as mcryptd is concerned, the
inner algorithm is synchronous in the sense that mcryptd is done
once it dispatches the job to __sha1-mb and doesn't have to worry about it.
I don't think mcryptd checks the return value from __sha1-mb,
so it should be okay to return 0 instead of -EINPROGRESS.
I'll double check that.

Tim

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support
  2015-11-19  2:39                 ` Tim Chen
@ 2015-11-19  9:39                   ` Herbert Xu
  0 siblings, 0 replies; 15+ messages in thread
From: Herbert Xu @ 2015-11-19  9:39 UTC (permalink / raw)
  To: Tim Chen
  Cc: H. Peter Anvin, David S.Miller, Stephan Mueller,
	Chandramouli Narayanan, Vinodh Gopal, James Guilford,
	Wajdi Feghali, Jussi Kivilinna, linux-crypto, linux-kernel

On Wed, Nov 18, 2015 at 06:39:30PM -0800, Tim Chen wrote:
>
> The __cbc-aes-aesni-mb algorithm is marked as an internal algorithm
> with the CRYPTO_ALG_INTERNAL flag, so it should not be picked up by
> other users of the crypto API and should only be invoked from mcryptd.

OK I guess that's fine then.

> Anyway, I've updated the aes_cbc_mb code with the ablkcipher helper.
> So I will be posting the new series with ablkcipher walk
> after testing is done.

Yes I think what you have is a very special case.  As you said
the ablkcipher interface should be fairly low in overhead so I
think it makes sense to use that instead of blkcipher.

> The __sha1-mb algorithm works in tandem with the outer layer of the
> mcryptd async algorithm. It does the completion for the outer
> async algorithm.  So as far as mcryptd is concerned, the
> inner algorithm is synchronous in the sense that mcryptd is done
> once it dispatches the job to __sha1-mb and doesn't have to worry about it.
> I don't think mcryptd checks the return value from __sha1-mb,
> so it should be okay to return 0 instead of -EINPROGRESS.
> I'll double check that.

If it can never return EINPROGRESS then we should probably remove
the code in it that says "return -EINPROGRESS".

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2015-11-19  9:40 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <cover.1446158797.git.tim.c.chen@linux.intel.com>
2015-10-29 22:20 ` [PATCH v2 0/5] crypto: x86 AES-CBC encryption with multibuffer Tim Chen
2015-10-29 22:21 ` [PATCH v2 1/5] crypto: Multi-buffer encryptioin infrastructure support Tim Chen
2015-11-17 13:06   ` Herbert Xu
2015-11-17 22:59     ` Tim Chen
2015-11-18  0:07       ` Herbert Xu
2015-11-18  0:30         ` Tim Chen
2015-11-18  5:06           ` Herbert Xu
2015-11-18 15:58             ` Tim Chen
2015-11-19  0:12               ` Herbert Xu
2015-11-19  2:39                 ` Tim Chen
2015-11-19  9:39                   ` Herbert Xu
2015-10-29 22:21 ` [PATCH v2 2/5] crypto: AES CBC multi-buffer data structures Tim Chen
2015-10-29 22:21 ` [PATCH v2 3/5] crypto: AES CBC multi-buffer scheduler Tim Chen
2015-10-29 22:21 ` [PATCH v2 4/5] crypto: AES CBC by8 encryption Tim Chen
2015-10-29 22:21 ` [PATCH v2 5/5] crypto: AES CBC multi-buffer glue code Tim Chen
