* [PATCH V8 0/5] crypto: AES CBC multibuffer implementation
@ 2018-01-10  0:09 Megha Dey
  2018-01-10  0:09 ` [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support Megha Dey
                   ` (4 more replies)
  0 siblings, 5 replies; 21+ messages in thread
From: Megha Dey @ 2018-01-10  0:09 UTC (permalink / raw)
  To: linux-kernel, linux-crypto, davem, herbert; +Cc: megha.dey, Megha Dey

In this patch series, we introduce AES CBC encryption that is parallelized
on x86_64 cpus with XMM registers. The multi-buffer technique encrypts 8
data streams in parallel with SIMD instructions. Decryption is handled as
in the existing AESNI Intel CBC implementation, which can already
parallelize decryption even for a single data stream.

Please see the multi-buffer whitepaper for details of the technique:
http://www.intel.com/content/www/us/en/communications/communications-ia-multi-buffer-paper.html

It is important that any driver using this algorithm does so only for
scenarios where many data streams are available to fill up the data lanes
most of the time. It should not be used when mostly a single data stream
is expected. Otherwise, we may incur extra delays when there are frequent
gaps in the data lanes, causing us to wait for incoming data to fill the
lanes before initiating encryption. We may also have to wait for flush
operations to commence when no new data arrives after some wait time.
However, we keep this extra delay to a minimum by opportunistically
flushing unfinished jobs when the crypto daemon is the only active task
running on a cpu.

By using this technique, we saw a throughput increase of up to 5.7x under
optimal conditions when we have fully loaded encryption jobs filling up all
the data lanes.
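
For illustration only (not part of this series), a caller that wants to
benefit from multi-buffer operation would submit many independent requests
through the regular asynchronous skcipher API and let completions come
back via callbacks, roughly like the minimal sketch below (error/cleanup
paths omitted; "cbc(aes)" is the generic algorithm name, and whether the
multi-buffer driver is selected depends on its priority relative to other
cbc(aes) implementations):

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/err.h>

static int submit_many(struct scatterlist *src[], struct scatterlist *dst[],
		       unsigned int len[], u8 *iv[], int n,
		       const u8 *key, unsigned int keylen,
		       crypto_completion_t done, void *cb_data[])
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	int i, err;

	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err)
		return err;

	for (i = 0; i < n; i++) {
		req = skcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req)
			return -ENOMEM;
		skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
					      done, cb_data[i]);
		skcipher_request_set_crypt(req, src[i], dst[i], len[i], iv[i]);

		/* -EINPROGRESS (or -EBUSY with MAY_BACKLOG) means queued */
		err = crypto_skcipher_encrypt(req);
		if (err && err != -EINPROGRESS && err != -EBUSY)
			return err;
	}
	return 0;
}

Keeping several such requests in flight is what allows the scheduler to
fill all 8 data lanes.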

Change Log:
v8
1. Remove the notify_callback construct
2. Remove remaining irq_disabled check
3. Remove related tcrypt test as it is already merged

v7
1. Add the CRYPTO_ALG_ASYNC flag to the internal algorithm
2. Remove the irq_disabled check

v6
1. Move away from the compat naming scheme and update the names of the inner
   and outer algorithm
2. Move wrapper code around synchronous internal algorithm from simd.c
   to mcryptd.c

v5
1. Use an async implementation of the inner algorithm instead of sync and use
   the latest skcipher interface instead of the older blkcipher interface.
   (we have picked up this work after a while)

v4
1. Make the decrypt path also use ablkcipher walk.
http://lkml.iu.edu/hypermail/linux/kernel/1512.0/01807.html

v3
1. Use ablkcipher_walk helpers to walk the scatter gather list
and eliminate the need to modify blkcipher_walk for the multibuffer cipher

v2
1. Update cpu feature check to make sure SSE is supported
2. Fix up unloading of aes-cbc-mb module to properly free memory

Megha Dey (5):
  crypto: Multi-buffer encryption infrastructure support
  crypto: AES CBC multi-buffer data structures
  crypto: AES CBC multi-buffer scheduler
  crypto: AES CBC by8 encryption
  crypto: AES CBC multi-buffer glue code

 arch/x86/crypto/Makefile                           |   1 +
 arch/x86/crypto/aes-cbc-mb/Makefile                |  22 +
 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S        | 775 +++++++++++++++++++++
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c            | 698 +++++++++++++++++++
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h        |  97 +++
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h        | 132 ++++
 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c       | 146 ++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S     | 271 +++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S | 223 ++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S     | 417 +++++++++++
 arch/x86/crypto/aes-cbc-mb/reg_sizes.S             | 126 ++++
 crypto/Kconfig                                     |  15 +
 crypto/mcryptd.c                                   | 475 +++++++++++++
 include/crypto/mcryptd.h                           |  56 ++
 14 files changed, 3454 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/Makefile
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/reg_sizes.S

-- 
1.9.1


* [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-01-10  0:09 [PATCH V8 0/5] crypto: AES CBC multibuffer implementation Megha Dey
@ 2018-01-10  0:09 ` Megha Dey
  2018-01-18 11:39   ` Herbert Xu
  2018-01-10  0:09 ` [PATCH V8 2/5] crypto: AES CBC multi-buffer data structures Megha Dey
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 21+ messages in thread
From: Megha Dey @ 2018-01-10  0:09 UTC (permalink / raw)
  To: linux-kernel, linux-crypto, davem, herbert; +Cc: megha.dey, Megha Dey

In this patch, the infrastructure needed to support the multibuffer
encryption implementation is added:

a) Enhance the mcryptd daemon to support skcipher requests.

b) Add a multi-buffer mcryptd skcipher helper which presents the
   top-level algorithm as an skcipher.

c) Update the configuration to include multi-buffer encryption build
   support.

For an introduction to the multi-buffer implementation, please see
http://www.intel.com/content/www/us/en/communications/communications-ia-multi-buffer-paper.html
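
As a rough sketch (not part of this patch) of how a glue driver is
expected to use this helper: the inner driver name below is only a
placeholder for the CRYPTO_ALG_INTERNAL multi-buffer implementation
registered by the glue code.

#include <crypto/mcryptd.h>
#include <linux/module.h>
#include <linux/err.h>

static struct mcryptd_skcipher_alg_mb *mb_alg;

static int __init example_mb_mod_init(void)
{
	/* outer "cbc(aes)" skcipher wrapping a hypothetical inner driver */
	mb_alg = mcryptd_skcipher_create_mb("cbc(aes)",
					    "__example-cbc-aes-mb");
	return PTR_ERR_OR_ZERO(mb_alg);
}

static void __exit example_mb_mod_exit(void)
{
	mcryptd_skcipher_free_mb(mb_alg);
}

module_init(example_mb_mod_init);
module_exit(example_mb_mod_exit);
MODULE_LICENSE("GPL");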

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 crypto/Kconfig           |  15 ++
 crypto/mcryptd.c         | 475 +++++++++++++++++++++++++++++++++++++++++++++++
 include/crypto/mcryptd.h |  56 ++++++
 3 files changed, 546 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 9327fbf..66bd682 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1021,6 +1021,21 @@ config CRYPTO_AES_NI_INTEL
 	  ECB, CBC, LRW, PCBC, XTS. The 64 bit version has additional
 	  acceleration for CTR.
 
+config CRYPTO_AES_CBC_MB
+        tristate "AES CBC algorithm (x86_64 Multi-Buffer, Experimental)"
+        depends on X86 && 64BIT
+        select CRYPTO_SIMD
+        select CRYPTO_MCRYPTD
+        help
+          AES CBC encryption implemented using multi-buffer technique.
+          This algorithm computes on multiple data lanes concurrently with
+          SIMD instructions for better throughput. It should only be used
+          when we expect many concurrent crypto requests to keep all the
+          data lanes filled to realize the performance benefit. If the data
+          lanes are unfilled, a flush operation will be initiated after some
+          delay to process the existing crypto jobs, adding some extra
+          latency to the low load case.
+
 config CRYPTO_AES_SPARC64
 	tristate "AES cipher algorithms (SPARC64)"
 	depends on SPARC64
diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
index 2908382..ce502e2 100644
--- a/crypto/mcryptd.c
+++ b/crypto/mcryptd.c
@@ -269,6 +269,443 @@ static inline bool mcryptd_check_internal(struct rtattr **tb, u32 *type,
 		return false;
 }
 
+static int mcryptd_enqueue_skcipher_request(struct mcryptd_queue *queue,
+				struct crypto_async_request *request,
+				struct mcryptd_skcipher_request_ctx *rctx)
+{
+	int cpu, err;
+	struct mcryptd_cpu_queue *cpu_queue;
+
+	cpu = get_cpu();
+	cpu_queue = this_cpu_ptr(queue->cpu_queue);
+	rctx->tag.cpu = cpu;
+
+	err = crypto_enqueue_request(&cpu_queue->queue, request);
+	pr_debug("enqueue request: cpu %d cpu_queue %p request %p\n",
+		cpu, cpu_queue, request);
+	queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
+	put_cpu();
+
+	return err;
+}
+
+static int mcryptd_skcipher_setkey(struct crypto_skcipher *parent,
+				const u8 *key, unsigned int keylen)
+{
+	struct mcryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(parent);
+	struct crypto_skcipher *child = ctx->child;
+	int err;
+
+	crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(parent) &
+						CRYPTO_TFM_REQ_MASK);
+	err = crypto_skcipher_setkey(child, key, keylen);
+	crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) &
+						CRYPTO_TFM_RES_MASK);
+	return err;
+}
+
+static void mcryptd_skcipher_complete(struct skcipher_request *req, int err)
+{
+	struct mcryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+
+	local_bh_disable();
+	rctx->complete(&req->base, err);
+	local_bh_enable();
+}
+
+static void mcryptd_skcipher_encrypt(struct crypto_async_request *base,
+								int err)
+{
+	struct skcipher_request *req = skcipher_request_cast(base);
+	struct mcryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct mcryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct crypto_skcipher *child = ctx->child;
+	struct skcipher_request subreq;
+
+	if (unlikely(err == -EINPROGRESS))
+		goto out;
+
+	/* set up the skcipher request to work on */
+	skcipher_request_set_tfm(&subreq, child);
+	skcipher_request_set_callback(&subreq,
+					CRYPTO_TFM_REQ_MAY_SLEEP, 0, 0);
+	skcipher_request_set_crypt(&subreq, req->src, req->dst,
+					req->cryptlen, req->iv);
+
+	/*
+	 * pass addr of descriptor stored in the request context
+	 * so that the callee can get to the request context
+	 */
+	rctx->desc = subreq;
+	err = crypto_skcipher_encrypt(&rctx->desc);
+
+	if (err) {
+		req->base.complete = rctx->complete;
+		goto out;
+	}
+	return;
+
+out:
+	mcryptd_skcipher_complete(req, err);
+}
+
+static void mcryptd_skcipher_decrypt(struct crypto_async_request *base,
+								int err)
+{
+	struct skcipher_request *req = skcipher_request_cast(base);
+	struct mcryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct mcryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct crypto_skcipher *child = ctx->child;
+	struct skcipher_request subreq;
+
+	if (unlikely(err == -EINPROGRESS))
+		goto out;
+
+	/* set up the skcipher request to work on */
+	skcipher_request_set_tfm(&subreq, child);
+	skcipher_request_set_callback(&subreq,
+				CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
+	skcipher_request_set_crypt(&subreq, req->src, req->dst,
+						req->cryptlen, req->iv);
+
+	/*
+	 * pass addr of descriptor stored in the request context
+	 * so that the callee can get to the request context
+	 */
+	rctx->desc = subreq;
+	err = crypto_skcipher_decrypt(&rctx->desc);
+
+	if (err) {
+		req->base.complete = rctx->complete;
+		goto out;
+	}
+	return;
+
+out:
+	mcryptd_skcipher_complete(req, err);
+}
+
+static int mcryptd_skcipher_enqueue(struct skcipher_request *req,
+					crypto_completion_t complete)
+{
+	struct mcryptd_skcipher_request_ctx *rctx =
+					skcipher_request_ctx(req);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct mcryptd_queue *queue;
+
+	queue = mcryptd_get_queue(crypto_skcipher_tfm(tfm));
+	rctx->complete = req->base.complete;
+	req->base.complete = complete;
+
+	return mcryptd_enqueue_skcipher_request(queue, &req->base, rctx);
+}
+
+static int mcryptd_skcipher_encrypt_enqueue(struct skcipher_request *req)
+{
+	return mcryptd_skcipher_enqueue(req, mcryptd_skcipher_encrypt);
+}
+
+static int mcryptd_skcipher_decrypt_enqueue(struct skcipher_request *req)
+{
+	return mcryptd_skcipher_enqueue(req, mcryptd_skcipher_decrypt);
+}
+
+static int mcryptd_skcipher_init_tfm(struct crypto_skcipher *tfm)
+{
+	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+	struct mskcipherd_instance_ctx *ictx = skcipher_instance_ctx(inst);
+	struct crypto_skcipher_spawn *spawn = &ictx->spawn;
+	struct mcryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct crypto_skcipher *cipher;
+
+	cipher = crypto_spawn_skcipher(spawn);
+	if (IS_ERR(cipher))
+		return PTR_ERR(cipher);
+
+	ctx->child = cipher;
+	crypto_skcipher_set_reqsize(tfm,
+			sizeof(struct mcryptd_skcipher_request_ctx));
+	return 0;
+}
+
+static void mcryptd_skcipher_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct mcryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_skcipher(ctx->child);
+}
+
+static void mcryptd_skcipher_free(struct skcipher_instance *inst)
+{
+	struct mskcipherd_instance_ctx *ctx = skcipher_instance_ctx(inst);
+
+	crypto_drop_skcipher(&ctx->spawn);
+}
+
+static int mcryptd_init_instance(struct crypto_instance *inst,
+					struct crypto_alg *alg)
+{
+	if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+		"mcryptd(%s)",
+			alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+		return -ENAMETOOLONG;
+
+	memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
+	inst->alg.cra_priority = alg->cra_priority + 50;
+	inst->alg.cra_blocksize = alg->cra_blocksize;
+	inst->alg.cra_alignmask = alg->cra_alignmask;
+
+	return 0;
+}
+
+static int mcryptd_create_skcipher(struct crypto_template *tmpl,
+				   struct rtattr **tb,
+				   struct mcryptd_queue *queue)
+{
+	struct mskcipherd_instance_ctx *ctx;
+	struct skcipher_instance *inst;
+	struct skcipher_alg *alg;
+	const char *name;
+	u32 type;
+	u32 mask;
+	int err;
+
+	type = CRYPTO_ALG_ASYNC;
+	mask = CRYPTO_ALG_ASYNC;
+
+	mcryptd_check_internal(tb, &type, &mask);
+
+	name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(name))
+		return PTR_ERR(name);
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
+
+	ctx = skcipher_instance_ctx(inst);
+	ctx->queue = queue;
+
+	crypto_set_skcipher_spawn(&ctx->spawn, skcipher_crypto_instance(inst));
+	err = crypto_grab_skcipher(&ctx->spawn, name, type, mask);
+
+	if (err)
+		goto out_free_inst;
+
+	alg = crypto_spawn_skcipher_alg(&ctx->spawn);
+	err = mcryptd_init_instance(skcipher_crypto_instance(inst), &alg->base);
+	if (err)
+		goto out_drop_skcipher;
+
+	inst->alg.base.cra_flags = CRYPTO_ALG_ASYNC |
+				(alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
+
+	inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg);
+	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
+	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
+	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+
+	inst->alg.base.cra_ctxsize = sizeof(struct mcryptd_skcipher_ctx);
+
+	inst->alg.init = mcryptd_skcipher_init_tfm;
+	inst->alg.exit = mcryptd_skcipher_exit_tfm;
+
+	inst->alg.setkey = mcryptd_skcipher_setkey;
+	inst->alg.encrypt = mcryptd_skcipher_encrypt_enqueue;
+	inst->alg.decrypt = mcryptd_skcipher_decrypt_enqueue;
+
+	inst->free = mcryptd_skcipher_free;
+
+	err = skcipher_register_instance(tmpl, inst);
+	if (err) {
+out_drop_skcipher:
+		crypto_drop_skcipher(&ctx->spawn);
+out_free_inst:
+		kfree(inst);
+	}
+	return err;
+}
+
+static int mcryptd_skcipher_setkey_mb(struct crypto_skcipher *tfm,
+				      const u8 *key,
+				      unsigned int key_len)
+{
+	struct mcryptd_skcipher_ctx_mb *ctx = crypto_skcipher_ctx(tfm);
+	struct crypto_skcipher *child = &ctx->mcryptd_tfm->base;
+	int err;
+
+	crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(tfm) &
+					CRYPTO_TFM_REQ_MASK);
+	err = crypto_skcipher_setkey(child, key, key_len);
+	crypto_skcipher_set_flags(tfm, crypto_skcipher_get_flags(child) &
+					CRYPTO_TFM_RES_MASK);
+	return err;
+}
+
+static int mcryptd_skcipher_decrypt_mb(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct mcryptd_skcipher_ctx_mb *ctx = crypto_skcipher_ctx(tfm);
+	struct skcipher_request *subreq;
+	struct crypto_skcipher *child;
+
+	subreq = skcipher_request_ctx(req);
+	*subreq = *req;
+
+	child = &ctx->mcryptd_tfm->base;
+
+	skcipher_request_set_tfm(subreq, child);
+
+	return crypto_skcipher_decrypt(subreq);
+}
+
+static int mcryptd_skcipher_encrypt_mb(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct mcryptd_skcipher_ctx_mb *ctx = crypto_skcipher_ctx(tfm);
+	struct skcipher_request *subreq;
+	struct crypto_skcipher *child;
+
+	subreq = skcipher_request_ctx(req);
+	*subreq = *req;
+
+	child = &ctx->mcryptd_tfm->base;
+
+	skcipher_request_set_tfm(subreq, child);
+
+	return crypto_skcipher_encrypt(subreq);
+}
+
+static void mcryptd_skcipher_exit_mb(struct crypto_skcipher *tfm)
+{
+	struct mcryptd_skcipher_ctx_mb *ctx = crypto_skcipher_ctx(tfm);
+
+	mcryptd_free_skcipher(ctx->mcryptd_tfm);
+}
+
+static int mcryptd_skcipher_init_mb(struct crypto_skcipher *tfm)
+{
+	struct mcryptd_skcipher_ctx_mb *ctx = crypto_skcipher_ctx(tfm);
+	struct mcryptd_skcipher *mcryptd_tfm;
+	struct mcryptd_skcipher_alg_mb *salg;
+	struct skcipher_alg *alg;
+	unsigned int reqsize;
+	struct mcryptd_skcipher_ctx *mctx;
+
+	alg = crypto_skcipher_alg(tfm);
+	salg = container_of(alg, struct mcryptd_skcipher_alg_mb, alg);
+
+	mcryptd_tfm = mcryptd_alloc_skcipher(salg->ialg_name,
+						CRYPTO_ALG_INTERNAL,
+						CRYPTO_ALG_INTERNAL);
+	if (IS_ERR(mcryptd_tfm))
+		return PTR_ERR(mcryptd_tfm);
+
+	mctx = crypto_skcipher_ctx(&mcryptd_tfm->base);
+
+	mctx->alg_state = &cbc_mb_alg_state;
+	ctx->mcryptd_tfm = mcryptd_tfm;
+
+	reqsize = sizeof(struct skcipher_request);
+	reqsize += crypto_skcipher_reqsize(&mcryptd_tfm->base);
+
+	crypto_skcipher_set_reqsize(tfm, reqsize);
+
+	return 0;
+}
+struct mcryptd_skcipher_alg_mb *mcryptd_skcipher_create_compat_mb(
+							const char *algname,
+							const char *drvname,
+							const char *basename)
+{
+	struct mcryptd_skcipher_alg_mb *salg;
+	struct crypto_skcipher *tfm;
+	struct skcipher_alg *ialg;
+	struct skcipher_alg *alg;
+	int err;
+
+	tfm = crypto_alloc_skcipher(basename,
+				CRYPTO_ALG_INTERNAL | CRYPTO_ALG_ASYNC,
+				CRYPTO_ALG_INTERNAL | CRYPTO_ALG_ASYNC);
+	if (IS_ERR(tfm))
+		return ERR_CAST(tfm);
+
+	ialg = crypto_skcipher_alg(tfm);
+
+	salg = kzalloc(sizeof(*salg), GFP_KERNEL);
+	if (!salg) {
+		salg = ERR_PTR(-ENOMEM);
+		goto out_put_tfm;
+	}
+
+	salg->ialg_name = basename;
+	alg = &salg->alg;
+
+	err = -ENAMETOOLONG;
+	if (snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", algname) >=
+							CRYPTO_MAX_ALG_NAME)
+		goto out_free_salg;
+
+	if (snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+						drvname) >= CRYPTO_MAX_ALG_NAME)
+		goto out_free_salg;
+	alg->base.cra_flags = CRYPTO_ALG_ASYNC;
+	alg->base.cra_priority = ialg->base.cra_priority;
+	alg->base.cra_blocksize = ialg->base.cra_blocksize;
+	alg->base.cra_alignmask = ialg->base.cra_alignmask;
+	alg->base.cra_module = ialg->base.cra_module;
+	alg->base.cra_ctxsize = sizeof(struct mcryptd_skcipher_ctx_mb);
+
+	alg->ivsize = ialg->ivsize;
+	alg->chunksize = ialg->chunksize;
+	alg->min_keysize = ialg->min_keysize;
+	alg->max_keysize = ialg->max_keysize;
+
+	alg->init = mcryptd_skcipher_init_mb;
+	alg->exit = mcryptd_skcipher_exit_mb;
+
+	alg->setkey = mcryptd_skcipher_setkey_mb;
+	alg->encrypt = mcryptd_skcipher_encrypt_mb;
+	alg->decrypt = mcryptd_skcipher_decrypt_mb;
+	err = crypto_register_skcipher(alg);
+	if (err)
+		goto out_free_salg;
+
+out_put_tfm:
+	crypto_free_skcipher(tfm);
+	return salg;
+
+out_free_salg:
+	kfree(salg);
+	salg = ERR_PTR(err);
+	goto out_put_tfm;
+}
+EXPORT_SYMBOL_GPL(mcryptd_skcipher_create_compat_mb);
+
+struct mcryptd_skcipher_alg_mb *mcryptd_skcipher_create_mb(const char *algname,
+							const char *basename)
+{
+	char drvname[CRYPTO_MAX_ALG_NAME];
+
+	if (snprintf(drvname, CRYPTO_MAX_ALG_NAME, "mcryptd-%s", basename) >=
+							CRYPTO_MAX_ALG_NAME)
+		return ERR_PTR(-ENAMETOOLONG);
+
+	return mcryptd_skcipher_create_compat_mb(algname, drvname, basename);
+}
+EXPORT_SYMBOL_GPL(mcryptd_skcipher_create_mb);
+
+void mcryptd_skcipher_free_mb(struct mcryptd_skcipher_alg_mb *salg)
+{
+	crypto_unregister_skcipher(&salg->alg);
+	kfree(salg);
+}
+EXPORT_SYMBOL_GPL(mcryptd_skcipher_free_mb);
+
 static int mcryptd_hash_init_tfm(struct crypto_tfm *tfm)
 {
 	struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
@@ -560,6 +997,8 @@ static int mcryptd_create(struct crypto_template *tmpl, struct rtattr **tb)
 		return PTR_ERR(algt);
 
 	switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
+	case CRYPTO_ALG_TYPE_BLKCIPHER:
+		return mcryptd_create_skcipher(tmpl, tb, &mqueue);
 	case CRYPTO_ALG_TYPE_DIGEST:
 		return mcryptd_create_hash(tmpl, tb, &mqueue);
 	break;
@@ -591,6 +1030,42 @@ static void mcryptd_free(struct crypto_instance *inst)
 	.module = THIS_MODULE,
 };
 
+struct mcryptd_skcipher *mcryptd_alloc_skcipher(const char *alg_name,
+							u32 type, u32 mask)
+{
+	char cryptd_alg_name[CRYPTO_MAX_ALG_NAME];
+	struct crypto_skcipher *tfm;
+
+	if (snprintf(cryptd_alg_name, CRYPTO_MAX_ALG_NAME,
+		"mcryptd(%s)", alg_name) >= CRYPTO_MAX_ALG_NAME)
+		return ERR_PTR(-EINVAL);
+	tfm = crypto_alloc_skcipher(cryptd_alg_name, type, mask);
+	if (IS_ERR(tfm))
+		return ERR_CAST(tfm);
+	if (tfm->base.__crt_alg->cra_module != THIS_MODULE) {
+		crypto_free_skcipher(tfm);
+		return ERR_PTR(-EINVAL);
+	}
+
+	return container_of(tfm, struct mcryptd_skcipher, base);
+}
+EXPORT_SYMBOL_GPL(mcryptd_alloc_skcipher);
+
+struct crypto_skcipher *mcryptd_skcipher_child(
+			struct mcryptd_skcipher *tfm)
+{
+	struct mcryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(&tfm->base);
+
+	return ctx->child;
+}
+EXPORT_SYMBOL_GPL(mcryptd_skcipher_child);
+
+void mcryptd_free_skcipher(struct mcryptd_skcipher *tfm)
+{
+	crypto_free_skcipher(&tfm->base);
+}
+EXPORT_SYMBOL_GPL(mcryptd_free_skcipher);
+
 struct mcryptd_ahash *mcryptd_alloc_ahash(const char *alg_name,
 					u32 type, u32 mask)
 {
diff --git a/include/crypto/mcryptd.h b/include/crypto/mcryptd.h
index b67404f..ff0f968 100644
--- a/include/crypto/mcryptd.h
+++ b/include/crypto/mcryptd.h
@@ -14,6 +14,49 @@
 #include <linux/crypto.h>
 #include <linux/kernel.h>
 #include <crypto/hash.h>
+#include <crypto/b128ops.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/internal/hash.h>
+
+static struct mcryptd_alg_state cbc_mb_alg_state;
+
+struct mcryptd_skcipher_ctx_mb {
+	struct mcryptd_skcipher *mcryptd_tfm;
+};
+
+struct mcryptd_skcipher_alg_mb {
+	const char *ialg_name;
+	struct skcipher_alg alg;
+};
+
+struct mskcipherd_instance_ctx {
+	struct crypto_skcipher_spawn spawn;
+	struct mcryptd_queue *queue;
+};
+
+struct mcryptd_skcipher_alg_mb *mcryptd_skcipher_create_mb(const char *algname,
+							const char *basename);
+
+void mcryptd_skcipher_free_mb(struct mcryptd_skcipher_alg_mb *alg);
+
+struct mcryptd_skcipher_alg_mb *mcryptd_skcipher_create_compat_mb(
+							const char *algname,
+							const char *drvname,
+							const char *basename);
+
+struct mcryptd_skcipher_ctx {
+	struct crypto_skcipher *child;
+	struct mcryptd_alg_state *alg_state;
+};
+
+struct mcryptd_skcipher {
+	struct crypto_skcipher base;
+};
+
+struct mcryptd_skcipher *mcryptd_alloc_skcipher(const char *alg_name,
+					      u32 type, u32 mask);
+struct crypto_skcipher *mcryptd_skcipher_child(struct mcryptd_skcipher *tfm);
+void mcryptd_free_skcipher(struct mcryptd_skcipher *tfm);
 
 struct mcryptd_ahash {
 	struct crypto_ahash base;
@@ -64,6 +107,19 @@ struct mcryptd_hash_request_ctx {
 	struct ahash_request areq;
 };
 
+struct mcryptd_skcipher_request_ctx {
+	struct list_head waiter;
+	crypto_completion_t complete;
+	struct mcryptd_tag tag;
+	struct skcipher_walk walk;
+	u8 flag;
+	int nbytes;
+	int error;
+	struct skcipher_request desc;
+	void *job;
+	u128 seq_iv;
+};
+
 struct mcryptd_ahash *mcryptd_alloc_ahash(const char *alg_name,
 					u32 type, u32 mask);
 struct crypto_ahash *mcryptd_ahash_child(struct mcryptd_ahash *tfm);
-- 
1.9.1


* [PATCH V8 2/5] crypto: AES CBC multi-buffer data structures
  2018-01-10  0:09 [PATCH V8 0/5] crypto: AES CBC multibuffer implementation Megha Dey
  2018-01-10  0:09 ` [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support Megha Dey
@ 2018-01-10  0:09 ` Megha Dey
  2018-01-10  0:09 ` [PATCH V8 3/5] crypto: AES CBC multi-buffer scheduler Megha Dey
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 21+ messages in thread
From: Megha Dey @ 2018-01-10  0:09 UTC (permalink / raw)
  To: linux-kernel, linux-crypto, davem, herbert; +Cc: megha.dey, Megha Dey

This patch introduces the data structures and prototypes of the functions
needed for doing AES CBC encryption using multi-buffer. Included are
the structures of the multi-buffer AES CBC job, the job scheduler in C and
the data structure definitions in x86 assembly code.

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h    |  97 +++++++++
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h    | 132 ++++++++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S | 271 +++++++++++++++++++++++++
 arch/x86/crypto/aes-cbc-mb/reg_sizes.S         | 126 ++++++++++++
 4 files changed, 626 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/reg_sizes.S

diff --git a/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
new file mode 100644
index 0000000..024586b
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_ctx.h
@@ -0,0 +1,97 @@
+/*
+ *	Header file for multi buffer AES CBC algorithm manager
+ *	that deals with 8 buffers at a time
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ifndef __AES_CBC_MB_CTX_H
+#define __AES_CBC_MB_CTX_H
+
+
+#include <linux/types.h>
+
+#include "aes_cbc_mb_mgr.h"
+
+#define CBC_ENCRYPT	0x01
+#define CBC_DECRYPT	0x02
+#define CBC_START	0x04
+#define CBC_DONE	0x08
+
+#define CBC_CTX_STS_IDLE       0x00
+#define CBC_CTX_STS_PROCESSING 0x01
+#define CBC_CTX_STS_LAST       0x02
+#define CBC_CTX_STS_COMPLETE   0x04
+
+enum cbc_ctx_error {
+	CBC_CTX_ERROR_NONE               =  0,
+	CBC_CTX_ERROR_INVALID_FLAGS      = -1,
+	CBC_CTX_ERROR_ALREADY_PROCESSING = -2,
+	CBC_CTX_ERROR_ALREADY_COMPLETED  = -3,
+};
+
+#define cbc_ctx_init(ctx, n_bytes, op) \
+	do { \
+		(ctx)->flag = (op) | CBC_START; \
+		(ctx)->nbytes = (n_bytes); \
+	} while (0)
+
+/* AESNI routines to perform cbc decrypt and key expansion */
+
+asmlinkage void aesni_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out,
+			      const u8 *in, unsigned int len, u8 *iv);
+asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
+			     unsigned int key_len);
+
+#endif /* __AES_CBC_MB_CTX_H */
diff --git a/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
new file mode 100644
index 0000000..788180e
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb_mgr.h
@@ -0,0 +1,132 @@
+/*
+ *	Header file for multi buffer AES CBC algorithm manager
+ *	that deals with 8 buffers at a time
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ifndef __AES_CBC_MB_MGR_H
+#define __AES_CBC_MB_MGR_H
+
+
+#include <linux/types.h>
+#include <linux/printk.h>
+#include <crypto/aes.h>
+#include <crypto/b128ops.h>
+
+#define MAX_AES_JOBS		128
+
+enum job_sts {
+	STS_UNKNOWN = 0,
+	STS_BEING_PROCESSED = 1,
+	STS_COMPLETED = 2,
+	STS_INTERNAL_ERROR = 3,
+	STS_ERROR = 4
+};
+
+/* AES CBC multi buffer in order job structure */
+
+struct job_aes_cbc {
+	u8	*plaintext;	/* pointer to plaintext */
+	u8	*ciphertext;	/* pointer to ciphertext */
+	u128	iv;		/* initialization vector */
+	u128	*keys;		/* pointer to keys */
+	u32	len;		/* length in bytes, must be multiple of 16 */
+	enum	job_sts status;	/* status enumeration */
+	void	*user_data;	/* pointer to user data */
+	u32	key_len;	/* key length */
+};
+
+struct aes_cbc_args_x8 {
+	u8	*arg_in[8];	/* array of 8 pointers to in text */
+	u8	*arg_out[8];	/* array of 8 pointers to out text */
+	u128	*arg_keys[8];	/* array of 8 pointers to keys */
+	u128	arg_iv[8] __aligned(16);	/* array of 8 128-bit IVs */
+};
+
+struct aes_cbc_mb_mgr_inorder_x8 {
+	struct aes_cbc_args_x8 args;
+	u16 lens[8] __aligned(16);
+	u64 unused_lanes; /* each nibble is index (0...7) of unused lanes */
+	/* nibble 8 is set to F as a flag */
+	struct job_aes_cbc *job_in_lane[8];
+	/* In-order components */
+	u32 earliest_job; /* byte offset, -1 if none */
+	u32 next_job;      /* byte offset */
+	struct job_aes_cbc jobs[MAX_AES_JOBS];
+};
+
+/* define AES CBC multi buffer manager function proto */
+struct job_aes_cbc *aes_cbc_submit_job_inorder_128x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_submit_job_inorder_192x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_submit_job_inorder_256x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_flush_job_inorder_x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_get_next_job_inorder_x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+struct job_aes_cbc *aes_cbc_get_completed_job_inorder_x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_init_mb_mgr_inorder_x8(
+	struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_submit_job_ooo_x8(struct aes_cbc_mb_mgr_inorder_x8 *state,
+		struct job_aes_cbc *job);
+void aes_cbc_flush_job_ooo_x8(struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_flush_job_ooo_128x8(struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_flush_job_ooo_192x8(struct aes_cbc_mb_mgr_inorder_x8 *state);
+void aes_cbc_flush_job_ooo_256x8(struct aes_cbc_mb_mgr_inorder_x8 *state);
+
+#endif /* __AES_CBC_MB_MGR_H */
diff --git a/arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S b/arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
new file mode 100644
index 0000000..6892893
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/mb_mgr_datastruct.S
@@ -0,0 +1,271 @@
+/*
+ *	Header file: Data structure: AES CBC multibuffer optimization
+ *				(x86_64)
+ *
+ * This is the generic header file defining data structures for
+ * AES multi-buffer CBC implementation. This file is to be included
+ * by the AES CBC multibuffer scheduler.
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+/* This is the AES CBC BY8 implementation */
+#define	NBY	8
+
+#define AES_KEYSIZE_128 16
+#define AES_KEYSIZE_192 24
+#define AES_KEYSIZE_256 32
+
+#ifndef _AES_CBC_MB_MGR_DATASTRUCT_ASM_
+#define _AES_CBC_MB_MGR_DATASTRUCT_ASM_
+
+/* Macros for defining data structures */
+
+/* Usage example */
+
+/*
+ * Alternate "struct-like" syntax:
+ *	STRUCT job_aes2
+ *	RES_Q	.plaintext,	1
+ *	RES_Q	.ciphertext,	1
+ *	RES_DQ	.IV,		1
+ *	RES_B	.nested,	_JOB_AES_SIZE, _JOB_AES_ALIGN
+ *	RES_U	.union,		size1, align1, \
+ *				size2, align2, \
+ *				...
+ *	ENDSTRUCT
+ *	; Following only needed if nesting
+ *	%assign job_aes2_size	_FIELD_OFFSET
+ *	%assign job_aes2_align	_STRUCT_ALIGN
+ *
+ * RES_* macros take a name, a count and an optional alignment.
+ * The count is in terms of the base size of the macro, and the
+ * default alignment is the base size.
+ * The macros are:
+ * Macro    Base size
+ * RES_B	    1
+ * RES_W	    2
+ * RES_D     4
+ * RES_Q     8
+ * RES_DQ   16
+ * RES_Y    32
+ * RES_Z    64
+ *
+ * RES_U defines a union. Its arguments are a name and two or more
+ * pairs of "size, alignment"
+ *
+ * The two assigns are only needed if this structure is being nested
+ * within another. Even if the assigns are not done, one can still use
+ * STRUCT_NAME_size as the size of the structure.
+ *
+ * Note that for nesting, you still need to assign to STRUCT_NAME_size.
+ *
+ * The differences between this and using "struct" directly are that each
+ * type is implicitly aligned to its natural length (although this can be
+ * over-ridden with an explicit third parameter), and that the structure
+ * is padded at the end to its overall alignment.
+ *
+ */
+
+/* START_FIELDS */
+
+.macro START_FIELDS
+_FIELD_OFFSET	=	0
+_STRUCT_ALIGN	=	0
+.endm
+
+/* FIELD name size align */
+.macro FIELD name, size, align
+
+	_FIELD_OFFSET = (_FIELD_OFFSET + (\align) - 1) & (~ ((\align)-1))
+	\name = _FIELD_OFFSET
+	_FIELD_OFFSET = _FIELD_OFFSET + (\size)
+	.if (\align > _STRUCT_ALIGN)
+		_STRUCT_ALIGN = \align
+	.endif
+.endm
+
+/* END_FIELDS */
+
+.macro END_FIELDS
+	_FIELD_OFFSET = \
+	(_FIELD_OFFSET + _STRUCT_ALIGN-1) & (~ (_STRUCT_ALIGN-1))
+.endm
+
+.macro STRUCT p1
+	START_FIELDS
+	.struct \p1
+.endm
+
+.macro ENDSTRUCT
+	tmp = _FIELD_OFFSET
+	END_FIELDS
+	tmp = (_FIELD_OFFSET - %%tmp)
+	.if (tmp > 0)
+		.lcomm	tmp
+.endif
+.endstruc
+.endm
+
+/* RES_int name size align */
+.macro RES_int p1, p2, p3
+	name = \p1
+	size = \p2
+	align = \p3
+
+	_FIELD_OFFSET = (_FIELD_OFFSET + (align) - 1) & (~ ((align)-1))
+	.align align
+	.lcomm name size
+	_FIELD_OFFSET = _FIELD_OFFSET + (size)
+	.if (align > _STRUCT_ALIGN)
+		_STRUCT_ALIGN = align
+	.endif
+.endm
+
+/* macro RES_B name, size [, align] */
+.macro RES_B _name, _size, _align=1
+	RES_int _name, _size, _align
+.endm
+
+/* macro RES_W name, size [, align] */
+.macro RES_W _name, _size, _align=2
+	RES_int _name, 2*(_size), _align
+.endm
+
+/* macro RES_D name, size [, align] */
+.macro RES_D _name, _size, _align=4
+	RES_int _name, 4*(_size), _align
+.endm
+
+/* macro RES_Q name, size [, align] */
+.macro RES_Q _name, _size, _align=8
+	RES_int _name, 8*(_size), _align
+.endm
+
+/* macro RES_DQ name, size [, align] */
+.macro RES_DQ _name, _size, _align=16
+	RES_int _name, 16*(_size), _align
+.endm
+
+/* macro RES_Y name, size [, align] */
+.macro RES_Y _name, _size, _align=32
+	RES_int _name, 32*(_size), _align
+.endm
+
+/*; macro RES_Z name, size [, align] */
+.macro RES_Z _name, _size, _align=64
+	RES_int _name, 64*(_size), _align
+.endm
+#endif /* _AES_CBC_MB_MGR_DATASTRUCT_ASM_ */
+
+/* Define JOB_AES structure */
+
+START_FIELDS	/* JOB_AES */
+/*	name		size	align	*/
+FIELD	_plaintext,	8,	8	/* pointer to plaintext */
+FIELD	_ciphertext,	8,	8	/* pointer to ciphertext */
+FIELD	_IV,		16,	8	/* IV */
+FIELD	_keys,		8,	8	/* pointer to keys */
+FIELD	_len,		4,	4	/* length in bytes */
+FIELD	_status,	4,	4	/* status enumeration */
+FIELD	_user_data,	8,	8	/* pointer to user data */
+FIELD	_key_len,	4,	8	/* key length */
+
+END_FIELDS
+
+_JOB_AES_size	=	_FIELD_OFFSET
+_JOB_AES_align	=	_STRUCT_ALIGN
+
+START_FIELDS	/* AES_ARGS_XX */
+/*	name		size	align	*/
+FIELD	_arg_in,	8*NBY, 8	/* [] of NBY pointers to in text */
+FIELD	_arg_out,	8*NBY, 8	/* [] of NBY pointers to out text */
+FIELD	_arg_keys,	8*NBY, 8	/* [] of NBY pointers to keys */
+FIELD	_arg_IV,	16*NBY, 16	/* [] of NBY 128-bit IV's */
+
+END_FIELDS
+
+_AES_ARGS_SIZE	=	_FIELD_OFFSET
+_AES_ARGS_ALIGN	=	_STRUCT_ALIGN
+
+START_FIELDS	/* MB_MGR_AES_INORDER_XX */
+/*	name		size	align				*/
+FIELD	_args,		_AES_ARGS_SIZE, _AES_ARGS_ALIGN
+FIELD	_lens,		16,	16	/* lengths */
+FIELD	_unused_lanes,	8,	8
+FIELD	_job_in_lane,	8*NBY,	8	/* [] of NBY pointers to jobs */
+
+FIELD	_earliest_job,	4,	4	/* -1 if none */
+FIELD	_next_job,	4,	4
+FIELD	_jobs,		_JOB_AES_size,	_JOB_AES_align	/*> 8, start of array */
+
+END_FIELDS
+
+/* size is not accurate because the arrays are not represented */
+
+_MB_MGR_AES_INORDER_align	=	_STRUCT_ALIGN
+
+_args_in	=	_args+_arg_in
+_args_out	=	_args+_arg_out
+_args_keys	=	_args+_arg_keys
+_args_IV	=	_args+_arg_IV
+
+
+#define STS_UNKNOWN		0
+#define STS_BEING_PROCESSED	1
+#define STS_COMPLETED		2
+#define STS_INTERNAL_ERROR	3
+#define STS_ERROR		4
+
+#define MAX_AES_JOBS		128
diff --git a/arch/x86/crypto/aes-cbc-mb/reg_sizes.S b/arch/x86/crypto/aes-cbc-mb/reg_sizes.S
new file mode 100644
index 0000000..eeff13c
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/reg_sizes.S
@@ -0,0 +1,126 @@
+/*
+ *	Header file AES CBC multibuffer SSE optimization (x86_64)
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+/* define d and w variants for registers */
+
+#define	raxd	eax
+#define raxw	ax
+#define raxb	al
+
+#define	rbxd	ebx
+#define rbxw	bx
+#define rbxb	bl
+
+#define	rcxd	ecx
+#define rcxw	cx
+#define rcxb	cl
+
+#define	rdxd	edx
+#define rdxw	dx
+#define rdxb	dl
+
+#define	rsid	esi
+#define rsiw	si
+#define rsib	sil
+
+#define	rdid	edi
+#define rdiw	di
+#define rdib	dil
+
+#define	rbpd	ebp
+#define rbpw	bp
+#define rbpb	bpl
+
+#define ymm0x	%xmm0
+#define ymm1x	%xmm1
+#define ymm2x	%xmm2
+#define ymm3x	%xmm3
+#define ymm4x	%xmm4
+#define ymm5x	%xmm5
+#define ymm6x	%xmm6
+#define ymm7x	%xmm7
+#define ymm8x	%xmm8
+#define ymm9x	%xmm9
+#define ymm10x	%xmm10
+#define ymm11x	%xmm11
+#define ymm12x	%xmm12
+#define ymm13x	%xmm13
+#define ymm14x	%xmm14
+#define ymm15x	%xmm15
+
+#define CONCAT(a,b)	a##b
+#define DWORD(reg)	CONCAT(reg, d)
+#define WORD(reg)	CONCAT(reg, w)
+#define BYTE(reg)	CONCAT(reg, b)
+
+#define XWORD(reg)	CONCAT(reg,x)
+
+/* common macros */
+
+/* Generate a label to go to */
+.macro LABEL prefix, num
+\prefix\num\():
+.endm
+
+/*
+ * cond_jump ins, name, suffix
+ * ins - conditional jump instruction to execute
+ * name,suffix - concatenate to form the label to go to
+ */
+.macro cond_jump ins, name, suffix
+	\ins	\name\suffix
+.endm
-- 
1.9.1


* [PATCH V8 3/5] crypto: AES CBC multi-buffer scheduler
  2018-01-10  0:09 [PATCH V8 0/5] crypto: AES CBC multibuffer implementation Megha Dey
  2018-01-10  0:09 ` [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support Megha Dey
  2018-01-10  0:09 ` [PATCH V8 2/5] crypto: AES CBC multi-buffer data structures Megha Dey
@ 2018-01-10  0:09 ` Megha Dey
  2018-01-10  0:09 ` [PATCH V8 4/5] crypto: AES CBC by8 encryption Megha Dey
  2018-01-10  0:09 ` [PATCH V8 5/5] crypto: AES CBC multi-buffer glue code Megha Dey
  4 siblings, 0 replies; 21+ messages in thread
From: Megha Dey @ 2018-01-10  0:09 UTC (permalink / raw)
  To: linux-kernel, linux-crypto, davem, herbert; +Cc: megha.dey, Megha Dey

This patch implements an in-order scheduler for encrypting multiple
buffers in parallel, supporting AES CBC encryption with key sizes of
128, 192 and 256 bits. It uses 8 data lanes by taking advantage of the
SIMD instructions with XMM registers.

The multibuffer manager and scheduler are mostly written in assembly and
the initialization support is written in C. The AES CBC multibuffer
crypto driver interfaces with the multibuffer manager and scheduler
to support AES CBC encryption in parallel. The scheduler supports
job submission, job flushing and job retrieval after completion.

The basic flow of usage of the CBC multibuffer scheduler is as follows:

- The caller allocates an aes_cbc_mb_mgr_inorder_x8 object
and initializes it once by calling aes_cbc_init_mb_mgr_inorder_x8().

- The aes_cbc_mb_mgr_inorder_x8 structure has an array of JOB_AES
objects. Allocation and scheduling of JOB_AES objects are managed
by the multibuffer scheduler support routines. The caller allocates
a JOB_AES using aes_cbc_get_next_job_inorder_x8().

- The returned JOB_AES must be filled in with parameters for CBC
encryption (eg: plaintext buffer, ciphertext buffer, key, iv, etc) and
submitted to the manager object using aes_cbc_submit_job_inorder_xx().

- If the oldest JOB_AES is completed during a call to
aes_cbc_submit_job_inorder_x8(), it is returned. Otherwise,
NULL is returned.

- A call to aes_cbc_flush_job_inorder_x8() always returns the
oldest job, unless the multibuffer manager is empty of jobs.

- A call to aes_cbc_get_completed_job_inorder_x8() returns
a completed job. This routine is useful to process completed
jobs instead of waiting for the flusher to engage.

- When a job is returned from submit or flush, the caller extracts
the useful data and implicitly returns the job to the multibuffer
manager on the next call to aes_cbc_get_next_job_xx().

Jobs are always returned from submit or flush routines in the order they
were submitted (hence "inorder"). A job allocated using
aes_cbc_get_next_job_inorder_x8() must be filled in and submitted before
another call. A job returned by aes_cbc_submit_job_inorder_x8() or
aes_cbc_flush_job_inorder_x8() is 'deallocated' upon the next call to
get a job structure. Calls to get_next_job() cannot fail. If all jobs
are allocated after a call to get_next_job(), the subsequent call to
submit always returns the oldest job in a completed state (see the
sketch below).
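
A minimal C sketch of the flow above (illustration only, not part of
this patch): the manager is assumed to have been initialized once with
aes_cbc_init_mb_mgr_inorder_x8(), buffer lengths are multiples of the
AES block size, keys are AES-128 and error handling is omitted.

#include "aes_cbc_mb_mgr.h"

static void example_encrypt(struct aes_cbc_mb_mgr_inorder_x8 *mgr,
			    u8 *src[], u8 *dst[], u32 len[], u128 iv[],
			    u128 *expanded_keys, int nr_bufs)
{
	struct job_aes_cbc *job;
	int i;

	for (i = 0; i < nr_bufs; i++) {
		/* "allocate" the next job slot; this call cannot fail */
		job = aes_cbc_get_next_job_inorder_x8(mgr);

		job->plaintext  = src[i];
		job->ciphertext = dst[i];
		job->len        = len[i];	/* multiple of 16 bytes */
		job->keys       = expanded_keys;
		job->key_len    = AES_KEYSIZE_128;
		job->iv         = iv[i];

		/* returns the oldest job once it has completed, else NULL */
		job = aes_cbc_submit_job_inorder_128x8(mgr);
		if (job)
			; /* consume job->ciphertext / job->user_data here */
	}

	/* no more data expected: drain whatever is left in the lanes */
	while ((job = aes_cbc_flush_job_inorder_x8(mgr)))
		; /* consume completed job */
}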

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c       | 146 ++++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S | 223 +++++++++++
 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S     | 417 +++++++++++++++++++++
 3 files changed, 786 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
 create mode 100644 arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S

diff --git a/arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c b/arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
new file mode 100644
index 0000000..2a2ce6c
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_mb_mgr_init.c
@@ -0,0 +1,146 @@
+/*
+ * Initialization code for multi buffer AES CBC algorithm
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#include "aes_cbc_mb_mgr.h"
+
+void aes_cbc_init_mb_mgr_inorder_x8(struct aes_cbc_mb_mgr_inorder_x8 *state)
+{
+	/* Init "out of order" components */
+	state->unused_lanes = 0xF76543210;
+	state->job_in_lane[0] = NULL;
+	state->job_in_lane[1] = NULL;
+	state->job_in_lane[2] = NULL;
+	state->job_in_lane[3] = NULL;
+	state->job_in_lane[4] = NULL;
+	state->job_in_lane[5] = NULL;
+	state->job_in_lane[6] = NULL;
+	state->job_in_lane[7] = NULL;
+
+	/* Init "in order" components */
+	state->next_job = 0;
+	state->earliest_job = -1;
+
+}
+
+#define JOBS(offset) ((struct job_aes_cbc *)(((u64)state->jobs)+offset))
+
+struct job_aes_cbc *
+aes_cbc_get_next_job_inorder_x8(struct aes_cbc_mb_mgr_inorder_x8 *state)
+{
+	return JOBS(state->next_job);
+}
+
+struct job_aes_cbc *
+aes_cbc_flush_job_inorder_x8(struct aes_cbc_mb_mgr_inorder_x8 *state)
+{
+	struct job_aes_cbc *job;
+
+	/* checking earliest_job < 0 fails and the code walks over bogus */
+	if (state->earliest_job == -1)
+		return NULL; /* empty */
+
+	job = JOBS(state->earliest_job);
+	while (job->status != STS_COMPLETED) {
+		switch (job->key_len) {
+		case AES_KEYSIZE_128:
+			aes_cbc_flush_job_ooo_128x8(state);
+			break;
+		case AES_KEYSIZE_192:
+			aes_cbc_flush_job_ooo_192x8(state);
+			break;
+		case AES_KEYSIZE_256:
+			aes_cbc_flush_job_ooo_256x8(state);
+			break;
+		default:
+			break;
+		}
+	}
+
+	/* advance earliest job */
+	state->earliest_job += sizeof(struct job_aes_cbc);
+	if (state->earliest_job == MAX_AES_JOBS * sizeof(struct job_aes_cbc))
+		state->earliest_job = 0;
+
+	if (state->earliest_job == state->next_job)
+		state->earliest_job = -1;
+
+	return job;
+}
+
+struct job_aes_cbc *
+aes_cbc_get_completed_job_inorder_x8(struct aes_cbc_mb_mgr_inorder_x8 *state)
+{
+	struct job_aes_cbc *job;
+
+	if (state->earliest_job == -1)
+		return NULL; /* empty */
+
+	job = JOBS(state->earliest_job);
+	if (job->status != STS_COMPLETED)
+		return NULL;
+
+	state->earliest_job += sizeof(struct job_aes_cbc);
+
+	if (state->earliest_job == MAX_AES_JOBS * sizeof(struct job_aes_cbc))
+		state->earliest_job = 0;
+	if (state->earliest_job == state->next_job)
+		state->earliest_job = -1;
+
+	/* we have a completed job */
+	return job;
+}
diff --git a/arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S b/arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
new file mode 100644
index 0000000..5108875
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/mb_mgr_inorder_x8_asm.S
@@ -0,0 +1,223 @@
+/*
+ *	AES CBC by8 multibuffer inorder scheduler optimization (x86_64)
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#include <linux/linkage.h>
+#include "mb_mgr_datastruct.S"
+#include "reg_sizes.S"
+
+#define JUMP
+
+#define arg1	%rdi
+#define arg2	%rsi
+#define state	arg1
+
+/* virtual registers used by aes_cbc_submit_job_inorder_x8 */
+#define next_job	%rdx
+#define earliest_job	%rcx
+#define zero		%r8
+#define returned_job	%rax	/* register that returns a value from func */
+
+.extern aes_cbc_submit_job_ooo_128x8
+.extern aes_cbc_submit_job_ooo_192x8
+.extern aes_cbc_submit_job_ooo_256x8
+.extern aes_cbc_flush_job_ooo_128x8
+.extern aes_cbc_flush_job_ooo_192x8
+.extern aes_cbc_flush_job_ooo_256x8
+.extern aes_cbc_flush_job_ooo_x8
+
+/*
+ * struct job_aes_cbc *aes_cbc_submit_job_inorder_x8(
+ *		struct aes_cbc_mb_mgr_inorder_x8 *state)
+ */
+
+.macro aes_cbc_submit_job_inorder_x8 key_len
+
+	sub	$8, %rsp	/* align stack for next calls */
+
+	mov	_next_job(state), DWORD(next_job)
+	lea	_jobs(state, next_job), arg2
+
+	.if \key_len == AES_KEYSIZE_128
+		call	aes_cbc_submit_job_ooo_128x8
+	.elseif \key_len == AES_KEYSIZE_192
+		call	aes_cbc_submit_job_ooo_192x8
+	.elseif \key_len == AES_KEYSIZE_256
+		call	aes_cbc_submit_job_ooo_256x8
+	.endif
+
+	mov	_earliest_job(state), DWORD(earliest_job)
+	cmp	$0, DWORD(earliest_job)
+	jl	.Lstate_was_empty\key_len
+
+	/* we have a valid earliest_job */
+
+	/* advance next_job */
+	mov	_next_job(state), DWORD(next_job)
+	add	$_JOB_AES_size, next_job
+#ifdef JUMP
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), next_job
+	jne	.Lskip1\key_len
+	xor	next_job, next_job
+.Lskip1\key_len:
+#else
+	xor	zero,zero
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), next_job
+	cmove	zero, next_job
+#endif
+	mov	DWORD(next_job), _next_job(state)
+
+	lea	_jobs(state, earliest_job), returned_job
+	cmp	next_job, earliest_job
+	je	.Lfull\key_len
+
+	/* not full */
+	cmpl	$STS_COMPLETED, _status(returned_job)
+	jne	.Lreturn_null\key_len
+
+	/* advance earliest_job */
+	add	$_JOB_AES_size, earliest_job
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), earliest_job
+#ifdef JUMP
+	jne	.Lskip2\key_len
+	xor	earliest_job, earliest_job
+.Lskip2\key_len:
+#else
+	cmove	zero, earliest_job
+#endif
+
+	add	$8, %rsp
+	mov	DWORD(earliest_job), _earliest_job(state)
+	ret
+
+.Lreturn_null\key_len:
+	add	$8, %rsp
+	xor	returned_job, returned_job
+	ret
+
+.Lfull\key_len:
+	cmpl	$STS_COMPLETED, _status(returned_job)
+	je	.Lcompleted\key_len
+	mov	earliest_job, (%rsp)
+.Lflush_loop\key_len:
+	.if \key_len == AES_KEYSIZE_128
+		call	aes_cbc_flush_job_ooo_128x8
+	.elseif \key_len == AES_KEYSIZE_192
+		call	aes_cbc_flush_job_ooo_192x8
+	.elseif \key_len == AES_KEYSIZE_256
+		call	aes_cbc_flush_job_ooo_256x8
+	.endif
+	/* state is still valid */
+	mov	(%rsp), earliest_job
+	cmpl	$STS_COMPLETED, _status(returned_job)
+	jne	.Lflush_loop\key_len
+	xor	zero,zero
+.Lcompleted\key_len:
+	/* advance earliest_job */
+	add	$_JOB_AES_size, earliest_job
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), earliest_job
+#ifdef JUMP
+	jne	.Lskip3\key_len
+	xor	earliest_job, earliest_job
+.Lskip3\key_len:
+#else
+	cmove	zero, earliest_job
+#endif
+
+	add	$8, %rsp
+	mov	DWORD(earliest_job), _earliest_job(state)
+	ret
+
+.Lstate_was_empty\key_len:
+	mov	_next_job(state), DWORD(next_job)
+	mov	DWORD(next_job), _earliest_job(state)
+
+	/* advance next_job */
+	add	$_JOB_AES_size, next_job
+#ifdef JUMP
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), next_job
+	jne	.Lskip4\key_len
+	xor	next_job, next_job
+.Lskip4\key_len:
+#else
+	xor	zero,zero
+	cmp	$(MAX_AES_JOBS * _JOB_AES_size), next_job
+	cmove	zero, next_job
+#endif
+	mov	DWORD(next_job), _next_job(state)
+
+	add	$8, %rsp
+	xor	returned_job, returned_job
+	ret
+.endm
+
+ENTRY(aes_cbc_submit_job_inorder_128x8)
+
+	aes_cbc_submit_job_inorder_x8 AES_KEYSIZE_128
+
+ENDPROC(aes_cbc_submit_job_inorder_128x8)
+
+ENTRY(aes_cbc_submit_job_inorder_192x8)
+
+	aes_cbc_submit_job_inorder_x8 AES_KEYSIZE_192
+
+ENDPROC(aes_cbc_submit_job_inorder_192x8)
+
+ENTRY(aes_cbc_submit_job_inorder_256x8)
+
+	aes_cbc_submit_job_inorder_x8 AES_KEYSIZE_256
+
+ENDPROC(aes_cbc_submit_job_inorder_256x8)
diff --git a/arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S b/arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S
new file mode 100644
index 0000000..351f30f
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/mb_mgr_ooo_x8_asm.S
@@ -0,0 +1,417 @@
+/*
+ *	AES CBC by8 multibuffer out-of-order scheduler optimization (x86_64)
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#include <linux/linkage.h>
+#include "mb_mgr_datastruct.S"
+#include "reg_sizes.S"
+
+#define arg1	%rdi
+#define arg2	%rsi
+#define state	arg1
+#define job	arg2
+
+/* virtual registers used by aes_cbc_submit_job_ooo_x8 */
+#define unused_lanes	%rax
+#define lane		%rdx
+#define	tmp1		%rcx
+#define	tmp2		%r8
+#define	tmp3		%r9
+#define len		tmp1
+
+#define good_lane	lane
+
+/* virtual registers used by aes_cbc_submit_job_inorder_x8 */
+#define new_job		%rdx
+#define earliest_job	%rcx
+#define minus1		%r8
+#define returned_job	%rax	/* register that returns a value from func */
+
+.section .rodata
+.align 16
+len_masks:
+	.octa 0x0000000000000000000000000000FFFF
+	.octa 0x000000000000000000000000FFFF0000
+	.octa 0x00000000000000000000FFFF00000000
+	.octa 0x0000000000000000FFFF000000000000
+	.octa 0x000000000000FFFF0000000000000000
+	.octa 0x00000000FFFF00000000000000000000
+	.octa 0x0000FFFF000000000000000000000000
+	.octa 0xFFFF0000000000000000000000000000
+
+dupw:
+	.octa 0x01000100010001000100010001000100
+
+one:	.quad  1
+two:	.quad  2
+three:	.quad  3
+four:	.quad  4
+five:	.quad  5
+six:	.quad  6
+seven:	.quad  7
+
+.text
+
+.extern aes_cbc_enc_128_x8
+.extern aes_cbc_enc_192_x8
+.extern aes_cbc_enc_256_x8
+
+/* arg1/state remains intact after call */
+/*
+ * void aes_cbc_submit_job_ooo_128x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state,
+ *	struct job_aes_cbc *job);
+ * void aes_cbc_submit_job_ooo_192x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state,
+ *	struct job_aes_cbc *job);
+ * void aes_cbc_submit_job_ooo_256x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state,
+ *	struct job_aes_cbc *job);
+ */
+
+.global aes_cbc_submit_job_ooo_128x8
+.global aes_cbc_submit_job_ooo_192x8
+.global aes_cbc_submit_job_ooo_256x8
+
+
+.macro aes_cbc_submit_job_ooo_x8 key_len
+
+	mov	_unused_lanes(state), unused_lanes
+	mov	unused_lanes, lane
+	and	$0xF, lane
+	shr	$4, unused_lanes
+
+	/* state->job_in_lane[lane] = job; */
+	mov	job, _job_in_lane(state, lane, 8)
+
+	/* state->lens[lane] = job->len / AES_BLOCK_SIZE; */
+	mov	_len(job), DWORD(len)
+	shr	$4, len
+	mov	WORD(len), _lens(state, lane, 2)
+
+	mov	_plaintext(job), tmp1
+	mov	_ciphertext(job), tmp2
+	mov	_keys(job), tmp3
+	movdqu	_IV(job), %xmm2
+	mov	tmp1, _args_in(state, lane, 8)
+	mov	tmp2, _args_out(state, lane, 8)
+	mov	tmp3, _args_keys(state, lane, 8)
+	shl	$4, lane
+	movdqa	%xmm2, _args_IV(state, lane)
+
+	movl	$STS_BEING_PROCESSED, _status(job)
+
+	mov	unused_lanes, _unused_lanes(state)
+	cmp	$0xF, unused_lanes
+	jne	.Lnot_enough_jobs\key_len
+
+	movdqa	_lens(state), %xmm0
+	phminposuw	%xmm0, %xmm1
+	/*
+	 * xmm1{15:0} = min value
+	 * xmm1{18:16} = index of min
+	 */
+
+	/*
+	 * arg1 = rdi = state = args (and it is not clobbered by routine)
+	 * arg2 = rsi = min len
+	 */
+	movd	%xmm1, arg2
+	and	$0xFFFF, arg2
+
+	/* subtract min len from lengths */
+	pshufb	dupw(%rip), %xmm1	/* duplicate words across all lanes */
+	psubw	%xmm1, %xmm0
+	movdqa	%xmm0, _lens(state)
+
+	/* need to align stack */
+	sub	$8, %rsp
+	.if \key_len == AES_KEYSIZE_128
+		call aes_cbc_enc_128_x8
+	.elseif \key_len == AES_KEYSIZE_192
+		call aes_cbc_enc_192_x8
+	.elseif \key_len == AES_KEYSIZE_256
+		call aes_cbc_enc_256_x8
+	.endif
+	add	$8, %rsp
+	/* arg1/state is still intact */
+
+	/* process completed jobs */
+	movdqa	_lens(state), %xmm0
+	phminposuw	%xmm0, %xmm1
+	/*
+	 * xmm1{15:0} = min value
+	 * xmm1{18:16} = index of min
+	 */
+
+	/*
+	 * at this point at least one len should be 0
+	 * so min value should be 0
+	 * and the index is the index of that lane [0...7]
+	 */
+	lea	len_masks(%rip), tmp3
+	mov	_unused_lanes(state), unused_lanes
+	movd	%xmm1, lane
+.Lcontinue_loop\key_len:
+	/* assert((lane & 0xFFFF) == 0) */
+	shr	$16, lane	/* lane is now index */
+	mov	_job_in_lane(state, lane, 8), job
+	movl	$STS_COMPLETED, _status(job)
+	movq	$0, _job_in_lane(state, lane, 8)
+	shl	$4, unused_lanes
+	or	lane, unused_lanes
+	shl	$4, lane
+	movdqa	_args_IV(state,lane), %xmm2
+	movdqu	%xmm2, _IV(job)
+	por	(tmp3, lane), %xmm0
+
+	phminposuw	%xmm0, %xmm1
+	movd	%xmm1, lane
+	/* see if bits 15:0 are zero */
+	test	$0xFFFF, lane
+	jz	.Lcontinue_loop\key_len
+
+	/* done; save registers */
+	mov	unused_lanes, _unused_lanes(state)
+	/* don't need to save xmm0/lens */
+
+.Lnot_enough_jobs\key_len:
+	ret
+.endm
+
+ENTRY(aes_cbc_submit_job_ooo_128x8)
+
+	aes_cbc_submit_job_ooo_x8 AES_KEYSIZE_128
+
+ENDPROC(aes_cbc_submit_job_ooo_128x8)
+
+ENTRY(aes_cbc_submit_job_ooo_192x8)
+
+	aes_cbc_submit_job_ooo_x8 AES_KEYSIZE_192
+
+ENDPROC(aes_cbc_submit_job_ooo_192x8)
+
+ENTRY(aes_cbc_submit_job_ooo_256x8)
+
+	aes_cbc_submit_job_ooo_x8 AES_KEYSIZE_256
+
+ENDPROC(aes_cbc_submit_job_ooo_256x8)
+
+
+/* arg1/state remains intact after call */
+/*
+ * void aes_cbc_flush_job_ooo_128x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state)
+ * void aes_cbc_flush_job_ooo_192x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state)
+ * void aes_cbc_flush_job_ooo_256x8(
+ *	struct aes_cbc_mb_mgr_aes_inorder_x8 *state)
+ */
+.global aes_cbc_flush_job_ooo_128x8
+.global aes_cbc_flush_job_ooo_192x8
+.global aes_cbc_flush_job_ooo_256x8
+
+.macro aes_cbc_flush_job_ooo_x8 key_len
+
+	mov	_unused_lanes(state), unused_lanes
+
+	/* if bit (32+3) is set, then all lanes are empty */
+	bt	$(32+3), unused_lanes
+	jc	.Lreturn\key_len
+
+	/* find a lane with a non-null job */
+	xor	good_lane, good_lane
+	cmpq	$0, (_job_in_lane+8*1)(state)
+	cmovne	one(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*2)(state)
+	cmovne	two(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*3)(state)
+	cmovne	three(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*4)(state)
+	cmovne	four(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*5)(state)
+	cmovne	five(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*6)(state)
+	cmovne	six(%rip), good_lane
+	cmpq	$0, (_job_in_lane+8*7)(state)
+	cmovne	seven(%rip), good_lane
+
+	/* copy good_lane to empty lanes */
+	mov	_args_in(state, good_lane, 8), tmp1
+	mov	_args_out(state, good_lane, 8), tmp2
+	mov	_args_keys(state, good_lane, 8), tmp3
+	shl	$4, good_lane
+	movdqa	_args_IV(state, good_lane), %xmm2
+
+	movdqa	_lens(state), %xmm0
+
+	I = 0
+
+.altmacro
+	.rept 8
+		cmpq	$0, (_job_in_lane + 8*I)(state)
+		.if \key_len == AES_KEYSIZE_128
+			cond_jump jne, .Lskip128_,%I
+		.elseif \key_len == AES_KEYSIZE_192
+			cond_jump jne, .Lskip192_,%I
+		.elseif \key_len == AES_KEYSIZE_256
+			cond_jump jne, .Lskip256_,%I
+		.endif
+		mov	tmp1, (_args_in + 8*I)(state)
+		mov	tmp2, (_args_out + 8*I)(state)
+		mov	tmp3, (_args_keys + 8*I)(state)
+		movdqa	%xmm2, (_args_IV + 16*I)(state)
+		por	(len_masks + 16*I)(%rip), %xmm0
+		.if \key_len == AES_KEYSIZE_128
+			LABEL .Lskip128_,%I
+		.elseif \key_len == AES_KEYSIZE_192
+			LABEL .Lskip192_,%I
+		.elseif \key_len == AES_KEYSIZE_256
+			LABEL .Lskip256_,%I
+		.endif
+		I = (I+1)
+	.endr
+.noaltmacro
+
+	phminposuw	%xmm0, %xmm1
+	/*
+	 * xmm1{15:0} = min value
+	 * xmm1{18:16} = index of min
+	 */
+
+	/*
+	 * arg1 = rdi = state = args (and it is not clobbered by routine)
+	 * arg2 = rsi = min len
+	 */
+	movd	%xmm1, arg2
+	and	$0xFFFF, arg2
+
+	/* subtract min len from lengths */
+	pshufb	dupw(%rip), %xmm1	/* duplicate words across all lanes */
+	psubw	%xmm1, %xmm0
+	movdqa	%xmm0, _lens(state)
+
+	/* need to align stack */
+	sub	$8, %rsp
+	.if \key_len == AES_KEYSIZE_128
+		call aes_cbc_enc_128_x8
+	.elseif \key_len == AES_KEYSIZE_192
+		call aes_cbc_enc_192_x8
+	.elseif \key_len == AES_KEYSIZE_256
+		call aes_cbc_enc_256_x8
+	.endif
+	add	$8, %rsp
+	/* arg1/state is still intact */
+
+	/* process completed jobs */
+	movdqa	_lens(state), %xmm0
+	phminposuw	%xmm0, %xmm1
+	/*
+	 * xmm1{15:0} = min value
+	 * xmm1{18:16} = index of min
+	 */
+
+	/*
+	 * at this point at least one len should be 0, so min value should be 0
+	 * and the index is the index of that lane [0...7]
+	 */
+	lea	len_masks(%rip), tmp3
+	mov	_unused_lanes(state), unused_lanes
+	movd	%xmm1, lane
+.Lcontinue_loop2\key_len:
+	/* assert((lane & 0xFFFF) == 0) */
+	shr	$16, lane	/* lane is now index */
+	mov	_job_in_lane(state, lane, 8), job
+	movl	$STS_COMPLETED, _status(job)
+	movq	$0, _job_in_lane(state, lane, 8)
+	shl	$4, unused_lanes
+	or	lane, unused_lanes
+	shl	$4, lane
+	movdqa	_args_IV(state, lane), %xmm2
+	movdqu	%xmm2, _IV(job)
+	por	(tmp3, lane), %xmm0
+
+	phminposuw	%xmm0, %xmm1
+	movd	%xmm1, lane
+	/* see if bits 15:0 are zero */
+	test	$0xFFFF, lane
+	jz	.Lcontinue_loop2\key_len
+
+	/* done; save registers */
+	mov	unused_lanes, _unused_lanes(state)
+	/* don't need to save xmm0/lens */
+.Lreturn\key_len:
+	ret
+.endm
+
+ENTRY(aes_cbc_flush_job_ooo_128x8)
+
+	aes_cbc_flush_job_ooo_x8 AES_KEYSIZE_128
+
+ENDPROC(aes_cbc_flush_job_ooo_128x8)
+
+ENTRY(aes_cbc_flush_job_ooo_192x8)
+
+	aes_cbc_flush_job_ooo_x8 AES_KEYSIZE_192
+
+ENDPROC(aes_cbc_flush_job_ooo_192x8)
+
+ENTRY(aes_cbc_flush_job_ooo_256x8)
+
+	aes_cbc_flush_job_ooo_x8 AES_KEYSIZE_256
+
+ENDPROC(aes_cbc_flush_job_ooo_256x8)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH V8 4/5] crypto: AES CBC by8 encryption
  2018-01-10  0:09 [PATCH V8 0/5] crypto: AES CBC multibuffer implementation Megha Dey
                   ` (2 preceding siblings ...)
  2018-01-10  0:09 ` [PATCH V8 3/5] crypto: AES CBC multi-buffer scheduler Megha Dey
@ 2018-01-10  0:09 ` Megha Dey
  2018-01-10  0:09 ` [PATCH V8 5/5] crypto: AES CBC multi-buffer glue code Megha Dey
  4 siblings, 0 replies; 21+ messages in thread
From: Megha Dey @ 2018-01-10  0:09 UTC (permalink / raw)
  To: linux-kernel, linux-crypto, davem, herbert; +Cc: megha.dey, Megha Dey

This patch introduces the assembly routine that performs by8 AES CBC
encryption in support of the AES CBC multi-buffer implementation.

It encrypts 8 data streams of the same key size simultaneously.
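
A rough sketch of the per-lane argument block consumed by the by8
routines, reconstructed from the IN/OUT/KEYS/IV offsets used in the
assembly (field names below are illustrative only; the authoritative
layout is the aes_cbc_args_x8 structure added by the data structures
patch of this series):

	/* illustrative sketch, not the actual definition */
	struct aes_cbc_args_x8_sketch {
		const u8 *in[8];    /* plaintext ptrs,   IN_OFFSET   = 0    */
		u8 *out[8];         /* ciphertext ptrs,  OUT_OFFSET  = 8*8  */
		const u8 *keys[8];  /* expanded keys,    KEYS_OFFSET = 16*8 */
		u8 iv[8][16];       /* running IVs,      IV_OFFSET   = 24*8 */
	};

	/* all 8 lanes advance by the same number of 16-byte blocks */
	aes_cbc_enc_128_x8(args, len_in_blocks);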

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S | 775 ++++++++++++++++++++++++++++
 1 file changed, 775 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S

diff --git a/arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S b/arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S
new file mode 100644
index 0000000..2130574
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_cbc_enc_x8.S
@@ -0,0 +1,775 @@
+/*
+ *	AES CBC by8 multibuffer optimization (x86_64)
+ *	This file implements 128/192/256 bit AES CBC encryption
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#include <linux/linkage.h>
+
+/* stack size needs to be an odd multiple of 8 for alignment */
+
+#define AES_KEYSIZE_128        16
+#define AES_KEYSIZE_192        24
+#define AES_KEYSIZE_256        32
+
+#define XMM_SAVE_SIZE	16*10
+#define GPR_SAVE_SIZE	8*9
+#define STACK_SIZE	(XMM_SAVE_SIZE + GPR_SAVE_SIZE)
+
+#define GPR_SAVE_REG	%rsp
+#define GPR_SAVE_AREA	%rsp + XMM_SAVE_SIZE
+#define LEN_AREA_OFFSET	XMM_SAVE_SIZE + 8*8
+#define LEN_AREA_REG	%rsp
+#define LEN_AREA	%rsp + XMM_SAVE_SIZE + 8*8
+
+#define IN_OFFSET	0
+#define OUT_OFFSET	8*8
+#define KEYS_OFFSET	16*8
+#define IV_OFFSET	24*8
+
+
+#define IDX	%rax
+#define TMP	%rbx
+#define ARG	%rdi
+#define LEN	%rsi
+
+#define KEYS0	%r14
+#define KEYS1	%r15
+#define KEYS2	%rbp
+#define KEYS3	%rdx
+#define KEYS4	%rcx
+#define KEYS5	%r8
+#define KEYS6	%r9
+#define KEYS7	%r10
+
+#define IN0	%r11
+#define IN2	%r12
+#define IN4	%r13
+#define IN6	LEN
+
+#define XDATA0	%xmm0
+#define XDATA1	%xmm1
+#define XDATA2	%xmm2
+#define XDATA3	%xmm3
+#define XDATA4	%xmm4
+#define XDATA5	%xmm5
+#define XDATA6	%xmm6
+#define XDATA7	%xmm7
+
+#define XKEY0_3	%xmm8
+#define XKEY1_4	%xmm9
+#define XKEY2_5	%xmm10
+#define XKEY3_6	%xmm11
+#define XKEY4_7	%xmm12
+#define XKEY5_8	%xmm13
+#define XKEY6_9	%xmm14
+#define XTMP	%xmm15
+
+#define	MOVDQ movdqu /* assume buffers not aligned */
+#define CONCAT(a, b)	a##b
+#define INPUT_REG_SUFX	1	/* IN */
+#define XDATA_REG_SUFX	2	/* XDAT */
+#define KEY_REG_SUFX	3	/* KEY */
+#define XMM_REG_SUFX	4	/* XMM */
+
+/*
+ * To avoid positional parameter errors while assembling, all three
+ * registers must be passed to the pxor2 macro below.
+ */
+.text
+
+.macro pxor2 x, y, z
+	MOVDQ	(\x,\y), XTMP
+	pxor	XTMP, \z
+.endm
+
+.macro inreg n
+	.if (\n == 0)
+		reg_IN = IN0
+	.elseif (\n == 2)
+		reg_IN = IN2
+	.elseif (\n == 4)
+		reg_IN = IN4
+	.elseif (\n == 6)
+		reg_IN = IN6
+	.else
+		.error "inreg: incorrect register number"
+	.endif
+.endm
+.macro xdatareg n
+	.if (\n == 0)
+		reg_XDAT = XDATA0
+	.elseif (\n == 1)
+		reg_XDAT = XDATA1
+	.elseif (\n == 2)
+		reg_XDAT = XDATA2
+	.elseif (\n == 3)
+		reg_XDAT = XDATA3
+	.elseif (\n == 4)
+		reg_XDAT = XDATA4
+	.elseif (\n == 5)
+		reg_XDAT = XDATA5
+	.elseif (\n == 6)
+		reg_XDAT = XDATA6
+	.elseif (\n == 7)
+		reg_XDAT = XDATA7
+	.endif
+.endm
+.macro xkeyreg n
+	.if (\n == 0)
+		reg_KEY = KEYS0
+	.elseif (\n == 1)
+		reg_KEY = KEYS1
+	.elseif (\n == 2)
+		reg_KEY = KEYS2
+	.elseif (\n == 3)
+		reg_KEY = KEYS3
+	.elseif (\n == 4)
+		reg_KEY = KEYS4
+	.elseif (\n == 5)
+		reg_KEY = KEYS5
+	.elseif (\n == 6)
+		reg_KEY = KEYS6
+	.elseif (\n == 7)
+		reg_KEY = KEYS7
+	.endif
+.endm
+.macro xmmreg n
+	.if (\n >= 0) && (\n < 16)
+		/* Valid register number */
+		reg_XMM = %xmm\n
+	.else
+		.error "xmmreg: incorrect register number"
+	.endif
+.endm
+
+/*
+ * suffix - register suffix
+ * set up the register name using the loop index I
+ */
+.macro define_reg suffix
+.altmacro
+	.if (\suffix == INPUT_REG_SUFX)
+		inreg  %I
+	.elseif (\suffix == XDATA_REG_SUFX)
+		xdatareg  %I
+	.elseif (\suffix == KEY_REG_SUFX)
+		xkeyreg  %I
+	.elseif (\suffix == XMM_REG_SUFX)
+		xmmreg  %I
+	.else
+		.error "define_reg: unknown register suffix"
+	.endif
+.noaltmacro
+.endm
+
+/*
+ * aes_cbc_enc_x8 key_len
+ * macro to encode data for 128bit, 192bit and 256bit keys
+ */
+
+.macro aes_cbc_enc_x8 key_len
+
+	sub	$STACK_SIZE, %rsp
+
+	mov	%rbx, (XMM_SAVE_SIZE + 8*0)(GPR_SAVE_REG)
+	mov	%rbp, (XMM_SAVE_SIZE + 8*3)(GPR_SAVE_REG)
+	mov	%r12, (XMM_SAVE_SIZE + 8*4)(GPR_SAVE_REG)
+	mov	%r13, (XMM_SAVE_SIZE + 8*5)(GPR_SAVE_REG)
+	mov	%r14, (XMM_SAVE_SIZE + 8*6)(GPR_SAVE_REG)
+	mov	%r15, (XMM_SAVE_SIZE + 8*7)(GPR_SAVE_REG)
+
+	mov	$16, IDX
+	shl	$4, LEN	/* LEN = LEN * 16 */
+	/* LEN is now in terms of bytes */
+	mov	LEN, (LEN_AREA_OFFSET)(LEN_AREA_REG)
+
+	/* Run through storing arguments in IN0,2,4,6 */
+	I = 0
+	.rept 4
+		define_reg	INPUT_REG_SUFX
+		mov	(IN_OFFSET + 8*I)(ARG), reg_IN
+		I = (I + 2)
+	.endr
+
+	/* load 1 .. 8 blocks of plain text into XDATA0..XDATA7 */
+	I = 0
+	.rept 4
+		mov		(IN_OFFSET + 8*(I+1))(ARG), TMP
+		define_reg	INPUT_REG_SUFX
+		define_reg	XDATA_REG_SUFX
+		/* load first block of plain text */
+		MOVDQ		(reg_IN), reg_XDAT
+		I = (I + 1)
+		define_reg	XDATA_REG_SUFX
+		/* load next block of plain text */
+		MOVDQ		(TMP), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* Run through XDATA0 .. XDATA7 to perform plaintext XOR IV */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		pxor	(IV_OFFSET + 16*I)(ARG), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	I = 0
+	.rept 8
+		define_reg	KEY_REG_SUFX
+		mov	(KEYS_OFFSET + 8*I)(ARG), reg_KEY
+		I = (I + 1)
+	.endr
+
+	I = 0
+	/* 0..7 ARK */
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		pxor	16*0(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	I = 0
+		/* 1. ENC */
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*1)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	movdqa		16*3(KEYS0), XKEY0_3	/* load round 3 key */
+
+	I = 0
+		/* 2. ENC */
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*2)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	movdqa		16*4(KEYS1), XKEY1_4	/* load round 4 key */
+
+		/* 3. ENC */
+	aesenc		XKEY0_3, XDATA0
+	I = 1
+	.rept 7
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*3)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/*
+	 * FIXME:
+	 * why can't we reorder encrypt DATA0..DATA7 and load 5th round?
+	 */
+	aesenc		(16*4)(KEYS0), XDATA0		/* 4. ENC */
+	movdqa		16*5(KEYS2), XKEY2_5		/* load round 5 key */
+	aesenc		XKEY1_4, XDATA1			/* 4. ENC */
+
+	I = 2
+	.rept 6
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*4)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	aesenc		(16*5)(KEYS0), XDATA0		/* 5. ENC */
+	aesenc		(16*5)(KEYS1), XDATA1		/* 5. ENC */
+	movdqa		16*6(KEYS3), XKEY3_6		/* load round 6 key */
+	aesenc		XKEY2_5, XDATA2			/* 5. ENC */
+	I = 3
+	.rept 5
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*5)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	aesenc		(16*6)(KEYS0), XDATA0		/* 6. ENC */
+	aesenc		(16*6)(KEYS1), XDATA1		/* 6. ENC */
+	aesenc		(16*6)(KEYS2), XDATA2		/* 6. ENC */
+	movdqa		16*7(KEYS4), XKEY4_7		/* load round 7 key */
+	aesenc		XKEY3_6, XDATA3			/* 6. ENC */
+
+	I = 4
+	.rept 4
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*6)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	I = 0
+	.rept 4
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*7)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	movdqa		16*8(KEYS5), XKEY5_8		/* load round 8 key */
+	aesenc		XKEY4_7, XDATA4			/* 7. ENC */
+	I = 5
+	.rept 3
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*7)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	I = 0
+	.rept 5
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*8)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	movdqa		16*9(KEYS6), XKEY6_9		/* load round 9 key */
+	aesenc		XKEY5_8, XDATA5			/* 8. ENC */
+	aesenc		16*8(KEYS6), XDATA6		/* 8. ENC */
+	aesenc		16*8(KEYS7), XDATA7		/* 8. ENC */
+
+	I = 0
+	.rept 6
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	(16*9)(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	mov		(OUT_OFFSET + 8*0)(ARG), TMP
+	aesenc		XKEY6_9, XDATA6			/* 9. ENC */
+	aesenc		16*9(KEYS7), XDATA7		/* 9. ENC */
+
+		/* 10. ENC (last for 128bit keys) */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		.if (\key_len == AES_KEYSIZE_128)
+			aesenclast	(16*10)(reg_KEY), reg_XDAT
+		.else
+			aesenc	(16*10)(reg_KEY), reg_XDAT
+		.endif
+		I = (I + 1)
+	.endr
+
+	.if (\key_len != AES_KEYSIZE_128)
+		/* 11. ENC */
+		I = 0
+		.rept 8
+			define_reg	XDATA_REG_SUFX
+			define_reg	KEY_REG_SUFX
+			aesenc	(16*11)(reg_KEY), reg_XDAT
+			I = (I + 1)
+		.endr
+
+		/* 12. ENC (last for 192bit key) */
+		I = 0
+		.rept 8
+			define_reg	XDATA_REG_SUFX
+			define_reg	KEY_REG_SUFX
+			.if (\key_len == AES_KEYSIZE_192)
+				aesenclast	(16*12)(reg_KEY), reg_XDAT
+			.else
+				aesenc		(16*12)(reg_KEY), reg_XDAT
+			.endif
+			I = (I + 1)
+		.endr
+
+		/* for 256bit, two more rounds */
+		.if \key_len == AES_KEYSIZE_256
+
+			/* 13. ENC */
+			I = 0
+			.rept 8
+				define_reg	XDATA_REG_SUFX
+				define_reg	KEY_REG_SUFX
+				aesenc	(16*13)(reg_KEY), reg_XDAT
+				I = (I + 1)
+			.endr
+
+			/* 14. ENC last encode for 256bit key */
+			I = 0
+			.rept 8
+				define_reg	XDATA_REG_SUFX
+				define_reg	KEY_REG_SUFX
+				aesenclast	(16*14)(reg_KEY), reg_XDAT
+				I = (I + 1)
+			.endr
+		.endif
+
+	.endif
+
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		MOVDQ	reg_XDAT, (TMP)	/* write back ciphertext */
+		I = (I + 1)
+		.if (I < 8)
+			mov	(OUT_OFFSET + 8*I)(ARG), TMP
+		.endif
+	.endr
+
+	cmp		IDX, LEN_AREA_OFFSET(LEN_AREA_REG)
+	je		.Ldone\key_len
+
+.Lmain_loop\key_len:
+	mov		(IN_OFFSET + 8*1)(ARG), TMP
+	pxor2		IN0, IDX, XDATA0	/* next block of plain text */
+	pxor2		TMP, IDX, XDATA1	/* next block of plain text */
+
+	mov		(IN_OFFSET + 8*3)(ARG), TMP
+	pxor2		IN2, IDX, XDATA2	/* next block of plain text */
+	pxor2		TMP, IDX, XDATA3	/* next block of plain text */
+
+	mov		(IN_OFFSET + 8*5)(ARG), TMP
+	pxor2		IN4, IDX, XDATA4	/* next block of plain text */
+	pxor2		TMP, IDX, XDATA5	/* next block of plain text */
+
+	mov		(IN_OFFSET + 8*7)(ARG), TMP
+	pxor2		IN6, IDX, XDATA6	/* next block of plain text */
+	pxor2		TMP, IDX, XDATA7	/* next block of plain text */
+
+	/* 0. ARK */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		pxor	16*0(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 1. ENC */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*1(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 2. ENC */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*2(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 3. ENC */
+	aesenc		XKEY0_3, XDATA0		/* 3. ENC */
+	I = 1
+	.rept 7
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*3(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 4. ENC */
+	aesenc		16*4(KEYS0), XDATA0	/* 4. ENC */
+	aesenc		XKEY1_4, XDATA1		/* 4. ENC */
+	I = 2
+	.rept 6
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*4(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 5. ENC */
+	aesenc		16*5(KEYS0), XDATA0	/* 5. ENC */
+	aesenc		16*5(KEYS1), XDATA1	/* 5. ENC */
+	aesenc		XKEY2_5, XDATA2		/* 5. ENC */
+
+	I = 3
+	.rept 5
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*5(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 6. ENC */
+	I = 0
+	.rept 3
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*6(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	aesenc		XKEY3_6, XDATA3		/* 6. ENC */
+	I = 4
+	.rept 4
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*6(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 7. ENC */
+	I = 0
+	.rept 4
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*7(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	aesenc		XKEY4_7, XDATA4		/* 7. ENC */
+	I = 5
+	.rept 3
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*7(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+
+	/* 8. ENC */
+	I = 0
+	.rept 5
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*8(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	aesenc		XKEY5_8, XDATA5		/* 8. ENC */
+	aesenc		16*8(KEYS6), XDATA6	/* 8. ENC */
+	aesenc		16*8(KEYS7), XDATA7	/* 8. ENC */
+
+	/* 9. ENC */
+	I = 0
+	.rept 6
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		aesenc	16*9(reg_KEY), reg_XDAT
+		I = (I + 1)
+	.endr
+	mov		(OUT_OFFSET + 8*0)(ARG), TMP
+	aesenc		XKEY6_9, XDATA6		/* 9. ENC */
+	aesenc		16*9(KEYS7), XDATA7	/* 9. ENC */
+
+	/* 10. ENC (last for 128 bit key) */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		define_reg	KEY_REG_SUFX
+		.if (\key_len == AES_KEYSIZE_128)
+			aesenclast	16*10(reg_KEY), reg_XDAT
+		.else
+			aesenc	16*10(reg_KEY), reg_XDAT
+		.endif
+		I = (I + 1)
+	.endr
+
+	.if (\key_len != AES_KEYSIZE_128)
+		/* 11. ENC */
+		I = 0
+		.rept 8
+			define_reg	XDATA_REG_SUFX
+			define_reg	KEY_REG_SUFX
+			aesenc	16*11(reg_KEY), reg_XDAT
+			I = (I + 1)
+		.endr
+
+		/* 12. last ENC for 192bit key */
+		I = 0
+		.rept 8
+			define_reg	XDATA_REG_SUFX
+			define_reg	KEY_REG_SUFX
+			.if (\key_len == AES_KEYSIZE_192)
+				aesenclast	16*12(reg_KEY), reg_XDAT
+			.else
+				aesenc		16*12(reg_KEY), reg_XDAT
+			.endif
+			I = (I + 1)
+		.endr
+
+		.if \key_len == AES_KEYSIZE_256
+			/* for 256bit, two more rounds */
+			/* 13. ENC */
+			I = 0
+			.rept 8
+				define_reg	XDATA_REG_SUFX
+				define_reg	KEY_REG_SUFX
+				aesenc	16*13(reg_KEY), reg_XDAT
+				I = (I + 1)
+			.endr
+
+			/* 14. last ENC for 256bit key */
+			I = 0
+			.rept 8
+				define_reg	XDATA_REG_SUFX
+				define_reg	KEY_REG_SUFX
+				aesenclast	16*14(reg_KEY), reg_XDAT
+				I = (I + 1)
+			.endr
+		.endif
+	.endif
+
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		/* write back cipher text */
+		MOVDQ		reg_XDAT, (TMP , IDX)
+		I = (I + 1)
+		.if (I < 8)
+			mov		(OUT_OFFSET + 8*I)(ARG), TMP
+		.endif
+	.endr
+
+	add	$16, IDX
+	cmp	IDX, LEN_AREA_OFFSET(LEN_AREA_REG)
+	jne	.Lmain_loop\key_len
+
+.Ldone\key_len:
+	/* update IV */
+	I = 0
+	.rept 8
+		define_reg	XDATA_REG_SUFX
+		movdqa	reg_XDAT, (IV_OFFSET + 16*I)(ARG)
+		I = (I + 1)
+	.endr
+
+	/* update IN and OUT */
+	movd	LEN_AREA_OFFSET(LEN_AREA_REG), %xmm0
+	pshufd	$0x44, %xmm0, %xmm0
+
+	I = 1
+	.rept 4
+		define_reg	XMM_REG_SUFX
+		movdqa	(IN_OFFSET + 16*(I-1))(ARG), reg_XMM
+		I = (I + 1)
+	.endr
+
+	paddq	%xmm0, %xmm1
+	paddq	%xmm0, %xmm2
+	paddq	%xmm0, %xmm3
+	paddq	%xmm0, %xmm4
+
+	I = 5
+	.rept 4
+		define_reg	XMM_REG_SUFX
+		movdqa	(OUT_OFFSET + 16*(I-5))(ARG), reg_XMM
+		I = (I + 1)
+	.endr
+
+	I = 1
+	.rept 4
+		define_reg	XMM_REG_SUFX
+		movdqa	reg_XMM, (IN_OFFSET + 16*(I-1))(ARG)
+		I = (I + 1)
+	.endr
+
+	paddq	%xmm0, %xmm5
+	paddq	%xmm0, %xmm6
+	paddq	%xmm0, %xmm7
+	paddq	%xmm0, %xmm8
+
+	I = 5
+	.rept 4
+		define_reg	XMM_REG_SUFX
+		movdqa	reg_XMM, (OUT_OFFSET + 16*(I-5))(ARG)
+		I = (I + 1)
+	.endr
+
+	mov	(XMM_SAVE_SIZE + 8*0)(GPR_SAVE_REG), %rbx
+	mov	(XMM_SAVE_SIZE + 8*3)(GPR_SAVE_REG), %rbp
+	mov	(XMM_SAVE_SIZE + 8*4)(GPR_SAVE_REG), %r12
+	mov	(XMM_SAVE_SIZE + 8*5)(GPR_SAVE_REG), %r13
+	mov	(XMM_SAVE_SIZE + 8*6)(GPR_SAVE_REG), %r14
+	mov	(XMM_SAVE_SIZE + 8*7)(GPR_SAVE_REG), %r15
+
+	add	$STACK_SIZE, %rsp
+
+	ret
+.endm
+
+/*
+ * AES CBC encryption routine supporting 128/192/256 bit keys
+ *
+ * void aes_cbc_enc_128_x8(struct aes_cbc_args_x8 *args, u64 len);
+ * arg 1: rdi : addr of aes_cbc_args_x8 structure
+ * arg 2: rsi : len (in units of 16-byte blocks)
+ * void aes_cbc_enc_192_x8(struct aes_cbc_args_x8 *args, u64 len);
+ * arg 1: rdi : addr of aes_cbc_args_x8 structure
+ * arg 2: rsi : len (in units of 16-byte blocks)
+ * void aes_cbc_enc_256_x8(struct aes_cbc_args_x8 *args, u64 len);
+ * arg 1: rdi : addr of aes_cbc_args_x8 structure
+ * arg 2: rsi : len (in units of 16-byte blocks)
+ */
+
+ENTRY(aes_cbc_enc_128_x8)
+
+	aes_cbc_enc_x8 AES_KEYSIZE_128
+
+ENDPROC(aes_cbc_enc_128_x8)
+
+ENTRY(aes_cbc_enc_192_x8)
+
+	aes_cbc_enc_x8 AES_KEYSIZE_192
+
+ENDPROC(aes_cbc_enc_192_x8)
+
+ENTRY(aes_cbc_enc_256_x8)
+
+	aes_cbc_enc_x8 AES_KEYSIZE_256
+
+ENDPROC(aes_cbc_enc_256_x8)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH V8 5/5] crypto: AES CBC multi-buffer glue code
  2018-01-10  0:09 [PATCH V8 0/5] crypto: AES CBC multibuffer implementation Megha Dey
                   ` (3 preceding siblings ...)
  2018-01-10  0:09 ` [PATCH V8 4/5] crypto: AES CBC by8 encryption Megha Dey
@ 2018-01-10  0:09 ` Megha Dey
  4 siblings, 0 replies; 21+ messages in thread
From: Megha Dey @ 2018-01-10  0:09 UTC (permalink / raw)
  To: linux-kernel, linux-crypto, davem, herbert; +Cc: megha.dey, Megha Dey

This patch introduces the multi-buffer job manager which is responsible
for submitting scatter-gather buffers from several AES CBC jobs
to the multi-buffer algorithm. The glue code interfaces with the
underlying algorithm that handles 8 data streams of AES CBC encryption
in parallel. AES key expansion and CBC decryption requests are performed
in a manner similar to the existing AESNI Intel glue driver.

The outline of the algorithm for AES CBC encryption requests is
sketched below:

Any driver requesting the crypto service will place an async crypto
request on the workqueue.  The multi-buffer crypto daemon will pull an
AES CBC encryption request from the work queue and put each request in
an empty data lane for multi-buffer crypto computation.  When all the
empty lanes are filled, computation will commence on the jobs in
parallel and the job with the shortest remaining buffer will be
completed and returned first.  To prevent a prolonged stall when no new
jobs arrive, we will flush the queued jobs after a maximum allowable
delay has elapsed.
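
As a rough sketch, the submit path in mb_aes_cbc_encrypt() behaves as
follows (simplified; error handling and kernel_fpu_begin()/end()
omitted):

	rctx = aes_cbc_ctx_mgr_submit(key_mgr, rctx);
	if (!rctx)
		return -EINPROGRESS;	/* lanes not all full, job queued */
	/*
	 * Otherwise all 8 lanes were full: encryption ran and the job
	 * with the shortest remaining buffer completed; process it.
	 */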

To accommodate the fragmented nature of scatter-gather, we will keep
submitting the next scatter-buffer fragment for a job for multi-buffer
computation until a job is completed and no more buffer fragments
remain. At that time we will pull a new job to fill the now empty data
slot. We check with the multibuffer scheduler to see if there are other
completed jobs to prevent extraneous delay in returning any completed
jobs.
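
Roughly, the fragment resubmission loop in cbc_encrypt_finish() looks
like this (simplified; error handling omitted):

	while (!(rctx->flag & CBC_DONE)) {
		/* retire the scatter-gather fragment that just finished */
		skcipher_walk_done(&rctx->walk,
				   rctx->walk.nbytes & (AES_BLOCK_SIZE - 1));
		if (!rctx->walk.nbytes)
			break;		/* job fully encrypted */
		/* carry the running IV into the next fragment */
		rctx->seq_iv = ((struct job_aes_cbc *)rctx->job)->iv;
		rctx = aes_cbc_ctx_mgr_submit(key_mgr, rctx);
		if (!rctx)
			break;		/* no completion yet, wait */
	}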

This multi-buffer algorithm should be used for cases where we get at
least 8 streams of crypto jobs submitted at a reasonably high rate.
For a low crypto job submission rate and a low number of data streams,
this algorithm will not be beneficial.  The reason is that at a low
rate we end up flushing the jobs before the data lanes are filled,
instead of processing them with all the data lanes full.  We then miss
the benefit of parallel computation while adding delay to the
processing of each crypto job.  Some tuning of the maximum latency
parameter may be needed to get the best performance.
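
In this patch the maximum allowable delay is the FLUSH_INTERVAL
constant (500 usec); each request is stamped on arrival and the
flusher is armed against that deadline, roughly:

	#define FLUSH_INTERVAL 500	/* usec, the tunable latency bound */

	rctx->tag.arrival = jiffies;
	rctx->tag.expire  = rctx->tag.arrival +
			    usecs_to_jiffies(FLUSH_INTERVAL);
	mcryptd_arm_flusher(cstate, usecs_to_jiffies(FLUSH_INTERVAL));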

Originally-by: Chandramouli Narayanan <mouli_7982@yahoo.com>
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/crypto/Makefile                |   1 +
 arch/x86/crypto/aes-cbc-mb/Makefile     |  22 +
 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c | 698 ++++++++++++++++++++++++++++++++
 3 files changed, 721 insertions(+)
 create mode 100644 arch/x86/crypto/aes-cbc-mb/Makefile
 create mode 100644 arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index 5f07333..3844f06 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -36,6 +36,7 @@ obj-$(CONFIG_CRYPTO_CRC32_PCLMUL) += crc32-pclmul.o
 obj-$(CONFIG_CRYPTO_SHA256_SSSE3) += sha256-ssse3.o
 obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
 obj-$(CONFIG_CRYPTO_CRCT10DIF_PCLMUL) += crct10dif-pclmul.o
+obj-$(CONFIG_CRYPTO_AES_CBC_MB) += aes-cbc-mb/
 obj-$(CONFIG_CRYPTO_POLY1305_X86_64) += poly1305-x86_64.o
 
 # These modules require assembler to support AVX.
diff --git a/arch/x86/crypto/aes-cbc-mb/Makefile b/arch/x86/crypto/aes-cbc-mb/Makefile
new file mode 100644
index 0000000..b642bd8
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/Makefile
@@ -0,0 +1,22 @@
+#
+# Arch-specific CryptoAPI modules.
+#
+
+avx_supported := $(call as-instr,vpxor %xmm0$(comma)%xmm0$(comma)%xmm0,yes,no)
+
+# we need decryption and key expansion routine symbols
+# if either AES_NI_INTEL or AES_CBC_MB is a module
+
+ifeq ($(CONFIG_CRYPTO_AES_NI_INTEL),m)
+	dec_support := ../aesni-intel_asm.o
+endif
+ifeq ($(CONFIG_CRYPTO_AES_CBC_MB),m)
+	dec_support := ../aesni-intel_asm.o
+endif
+
+ifeq ($(avx_supported),yes)
+	obj-$(CONFIG_CRYPTO_AES_CBC_MB) += aes-cbc-mb.o
+	aes-cbc-mb-y := $(dec_support) aes_cbc_mb.o aes_mb_mgr_init.o \
+			mb_mgr_inorder_x8_asm.o mb_mgr_ooo_x8_asm.o \
+			aes_cbc_enc_x8.o
+endif
diff --git a/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c
new file mode 100644
index 0000000..af09c4b
--- /dev/null
+++ b/arch/x86/crypto/aes-cbc-mb/aes_cbc_mb.c
@@ -0,0 +1,698 @@
+/*
+ *	Multi buffer AES CBC algorithm glue code
+ *
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * Contact Information:
+ * James Guilford <james.guilford@intel.com>
+ * Sean Gulley <sean.m.gulley@intel.com>
+ * Tim Chen <tim.c.chen@linux.intel.com>
+ * Megha Dey <megha.dey@linux.intel.com>
+ */
+
+#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
+#define CRYPTO_AES_CTX_SIZE (sizeof(struct crypto_aes_ctx) + AESNI_ALIGN_EXTRA)
+#define AESNI_ALIGN_EXTRA ((AESNI_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
+#include <linux/hardirq.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+#include <linux/err.h>
+#include <crypto/algapi.h>
+#include <crypto/aes.h>
+#include <crypto/internal/hash.h>
+#include <crypto/mcryptd.h>
+#include <crypto/crypto_wq.h>
+#include <crypto/ctr.h>
+#include <crypto/b128ops.h>
+#include <crypto/lrw.h>
+#include <crypto/xts.h>
+#include <asm/cpu_device_id.h>
+#include <asm/fpu/api.h>
+#include <asm/crypto/aes.h>
+#include <crypto/ablk_helper.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/aead.h>
+#include <linux/workqueue.h>
+#include <linux/spinlock.h>
+#include <crypto/internal/skcipher.h>
+#ifdef CONFIG_X86_64
+#include <asm/crypto/glue_helper.h>
+#endif
+
+#include "aes_cbc_mb_ctx.h"
+
+#define AESNI_ALIGN	(16)
+#define AES_BLOCK_MASK	(~(AES_BLOCK_SIZE-1))
+#define FLUSH_INTERVAL 500 /* in usec */
+
+static struct mcryptd_alg_state cbc_mb_alg_state;
+
+struct aes_cbc_mb_ctx {
+	struct mcryptd_skcipher *mcryptd_tfm;
+};
+
+static inline struct aes_cbc_mb_mgr_inorder_x8
+	*get_key_mgr(void *mgr, u32 key_len)
+{
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+
+	key_mgr = (struct aes_cbc_mb_mgr_inorder_x8 *) mgr;
+	/* valid keysize is guaranteed to be one of 128/192/256 */
+	switch (key_len) {
+	case AES_KEYSIZE_256:
+		return key_mgr+2;
+	case AES_KEYSIZE_192:
+		return key_mgr+1;
+	case AES_KEYSIZE_128:
+	default:
+		return key_mgr;
+	}
+}
+
+/* support code from arch/x86/crypto/aesni-intel_glue.c */
+static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
+{
+	unsigned long addr = (unsigned long)raw_ctx;
+	unsigned long align = AESNI_ALIGN;
+	struct crypto_aes_ctx *ret_ctx;
+
+	if (align <= crypto_tfm_ctx_alignment())
+		align = 1;
+	ret_ctx = (struct crypto_aes_ctx *)ALIGN(addr, align);
+	return ret_ctx;
+}
+
+static struct job_aes_cbc *aes_cbc_job_mgr_submit(
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr, u32 key_len)
+{
+	/* valid keysize is guaranteed to be one of 128/192/256 */
+	switch (key_len) {
+	case AES_KEYSIZE_256:
+		return aes_cbc_submit_job_inorder_256x8(key_mgr);
+	case AES_KEYSIZE_192:
+		return aes_cbc_submit_job_inorder_192x8(key_mgr);
+	case AES_KEYSIZE_128:
+	default:
+		return aes_cbc_submit_job_inorder_128x8(key_mgr);
+	}
+}
+
+static inline struct skcipher_request *cast_mcryptd_ctx_to_req(
+	struct mcryptd_skcipher_request_ctx *ctx)
+{
+	return container_of((void *) ctx, struct skcipher_request, __ctx);
+}
+
+/*
+ * Interface functions to the synchronous algorithm with access
+ * to the underlying multibuffer AES CBC implementation
+ */
+
+/* Map the error in request context appropriately */
+
+static struct mcryptd_skcipher_request_ctx *process_job_sts(
+		struct job_aes_cbc *job)
+{
+	struct mcryptd_skcipher_request_ctx *ret_rctx;
+
+	ret_rctx = (struct mcryptd_skcipher_request_ctx *)job->user_data;
+
+	switch (job->status) {
+	default:
+	case STS_COMPLETED:
+		ret_rctx->error = CBC_CTX_ERROR_NONE;
+		break;
+	case STS_BEING_PROCESSED:
+		ret_rctx->error = -EINPROGRESS;
+		break;
+	case STS_INTERNAL_ERROR:
+	case STS_ERROR:
+	case STS_UNKNOWN:
+		/* mark it done with error */
+		ret_rctx->flag = CBC_DONE;
+		ret_rctx->error = -EIO;
+		break;
+	}
+	return ret_rctx;
+}
+
+static struct mcryptd_skcipher_request_ctx
+	*aes_cbc_ctx_mgr_flush(struct aes_cbc_mb_mgr_inorder_x8 *key_mgr)
+{
+	struct job_aes_cbc *job;
+
+	job = aes_cbc_flush_job_inorder_x8(key_mgr);
+	if (job)
+		return process_job_sts(job);
+	return NULL;
+}
+
+static struct mcryptd_skcipher_request_ctx *aes_cbc_ctx_mgr_submit(
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr,
+	struct mcryptd_skcipher_request_ctx *rctx
+	)
+{
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct job_aes_cbc *job;
+	unsigned long src_paddr;
+	unsigned long dst_paddr;
+
+	mb_key_ctx = aes_ctx(crypto_tfm_ctx(rctx->desc.base.tfm));
+
+	/* get job, fill the details and submit */
+	job = aes_cbc_get_next_job_inorder_x8(key_mgr);
+
+	src_paddr = (page_to_phys(rctx->walk.src.phys.page) +
+						rctx->walk.src.phys.offset);
+	dst_paddr = (page_to_phys(rctx->walk.dst.phys.page) +
+						rctx->walk.dst.phys.offset);
+	job->plaintext = phys_to_virt(src_paddr);
+	job->ciphertext = phys_to_virt(dst_paddr);
+	if (rctx->flag & CBC_START) {
+		/* fresh sequence, copy iv from walk buffer initially */
+		memcpy(&job->iv, rctx->walk.iv, AES_BLOCK_SIZE);
+		rctx->flag &= ~CBC_START;
+	} else {
+		/* For a multi-part sequence, set up the updated IV */
+		job->iv = rctx->seq_iv;
+	}
+
+	job->keys = (u128 *)mb_key_ctx->key_enc;
+	/* set up updated length from the walk buffers */
+	job->len = rctx->walk.nbytes & AES_BLOCK_MASK;
+	/* stow away the req_ctx so we can later check */
+	job->user_data = (void *)rctx;
+	job->key_len = mb_key_ctx->key_length;
+	rctx->job = job;
+	rctx->error = CBC_CTX_ERROR_NONE;
+	job = aes_cbc_job_mgr_submit(key_mgr, mb_key_ctx->key_length);
+	if (job) {
+		/* we already have the request context stashed in job */
+		return process_job_sts(job);
+	}
+	return NULL;
+}
+
+static int cbc_encrypt_finish(struct mcryptd_skcipher_request_ctx **ret_rctx,
+			      struct mcryptd_alg_cstate *cstate,
+			      bool flush)
+{
+	struct mcryptd_skcipher_request_ctx *rctx = *ret_rctx;
+	struct job_aes_cbc *job;
+	int err = 0;
+	unsigned int nbytes;
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+	struct skcipher_request *req;
+
+
+	mb_key_ctx = aes_ctx(crypto_tfm_ctx(rctx->desc.base.tfm));
+	key_mgr = get_key_mgr(cstate->mgr, mb_key_ctx->key_length);
+
+	/*
+	 * Some low-level mb job is done. Keep going till done.
+	 * This loop may process multiple multi part requests
+	 */
+	while (!(rctx->flag & CBC_DONE)) {
+		/* update bytes and check for more work */
+		nbytes = rctx->walk.nbytes & (AES_BLOCK_SIZE - 1);
+		req = cast_mcryptd_ctx_to_req(rctx);
+		err = skcipher_walk_done(&rctx->walk, nbytes);
+		if (err) {
+			/* done with error */
+			rctx->flag = CBC_DONE;
+			rctx->error = err;
+			goto out;
+		}
+		nbytes = rctx->walk.nbytes;
+		if (!nbytes) {
+			/* done with successful encryption */
+			rctx->flag = CBC_DONE;
+			goto out;
+		}
+		/*
+		 * This is a multi-part job and there is more work to do.
+		 * From the completed job, copy the running sequence of IV
+		 * and start the next one in sequence.
+		 */
+		job = (struct job_aes_cbc *)rctx->job;
+		rctx->seq_iv = job->iv;	/* copy the running sequence of iv */
+		kernel_fpu_begin();
+		rctx = aes_cbc_ctx_mgr_submit(key_mgr, rctx);
+		if (!rctx) {
+			/* multi part job submitted, no completed job. */
+			if (flush)
+				rctx = aes_cbc_ctx_mgr_flush(key_mgr);
+		}
+		kernel_fpu_end();
+		if (!rctx) {
+			/* no completions yet to process further */
+			break;
+		}
+		/* some job finished when we submitted multi part job. */
+		if (rctx->error) {
+			/*
+			 * some request completed with error
+			 * bail out of chain processing
+			 */
+			err = rctx->error;
+			break;
+		}
+		/* we have a valid request context to process further */
+	}
+	/* encrypted text is expected to be in out buffer already */
+out:
+	/* We came out multi-part processing for some request */
+	*ret_rctx = rctx;
+	return err;
+}
+
+/* A request that completed is dequeued and the caller is notified */
+
+static void completion_callback(struct mcryptd_skcipher_request_ctx *rctx,
+			    struct mcryptd_alg_cstate *cstate,
+			    int err)
+{
+	struct skcipher_request *req = cast_mcryptd_ctx_to_req(rctx);
+
+	/* remove from work list and invoke completion callback */
+	spin_lock(&cstate->work_lock);
+	list_del(&rctx->waiter);
+	spin_unlock(&cstate->work_lock);
+
+	local_bh_disable();
+	rctx->complete(&req->base, err);
+	local_bh_enable();
+}
+
+/* complete an skcipher request and process any further completions */
+
+static void cbc_complete_job(struct mcryptd_skcipher_request_ctx *rctx,
+			    struct mcryptd_alg_cstate *cstate,
+			    int err)
+{
+	struct job_aes_cbc *job;
+	int ret;
+	struct mcryptd_skcipher_request_ctx *sctx;
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+	struct skcipher_request *req;
+
+	req = cast_mcryptd_ctx_to_req(rctx);
+	skcipher_walk_complete(&rctx->walk, err);
+	completion_callback(rctx, cstate, err);
+
+	mb_key_ctx = aes_ctx(crypto_tfm_ctx(rctx->desc.base.tfm));
+	key_mgr = get_key_mgr(cstate->mgr, mb_key_ctx->key_length);
+
+	/* check for more completed jobs and process */
+	while ((job = aes_cbc_get_completed_job_inorder_x8(key_mgr)) != NULL) {
+		sctx = process_job_sts(job);
+		if (WARN_ON(sctx == NULL))
+			return;
+		ret = sctx->error;
+		if (!ret) {
+			/* further process it */
+			ret = cbc_encrypt_finish(&sctx, cstate, false);
+		}
+		if (sctx) {
+			req = cast_mcryptd_ctx_to_req(sctx);
+			skcipher_walk_complete(&sctx->walk, err);
+			completion_callback(sctx, cstate, ret);
+		}
+	}
+}
+
+/* Add request to the waiter list. It stays in queue until completion */
+
+static void cbc_mb_add_list(struct mcryptd_skcipher_request_ctx *rctx,
+				struct mcryptd_alg_cstate *cstate)
+{
+	unsigned long next_flush;
+	unsigned long delay = usecs_to_jiffies(FLUSH_INTERVAL);
+
+	/* initialize tag */
+	rctx->tag.arrival = jiffies;    /* tag the arrival time */
+	rctx->tag.seq_num = cstate->next_seq_num++;
+	next_flush = rctx->tag.arrival + delay;
+	rctx->tag.expire = next_flush;
+
+	spin_lock(&cstate->work_lock);
+	list_add_tail(&rctx->waiter, &cstate->work_list);
+	spin_unlock(&cstate->work_lock);
+
+	mcryptd_arm_flusher(cstate, delay);
+}
+
+static int mb_aes_cbc_encrypt(struct skcipher_request *desc)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(desc);
+	struct mcryptd_skcipher_request_ctx *rctx =
+		container_of(desc, struct mcryptd_skcipher_request_ctx, desc);
+	struct mcryptd_skcipher_request_ctx *ret_rctx;
+	struct mcryptd_alg_cstate *cstate =
+				this_cpu_ptr(cbc_mb_alg_state.alg_cstate);
+	int err;
+	int ret = 0;
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+	struct skcipher_request *req;
+
+	mb_key_ctx = aes_ctx(crypto_skcipher_ctx(tfm));
+	key_mgr = get_key_mgr(cstate->mgr, mb_key_ctx->key_length);
+
+	/* sanity check */
+	if (rctx->tag.cpu != smp_processor_id()) {
+		/* job not on list yet */
+		pr_err("mcryptd error: cpu clash\n");
+		goto done;
+	}
+
+	/* a new job, initialize the cbc context and add to worklist */
+	cbc_ctx_init(rctx, desc->cryptlen, CBC_ENCRYPT);
+	cbc_mb_add_list(rctx, cstate);
+
+	req = cast_mcryptd_ctx_to_req(rctx);
+
+	err = skcipher_walk_async(&rctx->walk, req);
+	if (err || !rctx->walk.nbytes) {
+		/* terminate this request */
+		skcipher_walk_complete(&rctx->walk, err);
+		completion_callback(rctx, cstate, (!err) ? -EINVAL : err);
+		return 0;
+	}
+	/* submit job */
+	kernel_fpu_begin();
+	ret_rctx = aes_cbc_ctx_mgr_submit(key_mgr, rctx);
+	kernel_fpu_end();
+
+	if (!ret_rctx)
+		return -EINPROGRESS;
+
+	/* some job completed */
+	if (ret_rctx->error) {
+		/* some job finished with error */
+		cbc_complete_job(ret_rctx, cstate, ret_rctx->error);
+		return 0;
+	}
+	/* some job finished without error, process it */
+	ret = cbc_encrypt_finish(&ret_rctx, cstate, false);
+
+	if (!ret_rctx)
+		return -EINPROGRESS;
+
+done:
+	/* complete the job */
+	cbc_complete_job(ret_rctx, cstate, ret);
+	return 0;
+}
+
+static int mb_aes_cbc_decrypt(struct skcipher_request *desc)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(desc);
+	struct crypto_aes_ctx *aesni_ctx;
+	struct mcryptd_skcipher_request_ctx *rctx =
+		container_of(desc, struct mcryptd_skcipher_request_ctx, desc);
+	struct skcipher_request *req;
+	bool is_mcryptd_req;
+	unsigned long src_paddr;
+	unsigned long dst_paddr;
+	unsigned int nbytes;
+	int err;
+
+	/* note here whether it is mcryptd req */
+	is_mcryptd_req = desc->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
+	req = cast_mcryptd_ctx_to_req(rctx);
+	aesni_ctx = aes_ctx(crypto_skcipher_ctx(tfm));
+
+	err = skcipher_walk_async(&rctx->walk, req);
+	if (err || !rctx->walk.nbytes)
+		goto done1;
+
+	kernel_fpu_begin();
+	while ((nbytes = rctx->walk.nbytes)) {
+		src_paddr = (page_to_phys(rctx->walk.src.phys.page) +
+						rctx->walk.src.phys.offset);
+		dst_paddr = (page_to_phys(rctx->walk.dst.phys.page) +
+						rctx->walk.dst.phys.offset);
+		aesni_cbc_dec(aesni_ctx, phys_to_virt(dst_paddr),
+					phys_to_virt(src_paddr),
+					rctx->walk.nbytes & AES_BLOCK_MASK,
+					rctx->walk.iv);
+		nbytes &= AES_BLOCK_SIZE - 1;
+		err = skcipher_walk_done(&rctx->walk, nbytes);
+		if (err)
+			goto done2;
+	}
+done2:
+	kernel_fpu_end();
+done1:
+	skcipher_walk_complete(&rctx->walk, err);
+	if (!is_mcryptd_req) {
+		/* synchronous request */
+		return err;
+	}
+	local_bh_disable();
+	rctx->complete(&req->base, err);
+	local_bh_enable();
+
+	return 0;
+}
+
+/* use the same key expansion code as the aesni driver */
+
+static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
+			      const u8 *in_key, unsigned int key_len)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
+	u32 *flags = &tfm->crt_flags;
+	int err;
+
+	if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
+	    key_len != AES_KEYSIZE_256) {
+		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+		return -EINVAL;
+	}
+
+	if (!irq_fpu_usable())
+		err = crypto_aes_expand_key(ctx, in_key, key_len);
+	else {
+		kernel_fpu_begin();
+		err = aesni_set_key(ctx, in_key, key_len);
+		kernel_fpu_end();
+	}
+
+	return err;
+}
+
+static int aes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
+		       unsigned int key_len)
+{
+	return aes_set_key_common(crypto_skcipher_tfm(tfm),
+				crypto_skcipher_ctx(tfm), in_key, key_len);
+}
+
+/*
+ * The CRYPTO_ALG_ASYNC flag is set to indicate that the algorithm
+ * uses an asynchronous scatter-gather walk.
+ */
+static struct skcipher_alg aes_cbc_mb_alg = {
+	.base = {
+		.cra_name		= "cbc(aes)",
+		.cra_driver_name	= "cbc-aes-aesni-mb",
+		.cra_priority		= 500,
+		.cra_flags		= CRYPTO_ALG_INTERNAL |
+					  CRYPTO_ALG_ASYNC,
+		.cra_blocksize		= AES_BLOCK_SIZE,
+		.cra_ctxsize		= CRYPTO_AES_CTX_SIZE,
+		.cra_module		= THIS_MODULE,
+	},
+	.min_keysize	= AES_MIN_KEY_SIZE,
+	.max_keysize	= AES_MAX_KEY_SIZE,
+	.ivsize		= AES_BLOCK_SIZE,
+	.setkey		= aes_set_key,
+	.encrypt	= mb_aes_cbc_encrypt,
+	.decrypt	= mb_aes_cbc_decrypt
+};
+
+/*
+ * When no new jobs arrive, the multibuffer queue may stall.  To prevent
+ * a prolonged stall, the flusher is invoked under either of the
+ * following conditions:
+ * a) There are partially completed multi-part crypto jobs that have
+ *    waited longer than the maximum allowable delay.
+ * b) We have exhausted the crypto jobs in the queue and the cpu has no
+ *    other tasks, so it would otherwise go idle.
+ */
+unsigned long cbc_mb_flusher(struct mcryptd_alg_cstate *cstate)
+{
+	struct mcryptd_skcipher_request_ctx *rctx;
+	struct crypto_aes_ctx *mb_key_ctx;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+	unsigned long cur_time;
+	unsigned long next_flush = 0;
+
+	cur_time = jiffies;
+
+	while (!list_empty(&cstate->work_list)) {
+		rctx = list_entry(cstate->work_list.next,
+				struct mcryptd_skcipher_request_ctx, waiter);
+		if (time_before(cur_time, rctx->tag.expire))
+			break;
+
+		mb_key_ctx = aes_ctx(crypto_tfm_ctx(rctx->desc.base.tfm));
+		key_mgr = get_key_mgr(cstate->mgr, mb_key_ctx->key_length);
+
+		kernel_fpu_begin();
+		rctx = aes_cbc_ctx_mgr_flush(key_mgr);
+		kernel_fpu_end();
+		if (!rctx) {
+			pr_err("%s: nothing got flushed\n", __func__);
+			break;
+		}
+		cbc_encrypt_finish(&rctx, cstate, true);
+		if (rctx)
+			cbc_complete_job(rctx, cstate, rctx->error);
+	}
+
+	if (!list_empty(&cstate->work_list)) {
+		rctx = list_entry(cstate->work_list.next,
+				struct mcryptd_skcipher_request_ctx, waiter);
+		/* re-arm the flusher for the next request's flush time */
+		next_flush = rctx->tag.expire;
+		mcryptd_arm_flusher(cstate, get_delay(next_flush));
+	}
+	return next_flush;
+}
+
+struct mcryptd_skcipher_alg_mb *aes_cbc_mb_mcryptd_skciphers;
+
+static int __init aes_cbc_mb_mod_init(void)
+{
+	struct mcryptd_skcipher_alg_mb *mcryptd;
+
+	int cpu, i;
+	int err;
+	struct mcryptd_alg_cstate *cpu_state;
+	struct aes_cbc_mb_mgr_inorder_x8 *key_mgr;
+
+	/* check for dependent cpu features */
+	if (!boot_cpu_has(X86_FEATURE_AES)) {
+		pr_err("%s: no aes support\n", __func__);
+		err = -ENODEV;
+		goto err1;
+	}
+
+	if (!boot_cpu_has(X86_FEATURE_XMM)) {
+		pr_err("%s: no xmm support\n", __func__);
+		err = -ENODEV;
+		goto err1;
+	}
+
+	/* initialize multibuffer structures */
+
+	cbc_mb_alg_state.alg_cstate = alloc_percpu(struct mcryptd_alg_cstate);
+	if (!cbc_mb_alg_state.alg_cstate) {
+		pr_err("%s: insufficient memory\n", __func__);
+		err = -ENOMEM;
+		goto err1;
+	}
+
+	for_each_possible_cpu(cpu) {
+		cpu_state = per_cpu_ptr(cbc_mb_alg_state.alg_cstate, cpu);
+		cpu_state->next_flush = 0;
+		cpu_state->next_seq_num = 0;
+		cpu_state->flusher_engaged = false;
+		INIT_DELAYED_WORK(&cpu_state->flush, mcryptd_flusher);
+		cpu_state->cpu = cpu;
+		cpu_state->alg_state = &cbc_mb_alg_state;
+		cpu_state->mgr =
+			kcalloc(3, sizeof(struct aes_cbc_mb_mgr_inorder_x8),
+				GFP_KERNEL);
+		if (!cpu_state->mgr) {
+			err = -ENOMEM;
+			goto err2;
+		}
+		key_mgr = (struct aes_cbc_mb_mgr_inorder_x8 *) cpu_state->mgr;
+		/* initialize manager state for 128, 192 and 256 bit keys */
+		for (i = 0; i < 3; ++i) {
+			aes_cbc_init_mb_mgr_inorder_x8(key_mgr);
+			++key_mgr;
+		}
+		INIT_LIST_HEAD(&cpu_state->work_list);
+		spin_lock_init(&cpu_state->work_lock);
+	}
+	cbc_mb_alg_state.flusher = &cbc_mb_flusher;
+
+	/* register the internal multi-buffer algorithm */
+	err = crypto_register_skcipher(&aes_cbc_mb_alg);
+	if (err)
+		goto err3;
+
+	mcryptd = mcryptd_skcipher_create_mb(aes_cbc_mb_alg.base.cra_name,
+					aes_cbc_mb_alg.base.cra_driver_name);
+	if (IS_ERR(mcryptd)) {
+		err = PTR_ERR(mcryptd);
+		goto unregister_skcipher;
+	}
+
+	aes_cbc_mb_mcryptd_skciphers = mcryptd;
+
+	return 0; /* module init success */
+
+	/* error in mcryptd wrapper creation: undo skcipher registration */
+unregister_skcipher:
+	crypto_unregister_skcipher(&aes_cbc_mb_alg);
+err3:
+	for_each_possible_cpu(cpu) {
+		cpu_state = per_cpu_ptr(cbc_mb_alg_state.alg_cstate, cpu);
+		kfree(cpu_state->mgr);
+	}
+err2:
+	free_percpu(cbc_mb_alg_state.alg_cstate);
+err1:
+	return err;
+}
+
+static void __exit aes_cbc_mb_mod_fini(void)
+{
+	int cpu;
+	struct mcryptd_alg_cstate *cpu_state;
+
+	mcryptd_skcipher_free_mb(aes_cbc_mb_mcryptd_skciphers);
+	crypto_unregister_skcipher(&aes_cbc_mb_alg);
+
+	for_each_possible_cpu(cpu) {
+		cpu_state = per_cpu_ptr(cbc_mb_alg_state.alg_cstate, cpu);
+		kfree(cpu_state->mgr);
+	}
+	free_percpu(cbc_mb_alg_state.alg_cstate);
+}
+
+module_init(aes_cbc_mb_mod_init);
+module_exit(aes_cbc_mb_mod_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("AES CBC Algorithm, multi buffer accelerated");
+MODULE_AUTHOR("Megha Dey <megha.dey@linux.intel.com>");
+
+MODULE_ALIAS("aes-cbc-mb");
+MODULE_ALIAS_CRYPTO("cbc-aes-aesni-mb");
-- 
1.9.1


* Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-01-10  0:09 ` [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support Megha Dey
@ 2018-01-18 11:39   ` Herbert Xu
  2018-01-19  0:44     ` Megha Dey
  0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2018-01-18 11:39 UTC (permalink / raw)
  To: Megha Dey; +Cc: linux-kernel, linux-crypto, davem, megha.dey

On Tue, Jan 09, 2018 at 04:09:04PM -0800, Megha Dey wrote:
>
> +static void mcryptd_skcipher_encrypt(struct crypto_async_request *base,
> +								int err)
> +{
> +	struct skcipher_request *req = skcipher_request_cast(base);
> +	struct mcryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
> +	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> +	struct mcryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
> +	struct crypto_skcipher *child = ctx->child;
> +	struct skcipher_request subreq;
> +
> +	if (unlikely(err == -EINPROGRESS))
> +		goto out;
> +
> +	/* set up the skcipher request to work on */
> +	skcipher_request_set_tfm(&subreq, child);
> +	skcipher_request_set_callback(&subreq,
> +					CRYPTO_TFM_REQ_MAY_SLEEP, 0, 0);
> +	skcipher_request_set_crypt(&subreq, req->src, req->dst,
> +					req->cryptlen, req->iv);
> +
> +	/*
> +	 * pass addr of descriptor stored in the request context
> +	 * so that the callee can get to the request context
> +	 */
> +	rctx->desc = subreq;
> +	err = crypto_skcipher_encrypt(&rctx->desc);
> +
> +	if (err) {
> +		req->base.complete = rctx->complete;
> +		goto out;
> +	}
> +	return;
> +
> +out:
> +	mcryptd_skcipher_complete(req, err);
> +}

OK this looks better but it's still abusing the crypto API interface.
In particular, you're sharing data with the underlying algorithm
behind the crypto API's back.  Also, the underlying algorithm does
callback completion behind the API's back through the shared data
context.

It seems to me that the current mcryptd scheme is flawed.  You
want to batch multiple requests and yet this isn't actually being
done by mcryptd at all.  The actual batching happens at the very
lowest level, i.e., in the crypto algorithm below mcryptd.  For
example, with your patch, the batching appears to happen in
aes_cbc_job_mgr_submit.

So the mcryptd template is in fact completely superfluous.  You
can remove it and just have all the main encrypt/decrypt functions
invoke the underlying encrypt/decrypt function directly and achieve
the same result.

Am I missing something?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-01-18 11:39   ` Herbert Xu
@ 2018-01-19  0:44     ` Megha Dey
  2018-03-16 14:53       ` Herbert Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Megha Dey @ 2018-01-19  0:44 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-kernel, linux-crypto, davem

On Thu, 2018-01-18 at 22:39 +1100, Herbert Xu wrote:
> On Tue, Jan 09, 2018 at 04:09:04PM -0800, Megha Dey wrote:
> >
> > +static void mcryptd_skcipher_encrypt(struct crypto_async_request *base,
> > +								int err)
> > +{
> > +	struct skcipher_request *req = skcipher_request_cast(base);
> > +	struct mcryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
> > +	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> > +	struct mcryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
> > +	struct crypto_skcipher *child = ctx->child;
> > +	struct skcipher_request subreq;
> > +
> > +	if (unlikely(err == -EINPROGRESS))
> > +		goto out;
> > +
> > +	/* set up the skcipher request to work on */
> > +	skcipher_request_set_tfm(&subreq, child);
> > +	skcipher_request_set_callback(&subreq,
> > +					CRYPTO_TFM_REQ_MAY_SLEEP, 0, 0);
> > +	skcipher_request_set_crypt(&subreq, req->src, req->dst,
> > +					req->cryptlen, req->iv);
> > +
> > +	/*
> > +	 * pass addr of descriptor stored in the request context
> > +	 * so that the callee can get to the request context
> > +	 */
> > +	rctx->desc = subreq;
> > +	err = crypto_skcipher_encrypt(&rctx->desc);
> > +
> > +	if (err) {
> > +		req->base.complete = rctx->complete;
> > +		goto out;
> > +	}
> > +	return;
> > +
> > +out:
> > +	mcryptd_skcipher_complete(req, err);
> > +}
> 
> OK this looks better but it's still abusing the crypto API interface.
> In particular, you're sharing data with the underlying algorithm
> behind the crypto API's back.  Also, the underlying algorithm does
> callback completion behind the API's back through the shared data
> context.
> 
> It seems to me that the current mcryptd scheme is flawed.  You
> want to batch multiple requests and yet this isn't actually being
> done by mcryptd at all.  The actual batching happens at the very
> lowest level, i.e., in the crypto algorithm below mcryptd.  For
> example, with your patch, the batching appears to happen in
> aes_cbc_job_mgr_submit.
> 
> So the mcryptd template is in fact completely superfluous.  You
> can remove it and just have all the main encrypt/decrypt functions
> invoke the underlying encrypt/decrypt function directly and achieve
> the same result.
> 
> Am I missing something?

Hi Herbert,

After discussing with Tim, it seems that mcryptd is responsible for
queuing up the encrypt requests and dispatching them to the actual
multi-buffer raw algorithm.  It also flushes the queue if we wait too
long without new requests coming in, to force dispatch of the requests
already queued.

Its function is analogous to cryptd, but it has its own multi-lane
twists, so we haven't reused the cryptd interface.
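Roughly, the queue-and-flush behaviour described above is of the
following shape (an illustrative sketch only; struct mb_queue,
mb_submit_to_lanes(), mb_flush_lanes() and FLUSH_INTERVAL are made-up
names, not the actual mcryptd code):

/* Illustrative sketch of the dispatch-and-flush idea; all names here
 * are hypothetical.  Locking is omitted for brevity.
 */
static void mb_dispatch(struct mb_queue *q, struct crypto_async_request *req)
{
	if (req) {
		/* queue the request; the manager starts encrypting
		 * once all 8 lanes are occupied
		 */
		mb_submit_to_lanes(q, req);
		q->last_submit = jiffies;
		return;
	}

	/* no new request arrived in time: flush the partially filled
	 * lanes so already-queued jobs do not stall
	 */
	if (time_after(jiffies, q->last_submit + FLUSH_INTERVAL))
		mb_flush_lanes(q);
}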
> 
> Cheers,


* Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-01-19  0:44     ` Megha Dey
@ 2018-03-16 14:53       ` Herbert Xu
  2018-04-17 18:40         ` Dey, Megha
  0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2018-03-16 14:53 UTC (permalink / raw)
  To: Megha Dey; +Cc: linux-kernel, linux-crypto, davem

On Thu, Jan 18, 2018 at 04:44:21PM -0800, Megha Dey wrote:
>
> > So the mcryptd template is in fact completely superfluous.  You
> > can remove it and just have all the main encrypt/decrypt functions
> > invoke the underlying encrypt/decrypt function directly and achieve
> > the same result.
> > 
> > Am I missing something?
> 
> Hi Herbert,
> 
> After discussing with Tim, it seems like the mcryptd is responsible for
> queuing up the encrypt requests and dispatching them to the actual
> multi-buffer raw algorithm.  It also flushes the queue
> if we wait too long without new requests coming in to force dispatch of
> the requests in queue.
> 
> Its function is analogous to cryptd but it has its own multi-lane twists
> so we haven't reused the cryptd interface.

I have taken a deeper look and I'm even more convinced now that
mcryptd is simply not needed in your current model.

The only reason you would need mcryptd is if you need to limit
the rate of requests going into the underlying mb algorithm.

However, it doesn't do that at all.  Even though it seems to have a
batch size of 10, because it immediately reschedules itself after
the batch runs out, it's essentially just dumping all requests at
the underlying algorithm as fast as they're coming in.  The
underlying algorithm doesn't need throttling anyway because it'll
do the work synchronously when the queue is full.
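The worker being described here is roughly of this shape (a
paraphrased sketch of the behaviour, not the actual mcryptd source;
struct my_cpu_queue is a made-up type, and locking and backlog
handling are omitted):

static void queue_worker(struct work_struct *work)
{
	struct my_cpu_queue *q = container_of(work, struct my_cpu_queue, work);
	struct crypto_async_request *req;
	int i;

	for (i = 0; i < 10; i++) {		/* the batch size of 10 */
		req = crypto_dequeue_request(&q->queue);
		if (!req)
			return;
		req->complete(req, 0);		/* hands the job to the inner mb alg */
	}

	/* batch used up: immediately reschedule, so in practice
	 * requests are never throttled
	 */
	if (q->queue.qlen)
		queue_work(system_wq, &q->work);
}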

So why not just get rid of mcryptd completely and expose the
underlying algorithm as a proper async skcipher/hash?

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* RE: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-03-16 14:53       ` Herbert Xu
@ 2018-04-17 18:40         ` Dey, Megha
  2018-04-18 11:01           ` Herbert Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Dey, Megha @ 2018-04-17 18:40 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-kernel, linux-crypto, davem



>-----Original Message-----
>From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>Sent: Friday, March 16, 2018 7:54 AM
>To: Dey, Megha <megha.dey@intel.com>
>Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>davem@davemloft.net
>Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure
>support
>
>On Thu, Jan 18, 2018 at 04:44:21PM -0800, Megha Dey wrote:
>>
>> > So the mcryptd template is in fact completely superfluous.  You can
>> > remove it and just have all the main encrypt/decrypt functions
>> > invoke the underlying encrypt/decrypt function directly and achieve
>> > the same result.
>> >
>> > Am I missing something?
>>
>> Hi Herbert,
>>
>> After discussing with Tim, it seems like the mcryptd is responsible
>> for queuing up the encrypt requests and dispatching them to the actual
>> multi-buffer raw algorithm.  It also flushes the queue if we wait too
>> long without new requests coming in to force dispatch of the requests
>> in queue.
>>
>> Its function is analogous to cryptd but it has its own multi-lane
>> twists so we haven't reused the cryptd interface.
>
>I have taken a deeper look and I'm even more convinced now that mcryptd is
>simply not needed in your current model.
>
>The only reason you would need mcryptd is if you need to limit the rate of
>requests going into the underlying mb algorithm.
>
>However, it doesn't do that all.  Even though it seems to have a batch size of
>10, but because it immediately reschedules itself after the batch runs out,
>it's essentially just dumping all requests at the underlying algorithm as fast
>as they're coming in.  The underlying algorithm doesn't have need throttling
>anyway because it'll do the work when the queue is full synchronously.
>
>So why not just get rid of mcryptd completely and expose the underlying
>algorithm as a proper async skcipher/hash?

Hi Herbert,

Most parts of cryptd.c and mcryptd.c are similar, except the logic used to flush out partially completed jobs
in the case of multibuffer algorithms.

I think I will try to merge cryptd and mcryptd, adding the necessary quirks for multibuffer where needed.

Also, in cryptd.c, I see the shash interface being used for hash digests, update, finup, setkey etc., whereas we have shifted
to the ahash interface for mcryptd. Is this correct?

Thanks,
Megha
 
>
>Thanks,
>--
>Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
>http://gondor.apana.org.au/~herbert/
>PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-04-17 18:40         ` Dey, Megha
@ 2018-04-18 11:01           ` Herbert Xu
  2018-04-19  0:54             ` Dey, Megha
  0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2018-04-18 11:01 UTC (permalink / raw)
  To: Dey, Megha; +Cc: linux-kernel, linux-crypto, davem

On Tue, Apr 17, 2018 at 06:40:17PM +0000, Dey, Megha wrote:
> 
> 
> >-----Original Message-----
> >From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
> >Sent: Friday, March 16, 2018 7:54 AM
> >To: Dey, Megha <megha.dey@intel.com>
> >Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
> >davem@davemloft.net
> >Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure
> >support
> >
> >I have taken a deeper look and I'm even more convinced now that mcryptd is
> >simply not needed in your current model.
> >
> >The only reason you would need mcryptd is if you need to limit the rate of
> >requests going into the underlying mb algorithm.
> >
> >However, it doesn't do that all.  Even though it seems to have a batch size of
> >10, but because it immediately reschedules itself after the batch runs out,
> >it's essentially just dumping all requests at the underlying algorithm as fast
> >as they're coming in.  The underlying algorithm doesn't have need throttling
> >anyway because it'll do the work when the queue is full synchronously.
> >
> >So why not just get rid of mcryptd completely and expose the underlying
> >algorithm as a proper async skcipher/hash?
> 
> Hi Herbert,
> 
> Most part of the cryptd.c and mcryptd.c are similar, except the logic used to flush out partially completed jobs
> in the case of multibuffer algorithms.
> 
> I think I will try to merge the cryptd and mcryptd adding necessary quirks for multibuffer where needed.

I think you didn't quite get my point.  From what I'm seeing you
don't need either cryptd or mcryptd.  You just need to expose the
underlying mb algorithm directly.

So I'm not sure what we would gain from merging cryptd and mcryptd.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* RE: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-04-18 11:01           ` Herbert Xu
@ 2018-04-19  0:54             ` Dey, Megha
  2018-04-19  3:25               ` Herbert Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Dey, Megha @ 2018-04-19  0:54 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-kernel, linux-crypto, davem



>-----Original Message-----
>From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>Sent: Wednesday, April 18, 2018 4:01 AM
>To: Dey, Megha <megha.dey@intel.com>
>Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>davem@davemloft.net
>Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure
>support
>
>On Tue, Apr 17, 2018 at 06:40:17PM +0000, Dey, Megha wrote:
>>
>>
>> >-----Original Message-----
>> >From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>> >Sent: Friday, March 16, 2018 7:54 AM
>> >To: Dey, Megha <megha.dey@intel.com>
>> >Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>> >davem@davemloft.net
>> >Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption
>> >infrastructure support
>> >
>> >I have taken a deeper look and I'm even more convinced now that
>> >mcryptd is simply not needed in your current model.
>> >
>> >The only reason you would need mcryptd is if you need to limit the
>> >rate of requests going into the underlying mb algorithm.
>> >
>> >However, it doesn't do that all.  Even though it seems to have a
>> >batch size of 10, but because it immediately reschedules itself after
>> >the batch runs out, it's essentially just dumping all requests at the
>> >underlying algorithm as fast as they're coming in.  The underlying
>> >algorithm doesn't have need throttling anyway because it'll do the work
>when the queue is full synchronously.
>> >
>> >So why not just get rid of mcryptd completely and expose the
>> >underlying algorithm as a proper async skcipher/hash?
>>
>> Hi Herbert,
>>
>> Most part of the cryptd.c and mcryptd.c are similar, except the logic
>> used to flush out partially completed jobs in the case of multibuffer
>algorithms.
>>
>> I think I will try to merge the cryptd and mcryptd adding necessary quirks for
>multibuffer where needed.
>
>I think you didn't quite get my point.  From what I'm seeing you don't need
>either cryptd or mcryptd.  You just need to expose the underlying mb
>algorithm directly.

Hi Herbert,

Yeah, I think I misunderstood. What you mean is to remove mcryptd.c completely and avoid the extra layer of indirection, calling the underlying algorithm directly instead, correct?

So currently we have 3 algorithms registered for every multibuffer algorithm:
name         : __sha1-mb
driver       : mcryptd(__intel_sha1-mb)

name         : sha1
driver       : sha1_mb

name         : __sha1-mb
driver       : __intel_sha1-mb

If we remove mcryptd, then we will have just the 2?

The outer algorithm:sha1-mb, will 
>
>So I'm not sure what we would gain from merging cryptd and mcryptd.
>
>Cheers,
>--
>Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
>http://gondor.apana.org.au/~herbert/
>PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-04-19  0:54             ` Dey, Megha
@ 2018-04-19  3:25               ` Herbert Xu
  2018-04-25  1:14                 ` Dey, Megha
  0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2018-04-19  3:25 UTC (permalink / raw)
  To: Dey, Megha; +Cc: linux-kernel, linux-crypto, davem

On Thu, Apr 19, 2018 at 12:54:16AM +0000, Dey, Megha wrote:
>
> Yeah I think I misunderstood. I think what you mean is to remove mcryptd.c completely and avoid the extra layer of indirection to call the underlying algorithm, instead call it directly, correct?
> 
> So currently we have 3 algorithms registered for every multibuffer algorithm:
> name         : __sha1-mb
> driver       : mcryptd(__intel_sha1-mb)
> 
> name         : sha1
> driver       : sha1_mb
> 
> name         : __sha1-mb
> driver       : __intel_sha1-mb
> 
> If we remove mcryptd, then we will have just the 2?

It should be down to just one, i.e., the current inner algorithm.
It's doing all the scheduling work already so I don't really see
why it needs the wrappers around it.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* RE: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-04-19  3:25               ` Herbert Xu
@ 2018-04-25  1:14                 ` Dey, Megha
  2018-04-26  9:44                   ` Herbert Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Dey, Megha @ 2018-04-25  1:14 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-kernel, linux-crypto, davem



>-----Original Message-----
>From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>Sent: Wednesday, April 18, 2018 8:25 PM
>To: Dey, Megha <megha.dey@intel.com>
>Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>davem@davemloft.net
>Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure
>support
>
>On Thu, Apr 19, 2018 at 12:54:16AM +0000, Dey, Megha wrote:
>>
>> Yeah I think I misunderstood. I think what you mean is to remove mcryptd.c
>completely and avoid the extra layer of indirection to call the underlying
>algorithm, instead call it directly, correct?
>>
>> So currently we have 3 algorithms registered for every multibuffer algorithm:
>> name         : __sha1-mb
>> driver       : mcryptd(__intel_sha1-mb)
>>
>> name         : sha1
>> driver       : sha1_mb
>>
>> name         : __sha1-mb
>> driver       : __intel_sha1-mb
>>
>> If we remove mcryptd, then we will have just the 2?
>
>It should be down to just one, i.e., the current inner algorithm.
>It's doing all the scheduling work already so I don't really see why it needs the
>wrappers around it.

Hi Herbert,

Is there any existing implementation of an async crypto algorithm that uses the above approach? The ones I could find are either sync, have an outer and inner algorithm, or use cryptd.

I tried removing the mcryptd layer and the outer algorithm and did some plumbing to pass the correct structures, but I see crashes (obviously some errors in the plumbing).

I am not sure, if we remove mcryptd, how we would queue work, flush partially completed jobs or call completions (currently done by mcryptd) if we simply call the inner algorithm.

Are you suggesting these are not required at all? I am not sure how to move forward.

>
>Cheers,
>--
>Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
>http://gondor.apana.org.au/~herbert/
>PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-04-25  1:14                 ` Dey, Megha
@ 2018-04-26  9:44                   ` Herbert Xu
  2018-05-01 22:39                     ` Dey, Megha
  0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2018-04-26  9:44 UTC (permalink / raw)
  To: Dey, Megha; +Cc: linux-kernel, linux-crypto, davem

On Wed, Apr 25, 2018 at 01:14:26AM +0000, Dey, Megha wrote:
> 
> Is there any existing implementation of async crypto algorithm that uses the above approach? The ones I could find are either sync, have an outer and inner algorithm or use cryptd.
> 
> I tried removing the mcryptd layer and the outer algorithm and some plumbing to pass the correct structures, but see crashes.(obviously some errors in the plumbing)

OK, you can't just remove it because the inner algorithm requires
kernel_fpu_begin/kernel_fpu_end.  So we do need two layers but I
don't think we need cryptd or mcryptd.

The existing simd wrapper should work just fine on the inner
algorithm, provided that we add hash support to it.

> I am not sure if we remove mcryptd, how would we queue work, flush partially completed jobs or call completions (currently done by mcryptd) if we simply call the inner algorithm.

I don't think mcryptd is providing any real facility for the flushing
apart from a helper.  That same helper can live anywhere.
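For reference, the deferral pattern of the existing simd wrapper is
roughly the following (a simplified sketch of the crypto/simd.c
skcipher path; the ctx layout shown is assumed and the cryptd
plumbing is elided):

static int simd_wrapper_encrypt(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct simd_wrapper_ctx *ctx = crypto_skcipher_ctx(tfm);	/* assumed layout */
	struct skcipher_request *subreq = skcipher_request_ctx(req);
	struct crypto_skcipher *child;

	*subreq = *req;

	if (may_use_simd())
		child = ctx->inner_tfm;		/* run the FPU-using inner alg directly */
	else
		child = ctx->cryptd_tfm;	/* defer to cryptd's kthread context */

	skcipher_request_set_tfm(subreq, child);
	return crypto_skcipher_encrypt(subreq);
}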

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* RE: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-04-26  9:44                   ` Herbert Xu
@ 2018-05-01 22:39                     ` Dey, Megha
  2018-05-07  9:35                       ` Herbert Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Dey, Megha @ 2018-05-01 22:39 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-kernel, linux-crypto, davem



>-----Original Message-----
>From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>Sent: Thursday, April 26, 2018 2:45 AM
>To: Dey, Megha <megha.dey@intel.com>
>Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>davem@davemloft.net
>Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure
>support
>
>On Wed, Apr 25, 2018 at 01:14:26AM +0000, Dey, Megha wrote:
>>
>> Is there any existing implementation of async crypto algorithm that uses the
>above approach? The ones I could find are either sync, have an outer and
>inner algorithm or use cryptd.
>>
>> I tried removing the mcryptd layer and the outer algorithm and some
>> plumbing to pass the correct structures, but see crashes.(obviously
>> some errors in the plumbing)
>
>OK, you can't just remove it because the inner algorithm requires
>kernel_fpu_begin/kernel_fpu_end.  So we do need two layers but I don't think
>we need cryptd or mcryptd.
>
>The existing simd wrapper should work just fine on the inner algorithm,
>provided that we add hash support to it.

Hi Herbert,

crypto/simd.c provides a simd_skcipher_create_compat. I have used the same template to introduce simd_ahash_create_compat
which would wrap around the inner hash algorithm.

Hence we would still register 2 algs, outer and inner.
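Something like the following is what that registration would look
like (a sketch under the assumption that the proposed
simd_ahash_create_compat() mirrors the existing
simd_skcipher_create_compat(algname, drvname, basename) signature;
sha1_mb_inner_alg and the simd_ahash_alg type are placeholders):

static struct simd_ahash_alg *sha1_mb_simd;	/* placeholder type */

static int __init sha1_mb_mod_init(void)
{
	int err;

	/* inner (internal, FPU-using) multi-buffer hash */
	err = crypto_register_ahash(&sha1_mb_inner_alg);
	if (err)
		return err;

	/* outer wrapper exposed to users as "sha1" */
	sha1_mb_simd = simd_ahash_create_compat("sha1", "sha1_mb",
						"__intel_sha1-mb");
	if (IS_ERR(sha1_mb_simd)) {
		crypto_unregister_ahash(&sha1_mb_inner_alg);
		return PTR_ERR(sha1_mb_simd);
	}

	return 0;
}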
>
>> I am not sure if we remove mcryptd, how would we queue work, flush
>partially completed jobs or call completions (currently done by mcryptd) if we
>simply call the inner algorithm.
>
>I don't think mcryptd is providing any real facility to the flushing apart from a
>helper.  That same helper can live anywhere.

Currently we have outer_alg -> mcryptd alg -> inner_alg

Mcryptd is mainly providing the following:
1. Ensuring the lanes (8 in the case of AVX2) are full before dispatching to the lower inner algorithm. This is obviously why we would expect better performance for multi-buffer as opposed to the present single-buffer algorithms.
2. If there are no new incoming jobs, issue a flush.
3. A glue layer which sends the correct pointers and completions.

If we get rid of mcryptd, these functions need to be done by someone. Since all multi-buffer algorithms would require these tasks, where do you suggest these helpers live, if not the current mcryptd.c?

I am not sure if you are suggesting that we need to get rid of the mcryptd work queue itself. In that case, we would need to execute in the context of the job requesting the crypto transformation.
>
>Cheers,
>--
>Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
>http://gondor.apana.org.au/~herbert/
>PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-05-01 22:39                     ` Dey, Megha
@ 2018-05-07  9:35                       ` Herbert Xu
  2018-05-11  1:24                         ` Dey, Megha
  0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2018-05-07  9:35 UTC (permalink / raw)
  To: Dey, Megha; +Cc: linux-kernel, linux-crypto, davem

On Tue, May 01, 2018 at 10:39:15PM +0000, Dey, Megha wrote:
>
> crypto/simd.c provides a simd_skcipher_create_compat. I have used the same template to introduce simd_ahash_create_compat
> which would wrap around the inner hash algorithm.
> 
> Hence we would still register 2 algs, outer and inner.

Right.

> Currently we have outer_alg -> mcryptd alg -> inner_alg
> 
> Mcryptd is mainly providing the following:
> 1. Ensuring the lanes(8 in case of AVX2) are full before dispatching to the lower inner algorithm. This is obviously why we would expect better performance for multi-buffer as opposed to the present single-buffer algorithms.
> 2. If there no new incoming jobs, issue a flush.
> 3. A glue layer which sends the correct pointers and completions.
> 
> If we get rid of mcryptd, these functions needs to be done by someone. Since all multi-buffer algorithms would require this tasks, where do you suggest these helpers live, if not the current mcryptd.c?

That's the issue.  I don't think mcryptd is doing any of these
claimed functions except for hosting the flush helper which could
really live anywhere.

All these functions are actually being carried out in the inner
algorithm already.

> I am not sure if you are suggesting that we need to get rid of the mcryptd work queue itself. In that case, we would need to execute in the context of the job requesting the crypto transformation.

Which is fine as long as you can disable the FPU.  If not, the simd
wrapper will defer the job to kthread context as required.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* RE: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-05-07  9:35                       ` Herbert Xu
@ 2018-05-11  1:24                         ` Dey, Megha
  2018-05-11  4:45                           ` Herbert Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Dey, Megha @ 2018-05-11  1:24 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-kernel, linux-crypto, davem



>-----Original Message-----
>From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>Sent: Monday, May 7, 2018 2:35 AM
>To: Dey, Megha <megha.dey@intel.com>
>Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>davem@davemloft.net
>Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure
>support
>
>On Tue, May 01, 2018 at 10:39:15PM +0000, Dey, Megha wrote:
>>
>> crypto/simd.c provides a simd_skcipher_create_compat. I have used the
>> same template to introduce simd_ahash_create_compat which would wrap
>around the inner hash algorithm.
>>
>> Hence we would still register 2 algs, outer and inner.
>
>Right.
>
>> Currently we have outer_alg -> mcryptd alg -> inner_alg
>>
>> Mcryptd is mainly providing the following:
>> 1. Ensuring the lanes(8 in case of AVX2) are full before dispatching to the
>lower inner algorithm. This is obviously why we would expect better
>performance for multi-buffer as opposed to the present single-buffer
>algorithms.
>> 2. If there no new incoming jobs, issue a flush.
>> 3. A glue layer which sends the correct pointers and completions.
>>
>> If we get rid of mcryptd, these functions needs to be done by someone. Since
>all multi-buffer algorithms would require this tasks, where do you suggest these
>helpers live, if not the current mcryptd.c?
>
>That's the issue.  I don't think mcryptd is doing any of these claimed functions
>except for hosting the flush helper which could really live anywhere.
>
>All these functions are actually being carried out in the inner algorithm already.
>
>> I am not sure if you are suggesting that we need to get rid of the mcryptd
>work queue itself. In that case, we would need to execute in the context of the
>job requesting the crypto transformation.
>
>Which is fine as long as you can disable the FPU.  If not the simd wrapper will
>defer the job to kthread context as required.

Hi Herbert,

Are you suggesting that the SIMD wrapper will do what is currently being done by the 'mcryptd_queue_worker' function (assuming the FPU is not disabled), i.e. dispatching the job to the inner algorithm?

I have got rid of the mcryptd layer (I have an inner layer and an outer SIMD layer, and have handled the pointers and completions accordingly), but I am still facing some issues after removing the per-cpu mcryptd_cpu_queue.
 
>
>Cheers,
>--
>Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
>http://gondor.apana.org.au/~herbert/
>PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-05-11  1:24                         ` Dey, Megha
@ 2018-05-11  4:45                           ` Herbert Xu
  2018-05-12  1:21                             ` Dey, Megha
  2018-06-21  1:05                             ` Dey, Megha
  0 siblings, 2 replies; 21+ messages in thread
From: Herbert Xu @ 2018-05-11  4:45 UTC (permalink / raw)
  To: Dey, Megha; +Cc: linux-kernel, linux-crypto, davem

On Fri, May 11, 2018 at 01:24:42AM +0000, Dey, Megha wrote:
> 
> Are you suggesting that the SIMD wrapper, will do what is currently being done by the ' mcryptd_queue_worker ' function (assuming FPU is not disabled) i.e dispatching the job to the inner algorithm?
> 
> I have got rid of the mcryptd layer( have an inner layer, outer SIMD layer, handled the pointers and completions accordingly), but still facing some issues after removing the per cpu mcryptd_cpu_queue.

Why don't you post what you've got and we can work it out together?

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* RE: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-05-11  4:45                           ` Herbert Xu
@ 2018-05-12  1:21                             ` Dey, Megha
  2018-06-21  1:05                             ` Dey, Megha
  1 sibling, 0 replies; 21+ messages in thread
From: Dey, Megha @ 2018-05-12  1:21 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-kernel, linux-crypto, davem



>-----Original Message-----
>From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>Sent: Thursday, May 10, 2018 9:46 PM
>To: Dey, Megha <megha.dey@intel.com>
>Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>davem@davemloft.net
>Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure
>support
>
>On Fri, May 11, 2018 at 01:24:42AM +0000, Dey, Megha wrote:
>>
>> Are you suggesting that the SIMD wrapper, will do what is currently being
>done by the ' mcryptd_queue_worker ' function (assuming FPU is not disabled)
>i.e dispatching the job to the inner algorithm?
>>
>> I have got rid of the mcryptd layer( have an inner layer, outer SIMD layer,
>handled the pointers and completions accordingly), but still facing some issues
>after removing the per cpu mcryptd_cpu_queue.
>
>Why don't you post what you've got and we can work it out together?

Hi Herbert,

Sure, I will post an RFC patch. (crypto: Remove mcryptd). 

>
>Thanks,
>--
>Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
>http://gondor.apana.org.au/~herbert/
>PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* RE: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support
  2018-05-11  4:45                           ` Herbert Xu
  2018-05-12  1:21                             ` Dey, Megha
@ 2018-06-21  1:05                             ` Dey, Megha
  1 sibling, 0 replies; 21+ messages in thread
From: Dey, Megha @ 2018-06-21  1:05 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-kernel, linux-crypto, davem



>-----Original Message-----
>From: Dey, Megha
>Sent: Friday, May 11, 2018 6:22 PM
>To: Herbert Xu <herbert@gondor.apana.org.au>
>Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>davem@davemloft.net
>Subject: RE: [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure
>support
>
>
>
>>-----Original Message-----
>>From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>>Sent: Thursday, May 10, 2018 9:46 PM
>>To: Dey, Megha <megha.dey@intel.com>
>>Cc: linux-kernel@vger.kernel.org; linux-crypto@vger.kernel.org;
>>davem@davemloft.net
>>Subject: Re: [PATCH V8 1/5] crypto: Multi-buffer encryption
>>infrastructure support
>>
>>On Fri, May 11, 2018 at 01:24:42AM +0000, Dey, Megha wrote:
>>>
>>> Are you suggesting that the SIMD wrapper, will do what is currently
>>> being
>>done by the ' mcryptd_queue_worker ' function (assuming FPU is not
>>disabled) i.e dispatching the job to the inner algorithm?
>>>
>>> I have got rid of the mcryptd layer( have an inner layer, outer SIMD
>>> layer,
>>handled the pointers and completions accordingly), but still facing
>>some issues after removing the per cpu mcryptd_cpu_queue.
>>
>>Why don't you post what you've got and we can work it out together?
>
>Hi Herbert,
>
>Sure, I will post an RFC patch. (crypto: Remove mcryptd).

Hi Herbert,

I had posted an RFC patch about a month back (2018/05/11). Could you please have a look?

[RFC] crypto: Remove mcryptd
>
>>
>>Thanks,
>>--
>>Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
>>http://gondor.apana.org.au/~herbert/
>>PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Thread overview: 21+ messages:
2018-01-10  0:09 [PATCH V8 0/5] crypto: AES CBC multibuffer implementation Megha Dey
2018-01-10  0:09 ` [PATCH V8 1/5] crypto: Multi-buffer encryption infrastructure support Megha Dey
2018-01-18 11:39   ` Herbert Xu
2018-01-19  0:44     ` Megha Dey
2018-03-16 14:53       ` Herbert Xu
2018-04-17 18:40         ` Dey, Megha
2018-04-18 11:01           ` Herbert Xu
2018-04-19  0:54             ` Dey, Megha
2018-04-19  3:25               ` Herbert Xu
2018-04-25  1:14                 ` Dey, Megha
2018-04-26  9:44                   ` Herbert Xu
2018-05-01 22:39                     ` Dey, Megha
2018-05-07  9:35                       ` Herbert Xu
2018-05-11  1:24                         ` Dey, Megha
2018-05-11  4:45                           ` Herbert Xu
2018-05-12  1:21                             ` Dey, Megha
2018-06-21  1:05                             ` Dey, Megha
2018-01-10  0:09 ` [PATCH V8 2/5] crypto: AES CBC multi-buffer data structures Megha Dey
2018-01-10  0:09 ` [PATCH V8 3/5] crypto: AES CBC multi-buffer scheduler Megha Dey
2018-01-10  0:09 ` [PATCH V8 4/5] crypto: AES CBC by8 encryption Megha Dey
2018-01-10  0:09 ` [PATCH V8 5/5] crypto: AES CBC multi-buffer glue code Megha Dey
