* [PATCH 0/16] crypto: aead - Add single SG interface and new IPsec IV generation
From: Herbert Xu @ 2015-05-21  7:09 UTC
  To: Linux Crypto Mailing List

Hi:

This series of patches takes the opportunity of the AEAD conversion
to adjust the interface so that it is better suited to the intended
use-cases.

As it stands, AEAD takes two separate SG lists, one containing the
associated data (AD) and one containing the plain/cipher text.
These two lists have to be combined when we generate the ICV.

In order to provide the best performance, many AEAD algorithms try
to stitch these two lists together into a single contiguous SG
entry where possible.  Worse, with IPsec there is also an IV involved
that needs to be added into the mix.  The end result is a lot of
black magic all spread out through the crypto stack.  You just have
to take a look at the complexity in crypto/authenc.c to know what
I'm talking about (not to mention crypto/authencesn.c).

Another wart in the system is IV generation.  This is exclusively
used by IPsec.  However, it carries with it almost an entire
operation type in the form of givencrypt.  Again a lot of complexity
has been added in order to support this.

So this series slightly adjusts the AEAD interface in an attempt
to solve these two issues.  Firstly the AD is now placed into the
same SG list as the plain/cipher text.  This removes the need to
do any stitching.  As the primary user of AEAD, IPsec naturally
has a contiguous buffer containing the AD and plain/cipher text,
so this simplifies esp4/esp6 quite a bit.

Secondly, IV generation is now carried out by explicit but otherwise
normal AEAD algorithms.  The generated IV is simply part of the
cipher text.  This means that we can kill at least half of the AEAD
geniv code and all of the ablkcipher geniv code.  It also means
that authenc and other IPsec-specific algorithms no longer have
to do IV stitching.
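
To make this concrete, a caller of the new interface would set up
a request roughly as follows.  This is a minimal sketch using the
aead_request_set_ad helper added in patch 4; the buffer, length
and callback names are illustrative:

	/* Single SG list laid out as AD | plain/cipher text (| ICV). */
	struct scatterlist sg[1];
	struct aead_request *req;

	sg_init_one(sg, buf, assoclen + cryptlen + authsize);

	req = aead_request_alloc(tfm, GFP_KERNEL);
	aead_request_set_callback(req, 0, done, NULL);
	aead_request_set_crypt(req, sg, sg, cryptlen, iv);
	aead_request_set_ad(req, assoclen, 0);	/* no skipped data */

	err = crypto_aead_encrypt(req);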

This series only creates the new interface.  The actual conversion
will be carried out in subsequent series.  The old interface will
be maintained until the conversion is complete.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* [PATCH 1/16] crypto: cryptd - Use crypto_grab_aead
From: Herbert Xu @ 2015-05-21  7:10 UTC
  To: Linux Crypto Mailing List

Now that AEAD has switched over to using frontend types, the
function crypto_init_spawn must no longer be used since it does
not specify a frontend type.  Using it leads to a crash when the
spawn is used.

This patch fixes it by switching over to crypto_grab_aead instead.

Fixes: 5d1d65f8bea6 ("crypto: aead - Convert top level interface to new style")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/cryptd.c |   60 ++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 23 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index e1584fb..4264c8d 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -295,6 +295,23 @@ static void cryptd_blkcipher_exit_tfm(struct crypto_tfm *tfm)
 	crypto_free_blkcipher(ctx->child);
 }
 
+static int cryptd_init_instance(struct crypto_instance *inst,
+				struct crypto_alg *alg)
+{
+	if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+		     "cryptd(%s)",
+		     alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+		return -ENAMETOOLONG;
+
+	memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
+
+	inst->alg.cra_priority = alg->cra_priority + 50;
+	inst->alg.cra_blocksize = alg->cra_blocksize;
+	inst->alg.cra_alignmask = alg->cra_alignmask;
+
+	return 0;
+}
+
 static void *cryptd_alloc_instance(struct crypto_alg *alg, unsigned int head,
 				   unsigned int tail)
 {
@@ -308,17 +325,10 @@ static void *cryptd_alloc_instance(struct crypto_alg *alg, unsigned int head,
 
 	inst = (void *)(p + head);
 
-	err = -ENAMETOOLONG;
-	if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-		     "cryptd(%s)", alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+	err = cryptd_init_instance(inst, alg);
+	if (err)
 		goto out_free_inst;
 
-	memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
-
-	inst->alg.cra_priority = alg->cra_priority + 50;
-	inst->alg.cra_blocksize = alg->cra_blocksize;
-	inst->alg.cra_alignmask = alg->cra_alignmask;
-
 out:
 	return p;
 
@@ -747,29 +757,34 @@ static int cryptd_create_aead(struct crypto_template *tmpl,
 	struct aead_instance_ctx *ctx;
 	struct crypto_instance *inst;
 	struct crypto_alg *alg;
-	u32 type = CRYPTO_ALG_TYPE_AEAD;
-	u32 mask = CRYPTO_ALG_TYPE_MASK;
+	const char *name;
+	u32 type = 0;
+	u32 mask = 0;
 	int err;
 
 	cryptd_check_internal(tb, &type, &mask);
 
-	alg = crypto_get_attr_alg(tb, type, mask);
-        if (IS_ERR(alg))
-		return PTR_ERR(alg);
+	name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(name))
+		return PTR_ERR(name);
 
-	inst = cryptd_alloc_instance(alg, 0, sizeof(*ctx));
-	err = PTR_ERR(inst);
-	if (IS_ERR(inst))
-		goto out_put_alg;
+	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
 
 	ctx = crypto_instance_ctx(inst);
 	ctx->queue = queue;
 
-	err = crypto_init_spawn(&ctx->aead_spawn.base, alg, inst,
-			CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
+	crypto_set_aead_spawn(&ctx->aead_spawn, inst);
+	err = crypto_grab_aead(&ctx->aead_spawn, name, type, mask);
 	if (err)
 		goto out_free_inst;
 
+	alg = crypto_aead_spawn_alg(&ctx->aead_spawn);
+	err = cryptd_init_instance(inst, alg);
+	if (err)
+		goto out_drop_aead;
+
 	type = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_ASYNC;
 	if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
 		type |= CRYPTO_ALG_INTERNAL;
@@ -790,12 +805,11 @@ static int cryptd_create_aead(struct crypto_template *tmpl,
 
 	err = crypto_register_instance(tmpl, inst);
 	if (err) {
-		crypto_drop_spawn(&ctx->aead_spawn.base);
+out_drop_aead:
+		crypto_drop_aead(&ctx->aead_spawn);
 out_free_inst:
 		kfree(inst);
 	}
-out_put_alg:
-	crypto_mod_put(alg);
 	return err;
 }
 


* [PATCH 2/16] crypto: pcrypt - Use crypto_grab_aead
From: Herbert Xu @ 2015-05-21  7:10 UTC
  To: Linux Crypto Mailing List

Now that AEAD has switched over to using frontend types, the
function crypto_init_spawn must no longer be used since it does
not specify a frontend type.  Using it leads to a crash when the
spawn is used.

This patch fixes it by switching over to crypto_grab_aead instead.

Fixes: 5d1d65f8bea6 ("crypto: aead - Convert top level interface to new style")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/pcrypt.c |   71 +++++++++++++++++++++++++++-----------------------------
 1 file changed, 35 insertions(+), 36 deletions(-)

diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
index ac115d5..3942a9f 100644
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -60,7 +60,7 @@ static struct padata_pcrypt pdecrypt;
 static struct kset           *pcrypt_kset;
 
 struct pcrypt_instance_ctx {
-	struct crypto_spawn spawn;
+	struct crypto_aead_spawn spawn;
 	unsigned int tfm_count;
 };
 
@@ -307,57 +307,50 @@ static void pcrypt_aead_exit_tfm(struct crypto_tfm *tfm)
 	crypto_free_aead(ctx->child);
 }
 
-static struct crypto_instance *pcrypt_alloc_instance(struct crypto_alg *alg)
+static int pcrypt_init_instance(struct crypto_instance *inst,
+				struct crypto_alg *alg)
 {
-	struct crypto_instance *inst;
-	struct pcrypt_instance_ctx *ctx;
-	int err;
-
-	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
-	if (!inst) {
-		inst = ERR_PTR(-ENOMEM);
-		goto out;
-	}
-
-	err = -ENAMETOOLONG;
 	if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
 		     "pcrypt(%s)", alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
-		goto out_free_inst;
+		return -ENAMETOOLONG;
 
 	memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
 
-	ctx = crypto_instance_ctx(inst);
-	err = crypto_init_spawn(&ctx->spawn, alg, inst,
-				CRYPTO_ALG_TYPE_MASK);
-	if (err)
-		goto out_free_inst;
-
 	inst->alg.cra_priority = alg->cra_priority + 100;
 	inst->alg.cra_blocksize = alg->cra_blocksize;
 	inst->alg.cra_alignmask = alg->cra_alignmask;
 
-out:
-	return inst;
-
-out_free_inst:
-	kfree(inst);
-	inst = ERR_PTR(err);
-	goto out;
+	return 0;
 }
 
 static struct crypto_instance *pcrypt_alloc_aead(struct rtattr **tb,
 						 u32 type, u32 mask)
 {
+	struct pcrypt_instance_ctx *ctx;
 	struct crypto_instance *inst;
 	struct crypto_alg *alg;
+	const char *name;
+	int err;
+
+	name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(name))
+		return ERR_CAST(name);
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+	if (!inst)
+		return ERR_PTR(-ENOMEM);
+
+	ctx = crypto_instance_ctx(inst);
+	crypto_set_aead_spawn(&ctx->spawn, inst);
 
-	alg = crypto_get_attr_alg(tb, type, (mask & CRYPTO_ALG_TYPE_MASK));
-	if (IS_ERR(alg))
-		return ERR_CAST(alg);
+	err = crypto_grab_aead(&ctx->spawn, name, 0, 0);
+	if (err)
+		goto out_free_inst;
 
-	inst = pcrypt_alloc_instance(alg);
-	if (IS_ERR(inst))
-		goto out_put_alg;
+	alg = crypto_aead_spawn_alg(&ctx->spawn);
+	err = pcrypt_init_instance(inst, alg);
+	if (err)
+		goto out_drop_aead;
 
 	inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_ASYNC;
 	inst->alg.cra_type = &crypto_aead_type;
@@ -377,9 +370,15 @@ static struct crypto_instance *pcrypt_alloc_aead(struct rtattr **tb,
 	inst->alg.cra_aead.decrypt = pcrypt_aead_decrypt;
 	inst->alg.cra_aead.givencrypt = pcrypt_aead_givencrypt;
 
-out_put_alg:
-	crypto_mod_put(alg);
+out:
 	return inst;
+
+out_drop_aead:
+	crypto_drop_aead(&ctx->spawn);
+out_free_inst:
+	kfree(inst);
+	inst = ERR_PTR(err);
+	goto out;
 }
 
 static struct crypto_instance *pcrypt_alloc(struct rtattr **tb)
@@ -402,7 +401,7 @@ static void pcrypt_free(struct crypto_instance *inst)
 {
 	struct pcrypt_instance_ctx *ctx = crypto_instance_ctx(inst);
 
-	crypto_drop_spawn(&ctx->spawn);
+	crypto_drop_aead(&ctx->spawn);
 	kfree(inst);
 }
 


* [PATCH 3/16] crypto: scatterwalk - Add scatterwalk_ffwd helper
From: Herbert Xu @ 2015-05-21  7:10 UTC
  To: Linux Crypto Mailing List

This patch adds the scatterwalk_ffwd helper which can create an
SG list that starts in the middle of an existing SG list.  The
new list may either be part of the existing list or be a chain
that latches onto part of the existing list.
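
For example, the following sketch obtains an SG list that starts
"skip" bytes into src (the names are illustrative):

	struct scatterlist buf[2];
	struct scatterlist *sg;

	/* The result is either a pointer into src itself (when the
	 * skip ends exactly on an entry boundary) or the two-entry
	 * buf chained onto the remainder of src. */
	sg = scatterwalk_ffwd(buf, src, skip);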

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/scatterwalk.c         |   22 ++++++++++++++++++++++
 include/crypto/scatterwalk.h |    4 ++++
 2 files changed, 26 insertions(+)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index 3bd749c..db920b5 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -146,3 +146,25 @@ int scatterwalk_bytes_sglen(struct scatterlist *sg, int num_bytes)
 	return n;
 }
 EXPORT_SYMBOL_GPL(scatterwalk_bytes_sglen);
+
+struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
+				     struct scatterlist *src,
+				     unsigned int len)
+{
+	for (;;) {
+		if (!len)
+			return src;
+
+		if (src->length > len)
+			break;
+
+		len -= src->length;
+		src = sg_next(src);
+	}
+
+	sg_set_page(dst, sg_page(src), src->length - len, src->offset + len);
+	scatterwalk_crypto_chain(dst, sg_next(src), 0, 2);
+
+	return dst;
+}
+EXPORT_SYMBOL_GPL(scatterwalk_ffwd);
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index 20e4226..96670e7 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -102,4 +102,8 @@ void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
 
 int scatterwalk_bytes_sglen(struct scatterlist *sg, int num_bytes);
 
+struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
+				     struct scatterlist *src,
+				     unsigned int len);
+
 #endif  /* _CRYPTO_SCATTERWALK_H */


* [PATCH 4/16] crypto: aead - Add new interface with single SG list
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

IPsec, the primary user of AEAD, includes the IV in the AD in
most cases, except where it is implicitly authenticated by the
underlying algorithm.

The current implementation is a hack: we pass the data in
piecemeal and the underlying algorithms try to stitch it back
into one piece.

This patch therefore adds a new interface that allows a single
SG list containing everything to be passed in, so that algorithm
implementors do not have to stitch.

The new interface accepts a single source SG list and a single
destination SG list.  Both must be laid out as follows:

	AD, skipped data, plain/cipher text, ICV

The ICV is absent from the source during encryption and from
the destination during decryption.

For the top-level IPsec AEAD algorithm the plain/cipher text will
contain the generated (or received) IV.
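
For instance, a decryption request over such lists might be set
up as follows.  This is a sketch; the names are illustrative, and
as with the old interface the cryptlen passed for decryption
includes the ICV:

	/* src: AD | skipped | cipher text | ICV
	 * dst: AD | skipped | plain text */
	aead_request_set_crypt(req, src, dst, ctlen + authsize, iv);
	aead_request_set_ad(req, assoclen, skip);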

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/aead.c         |   58 ++++++++++++++++++++++++++++++++++++++++++++++++--
 include/crypto/aead.h |   35 +++++++++++++++++++++++++-----
 2 files changed, 85 insertions(+), 8 deletions(-)

diff --git a/crypto/aead.c b/crypto/aead.c
index 717b2f6..c2bf3b3 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -13,6 +13,7 @@
  */
 
 #include <crypto/internal/aead.h>
+#include <crypto/scatterwalk.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
@@ -85,6 +86,59 @@ int crypto_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
 }
 EXPORT_SYMBOL_GPL(crypto_aead_setauthsize);
 
+struct aead_old_request {
+	struct scatterlist srcbuf[2];
+	struct scatterlist dstbuf[2];
+	struct aead_request subreq;
+};
+
+unsigned int crypto_aead_reqsize(struct crypto_aead *tfm)
+{
+	return tfm->reqsize + sizeof(struct aead_old_request);
+}
+EXPORT_SYMBOL_GPL(crypto_aead_reqsize);
+
+static int old_crypt(struct aead_request *req,
+		     int (*crypt)(struct aead_request *req))
+{
+	struct aead_old_request *nreq = aead_request_ctx(req);
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	struct scatterlist *src, *dst;
+
+	if (req->old)
+		return crypt(req);
+
+	src = scatterwalk_ffwd(nreq->srcbuf, req->src,
+			       req->assoclen + req->cryptoff);
+	dst = scatterwalk_ffwd(nreq->dstbuf, req->dst,
+			       req->assoclen + req->cryptoff);
+
+	aead_request_set_tfm(&nreq->subreq, aead);
+	aead_request_set_callback(&nreq->subreq, aead_request_flags(req),
+				  req->base.complete, req->base.data);
+	aead_request_set_crypt(&nreq->subreq, src, dst, req->cryptlen,
+			       req->iv);
+	aead_request_set_assoc(&nreq->subreq, req->src, req->assoclen);
+
+	return crypt(&nreq->subreq);
+}
+
+static int old_encrypt(struct aead_request *req)
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	struct aead_alg *alg = crypto_aead_alg(aead);
+
+	return old_crypt(req, alg->encrypt);
+}
+
+static int old_decrypt(struct aead_request *req)
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	struct aead_alg *alg = crypto_aead_alg(aead);
+
+	return old_crypt(req, alg->decrypt);
+}
+
 static int no_givcrypt(struct aead_givcrypt_request *req)
 {
 	return -ENOSYS;
@@ -98,8 +152,8 @@ static int crypto_aead_init_tfm(struct crypto_tfm *tfm)
 	if (max(alg->maxauthsize, alg->ivsize) > PAGE_SIZE / 8)
 		return -EINVAL;
 
-	crt->encrypt = alg->encrypt;
-	crt->decrypt = alg->decrypt;
+	crt->encrypt = old_encrypt;
+	crt->decrypt = old_decrypt;
 	if (alg->ivsize) {
 		crt->givencrypt = alg->givencrypt ?: no_givcrypt;
 		crt->givdecrypt = alg->givdecrypt ?: no_givcrypt;
diff --git a/include/crypto/aead.h b/include/crypto/aead.h
index dbcad08..e2d2c3c 100644
--- a/include/crypto/aead.h
+++ b/include/crypto/aead.h
@@ -52,6 +52,7 @@
  *	@base: Common attributes for async crypto requests
  *	@assoclen: Length in bytes of associated data for authentication
  *	@cryptlen: Length of data to be encrypted or decrypted
+ *	@cryptoff: Bytes to skip after AD before plain/cipher text
  *	@iv: Initialisation vector
  *	@assoc: Associated data
  *	@src: Source data
@@ -61,8 +62,11 @@
 struct aead_request {
 	struct crypto_async_request base;
 
+	bool old;
+
 	unsigned int assoclen;
 	unsigned int cryptlen;
+	unsigned int cryptoff;
 
 	u8 *iv;
 
@@ -314,10 +318,7 @@ static inline int crypto_aead_decrypt(struct aead_request *req)
  *
  * Return: number of bytes
  */
-static inline unsigned int crypto_aead_reqsize(struct crypto_aead *tfm)
-{
-	return tfm->reqsize;
-}
+unsigned int crypto_aead_reqsize(struct crypto_aead *tfm);
 
 /**
  * aead_request_set_tfm() - update cipher handle reference in request
@@ -417,6 +418,9 @@ static inline void aead_request_set_callback(struct aead_request *req,
  * destination is the ciphertext. For a decryption operation, the use is
  * reversed - the source is the ciphertext and the destination is the plaintext.
  *
+ * For both src/dst the layout is associated data, skipped data,
+ * plain/cipher text, authentication tag.
+ *
  * IMPORTANT NOTE AEAD requires an authentication tag (MAC). For decryption,
  *		  the caller must concatenate the ciphertext followed by the
  *		  authentication tag and provide the entire data stream to the
@@ -449,8 +453,7 @@ static inline void aead_request_set_crypt(struct aead_request *req,
  * @assoc: associated data scatter / gather list
  * @assoclen: number of bytes to process from @assoc
  *
- * For encryption, the memory is filled with the associated data. For
- * decryption, the memory must point to the associated data.
+ * Obsolete, do not use.
  */
 static inline void aead_request_set_assoc(struct aead_request *req,
 					  struct scatterlist *assoc,
@@ -458,6 +461,26 @@ static inline void aead_request_set_assoc(struct aead_request *req,
 {
 	req->assoc = assoc;
 	req->assoclen = assoclen;
+	req->old = true;
+}
+
+/**
+ * aead_request_set_ad - set associated data information
+ * @req: request handle
+ * @assoclen: number of bytes in associated data
+ * @cryptoff: Number of bytes to skip after AD before plain/cipher text
+ *
+ * Setting the AD information.  This function sets the length of
+ * the associated data and the number of bytes to skip after it to
+ * access the plain/cipher text.
+ */
+static inline void aead_request_set_ad(struct aead_request *req,
+				       unsigned int assoclen,
+				       unsigned int cryptoff)
+{
+	req->assoclen = assoclen;
+	req->cryptoff = cryptoff;
+	req->old = false;
 }
 
 static inline struct crypto_aead *aead_givcrypt_reqtfm(


* [PATCH 5/16] crypto: aead - Rename aead_alg to old_aead_alg
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch is the first step in the introduction of a new AEAD
alg type.  Unlike normal conversions, this patch only renames the
existing aead_alg structure because there are external references
to it.

Those references will be removed after this patch.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/aead.c                  |   25 +++++++++++++------------
 include/crypto/aead.h          |    2 ++
 include/crypto/internal/aead.h |    5 +++++
 include/linux/crypto.h         |    6 +++---
 4 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/crypto/aead.c b/crypto/aead.c
index c2bf3b3..ebc91ea 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -33,7 +33,7 @@ static int aead_null_givdecrypt(struct aead_givcrypt_request *req);
 static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
 			    unsigned int keylen)
 {
-	struct aead_alg *aead = crypto_aead_alg(tfm);
+	struct old_aead_alg *aead = crypto_old_aead_alg(tfm);
 	unsigned long alignmask = crypto_aead_alignmask(tfm);
 	int ret;
 	u8 *buffer, *alignbuffer;
@@ -55,7 +55,7 @@ static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
 int crypto_aead_setkey(struct crypto_aead *tfm,
 		       const u8 *key, unsigned int keylen)
 {
-	struct aead_alg *aead = crypto_aead_alg(tfm);
+	struct old_aead_alg *aead = crypto_old_aead_alg(tfm);
 	unsigned long alignmask = crypto_aead_alignmask(tfm);
 
 	tfm = tfm->child;
@@ -71,11 +71,12 @@ int crypto_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
 {
 	int err;
 
-	if (authsize > crypto_aead_alg(tfm)->maxauthsize)
+	if (authsize > crypto_old_aead_alg(tfm)->maxauthsize)
 		return -EINVAL;
 
-	if (crypto_aead_alg(tfm)->setauthsize) {
-		err = crypto_aead_alg(tfm)->setauthsize(tfm->child, authsize);
+	if (crypto_old_aead_alg(tfm)->setauthsize) {
+		err = crypto_old_aead_alg(tfm)->setauthsize(
+			tfm->child, authsize);
 		if (err)
 			return err;
 	}
@@ -126,7 +127,7 @@ static int old_crypt(struct aead_request *req,
 static int old_encrypt(struct aead_request *req)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct aead_alg *alg = crypto_aead_alg(aead);
+	struct old_aead_alg *alg = crypto_old_aead_alg(aead);
 
 	return old_crypt(req, alg->encrypt);
 }
@@ -134,7 +135,7 @@ static int old_encrypt(struct aead_request *req)
 static int old_decrypt(struct aead_request *req)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
-	struct aead_alg *alg = crypto_aead_alg(aead);
+	struct old_aead_alg *alg = crypto_old_aead_alg(aead);
 
 	return old_crypt(req, alg->decrypt);
 }
@@ -146,7 +147,7 @@ static int no_givcrypt(struct aead_givcrypt_request *req)
 
 static int crypto_aead_init_tfm(struct crypto_tfm *tfm)
 {
-	struct aead_alg *alg = &tfm->__crt_alg->cra_aead;
+	struct old_aead_alg *alg = &tfm->__crt_alg->cra_aead;
 	struct crypto_aead *crt = __crypto_aead_cast(tfm);
 
 	if (max(alg->maxauthsize, alg->ivsize) > PAGE_SIZE / 8)
@@ -172,7 +173,7 @@ static int crypto_aead_init_tfm(struct crypto_tfm *tfm)
 static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
 	struct crypto_report_aead raead;
-	struct aead_alg *aead = &alg->cra_aead;
+	struct old_aead_alg *aead = &alg->cra_aead;
 
 	strncpy(raead.type, "aead", sizeof(raead.type));
 	strncpy(raead.geniv, aead->geniv ?: "<built-in>", sizeof(raead.geniv));
@@ -200,7 +201,7 @@ static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
 	__attribute__ ((unused));
 static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
 {
-	struct aead_alg *aead = &alg->cra_aead;
+	struct old_aead_alg *aead = &alg->cra_aead;
 
 	seq_printf(m, "type         : aead\n");
 	seq_printf(m, "async        : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
@@ -240,7 +241,7 @@ static int aead_null_givdecrypt(struct aead_givcrypt_request *req)
 static int crypto_nivaead_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
 	struct crypto_report_aead raead;
-	struct aead_alg *aead = &alg->cra_aead;
+	struct old_aead_alg *aead = &alg->cra_aead;
 
 	strncpy(raead.type, "nivaead", sizeof(raead.type));
 	strncpy(raead.geniv, aead->geniv, sizeof(raead.geniv));
@@ -269,7 +270,7 @@ static void crypto_nivaead_show(struct seq_file *m, struct crypto_alg *alg)
 	__attribute__ ((unused));
 static void crypto_nivaead_show(struct seq_file *m, struct crypto_alg *alg)
 {
-	struct aead_alg *aead = &alg->cra_aead;
+	struct old_aead_alg *aead = &alg->cra_aead;
 
 	seq_printf(m, "type         : nivaead\n");
 	seq_printf(m, "async        : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
diff --git a/include/crypto/aead.h b/include/crypto/aead.h
index e2d2c3c..aebf57d 100644
--- a/include/crypto/aead.h
+++ b/include/crypto/aead.h
@@ -17,6 +17,8 @@
 #include <linux/kernel.h>
 #include <linux/slab.h>
 
+#define aead_alg old_aead_alg
+
 /**
  * DOC: Authenticated Encryption With Associated Data (AEAD) Cipher API
  *
diff --git a/include/crypto/internal/aead.h b/include/crypto/internal/aead.h
index a2d104a..84c17bb 100644
--- a/include/crypto/internal/aead.h
+++ b/include/crypto/internal/aead.h
@@ -26,6 +26,11 @@ struct crypto_aead_spawn {
 extern const struct crypto_type crypto_aead_type;
 extern const struct crypto_type crypto_nivaead_type;
 
+static inline struct old_aead_alg *crypto_old_aead_alg(struct crypto_aead *tfm)
+{
+	return &crypto_aead_tfm(tfm)->__crt_alg->cra_aead;
+}
+
 static inline struct aead_alg *crypto_aead_alg(struct crypto_aead *tfm)
 {
 	return &crypto_aead_tfm(tfm)->__crt_alg->cra_aead;
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 59ca408..7d290a9 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -268,7 +268,7 @@ struct ablkcipher_alg {
 };
 
 /**
- * struct aead_alg - AEAD cipher definition
+ * struct old_aead_alg - AEAD cipher definition
  * @maxauthsize: Set the maximum authentication tag size supported by the
  *		 transformation. A transformation may support smaller tag sizes.
  *		 As the authentication tag is a message digest to ensure the
@@ -293,7 +293,7 @@ struct ablkcipher_alg {
  * All fields except @givencrypt , @givdecrypt , @geniv and @ivsize are
  * mandatory and must be filled.
  */
-struct aead_alg {
+struct old_aead_alg {
 	int (*setkey)(struct crypto_aead *tfm, const u8 *key,
 	              unsigned int keylen);
 	int (*setauthsize)(struct crypto_aead *tfm, unsigned int authsize);
@@ -501,7 +501,7 @@ struct crypto_alg {
 
 	union {
 		struct ablkcipher_alg ablkcipher;
-		struct aead_alg aead;
+		struct old_aead_alg aead;
 		struct blkcipher_alg blkcipher;
 		struct cipher_alg cipher;
 		struct compress_alg compress;


* [PATCH 6/16] crypto: caam - Use old_aead_alg
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch replaces references to aead_alg with old_aead_alg.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/caam/caamalg.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index a34fc95..3d850ab 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -3379,7 +3379,7 @@ struct caam_alg_template {
 	u32 type;
 	union {
 		struct ablkcipher_alg ablkcipher;
-		struct aead_alg aead;
+		struct old_aead_alg aead;
 	} template_u;
 	u32 class1_alg_type;
 	u32 class2_alg_type;


* [PATCH 7/16] crypto: aead - Add crypto_aead_maxauthsize
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch adds the helper crypto_aead_maxauthsize so that AEAD
implementors no longer need to dereference aead_alg internals directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/internal/aead.h |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/crypto/internal/aead.h b/include/crypto/internal/aead.h
index 84c17bb..4614f79 100644
--- a/include/crypto/internal/aead.h
+++ b/include/crypto/internal/aead.h
@@ -119,5 +119,10 @@ static inline void crypto_aead_set_reqsize(struct crypto_aead *aead,
 	crypto_aead_crt(aead)->reqsize = reqsize;
 }
 
+static inline unsigned int crypto_aead_maxauthsize(struct crypto_aead *aead)
+{
+	return crypto_old_aead_alg(aead)->maxauthsize;
+}
+
 #endif	/* _CRYPTO_INTERNAL_AEAD_H */
 


* [PATCH 8/16] crypto: ixp4xx - Use crypto_aead_maxauthsize
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch uses the helper crypto_aead_maxauthsize instead of
directly dereferencing aead_alg.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/ixp4xx_crypto.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c
index 46cd696..7ba495f 100644
--- a/drivers/crypto/ixp4xx_crypto.c
+++ b/drivers/crypto/ixp4xx_crypto.c
@@ -1097,7 +1097,7 @@ static int aead_setup(struct crypto_aead *tfm, unsigned int authsize)
 {
 	struct ixp_ctx *ctx = crypto_aead_ctx(tfm);
 	u32 *flags = &tfm->base.crt_flags;
-	unsigned digest_len = crypto_aead_alg(tfm)->maxauthsize;
+	unsigned digest_len = crypto_aead_maxauthsize(tfm);
 	int ret;
 
 	if (!ctx->enckey_len && !ctx->authkey_len)
@@ -1139,7 +1139,7 @@ out:
 
 static int aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
 {
-	int max = crypto_aead_alg(tfm)->maxauthsize >> 2;
+	int max = crypto_aead_maxauthsize(tfm) >> 2;
 
 	if ((authsize>>2) < 1 || (authsize>>2) > max || (authsize & 3))
 		return -EINVAL;


* [PATCH 9/16] crypto: nx - Remove unnecessary maxauthsize check
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

The crypto layer already checks maxauthsize when setauthsize is
called, so there is no need to check it again within the driver's
setauthsize implementation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/nx/nx-aes-gcm.c |    3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c
index 88c5624..e4e64f6 100644
--- a/drivers/crypto/nx/nx-aes-gcm.c
+++ b/drivers/crypto/nx/nx-aes-gcm.c
@@ -96,9 +96,6 @@ out:
 static int gcm_aes_nx_setauthsize(struct crypto_aead *tfm,
 				  unsigned int authsize)
 {
-	if (authsize > crypto_aead_alg(tfm)->maxauthsize)
-		return -EINVAL;
-
 	crypto_aead_crt(tfm)->authsize = authsize;
 
 	return 0;


* [PATCH 10/16] crypto: aead - Add support for new AEAD implementations
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch adds the basic structure of the new AEAD type.  Unlike
the current version, there is no longer any concept of geniv.  IV
generation will still be carried out by wrappers but they will be
normal AEAD algorithms that simply take the IPsec sequence number
as the IV.
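
For implementors, registration against the new type looks roughly
like this sketch, which uses only the fields and entry points
added by this patch (the callbacks and names are placeholders):

	static struct aead_alg my_aead = {
		.setkey		= my_setkey,
		.setauthsize	= my_setauthsize,
		.encrypt	= my_encrypt,
		.decrypt	= my_decrypt,
		.ivsize		= 12,
		.maxauthsize	= 16,
		.base = {
			.cra_name	 = "gcm(aes)",
			.cra_driver_name = "my-gcm-aes",
			.cra_priority	 = 300,
			.cra_blocksize	 = 1,
			.cra_module	 = THIS_MODULE,
		},
	};

	err = crypto_register_aead(&my_aead);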

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/aead.c                  |  152 ++++++++++++++++++++++++++++++++++++-----
 include/crypto/aead.h          |   44 +++++++++++
 include/crypto/internal/aead.h |   36 +++++++++
 3 files changed, 213 insertions(+), 19 deletions(-)

diff --git a/crypto/aead.c b/crypto/aead.c
index ebc91ea..d231e28 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -33,7 +33,6 @@ static int aead_null_givdecrypt(struct aead_givcrypt_request *req);
 static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
 			    unsigned int keylen)
 {
-	struct old_aead_alg *aead = crypto_old_aead_alg(tfm);
 	unsigned long alignmask = crypto_aead_alignmask(tfm);
 	int ret;
 	u8 *buffer, *alignbuffer;
@@ -46,7 +45,7 @@ static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
-	ret = aead->setkey(tfm, alignbuffer, keylen);
+	ret = tfm->setkey(tfm, alignbuffer, keylen);
 	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return ret;
@@ -55,7 +54,6 @@ static int setkey_unaligned(struct crypto_aead *tfm, const u8 *key,
 int crypto_aead_setkey(struct crypto_aead *tfm,
 		       const u8 *key, unsigned int keylen)
 {
-	struct old_aead_alg *aead = crypto_old_aead_alg(tfm);
 	unsigned long alignmask = crypto_aead_alignmask(tfm);
 
 	tfm = tfm->child;
@@ -63,7 +61,7 @@ int crypto_aead_setkey(struct crypto_aead *tfm,
 	if ((unsigned long)key & alignmask)
 		return setkey_unaligned(tfm, key, keylen);
 
-	return aead->setkey(tfm, key, keylen);
+	return tfm->setkey(tfm, key, keylen);
 }
 EXPORT_SYMBOL_GPL(crypto_aead_setkey);
 
@@ -71,12 +69,11 @@ int crypto_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
 {
 	int err;
 
-	if (authsize > crypto_old_aead_alg(tfm)->maxauthsize)
+	if (authsize > tfm->maxauthsize)
 		return -EINVAL;
 
-	if (crypto_old_aead_alg(tfm)->setauthsize) {
-		err = crypto_old_aead_alg(tfm)->setauthsize(
-			tfm->child, authsize);
+	if (tfm->setauthsize) {
+		err = tfm->setauthsize(tfm->child, authsize);
 		if (err)
 			return err;
 	}
@@ -145,7 +142,7 @@ static int no_givcrypt(struct aead_givcrypt_request *req)
 	return -ENOSYS;
 }
 
-static int crypto_aead_init_tfm(struct crypto_tfm *tfm)
+static int crypto_old_aead_init_tfm(struct crypto_tfm *tfm)
 {
 	struct old_aead_alg *alg = &tfm->__crt_alg->cra_aead;
 	struct crypto_aead *crt = __crypto_aead_cast(tfm);
@@ -153,6 +150,8 @@ static int crypto_aead_init_tfm(struct crypto_tfm *tfm)
 	if (max(alg->maxauthsize, alg->ivsize) > PAGE_SIZE / 8)
 		return -EINVAL;
 
+	crt->setkey = alg->setkey;
+	crt->setauthsize = alg->setauthsize;
 	crt->encrypt = old_encrypt;
 	crt->decrypt = old_decrypt;
 	if (alg->ivsize) {
@@ -164,13 +163,34 @@ static int crypto_aead_init_tfm(struct crypto_tfm *tfm)
 	}
 	crt->child = __crypto_aead_cast(tfm);
 	crt->ivsize = alg->ivsize;
+	crt->maxauthsize = alg->maxauthsize;
 	crt->authsize = alg->maxauthsize;
 
 	return 0;
 }
 
+static int crypto_aead_init_tfm(struct crypto_tfm *tfm)
+{
+	struct crypto_aead *aead = __crypto_aead_cast(tfm);
+	struct aead_alg *alg = crypto_aead_alg(aead);
+
+	if (crypto_old_aead_alg(aead)->encrypt)
+		return crypto_old_aead_init_tfm(tfm);
+
+	aead->setkey = alg->setkey;
+	aead->setauthsize = alg->setauthsize;
+	aead->encrypt = alg->encrypt;
+	aead->decrypt = alg->decrypt;
+	aead->child = __crypto_aead_cast(tfm);
+	aead->ivsize = alg->ivsize;
+	aead->maxauthsize = alg->maxauthsize;
+	aead->authsize = alg->maxauthsize;
+
+	return 0;
+}
+
 #ifdef CONFIG_NET
-static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg)
+static int crypto_old_aead_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
 	struct crypto_report_aead raead;
 	struct old_aead_alg *aead = &alg->cra_aead;
@@ -191,15 +211,15 @@ nla_put_failure:
 	return -EMSGSIZE;
 }
 #else
-static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg)
+static int crypto_old_aead_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
 	return -ENOSYS;
 }
 #endif
 
-static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
+static void crypto_old_aead_show(struct seq_file *m, struct crypto_alg *alg)
 	__attribute__ ((unused));
-static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
+static void crypto_old_aead_show(struct seq_file *m, struct crypto_alg *alg)
 {
 	struct old_aead_alg *aead = &alg->cra_aead;
 
@@ -216,9 +236,9 @@ const struct crypto_type crypto_aead_type = {
 	.extsize = crypto_alg_extsize,
 	.init_tfm = crypto_aead_init_tfm,
 #ifdef CONFIG_PROC_FS
-	.show = crypto_aead_show,
+	.show = crypto_old_aead_show,
 #endif
-	.report = crypto_aead_report,
+	.report = crypto_old_aead_report,
 	.lookup = crypto_lookup_aead,
 	.maskclear = ~(CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_GENIV),
 	.maskset = CRYPTO_ALG_TYPE_MASK,
@@ -227,6 +247,62 @@ const struct crypto_type crypto_aead_type = {
 };
 EXPORT_SYMBOL_GPL(crypto_aead_type);
 
+#ifdef CONFIG_NET
+static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	struct crypto_report_aead raead;
+	struct aead_alg *aead = container_of(alg, struct aead_alg, base);
+
+	strncpy(raead.type, "aead", sizeof(raead.type));
+	strncpy(raead.geniv, "<none>", sizeof(raead.geniv));
+
+	raead.blocksize = alg->cra_blocksize;
+	raead.maxauthsize = aead->maxauthsize;
+	raead.ivsize = aead->ivsize;
+
+	if (nla_put(skb, CRYPTOCFGA_REPORT_AEAD,
+		    sizeof(struct crypto_report_aead), &raead))
+		goto nla_put_failure;
+	return 0;
+
+nla_put_failure:
+	return -EMSGSIZE;
+}
+#else
+static int crypto_aead_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	return -ENOSYS;
+}
+#endif
+
+static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
+	__attribute__ ((unused));
+static void crypto_aead_show(struct seq_file *m, struct crypto_alg *alg)
+{
+	struct aead_alg *aead = container_of(alg, struct aead_alg, base);
+
+	seq_printf(m, "type         : aead\n");
+	seq_printf(m, "async        : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
+					     "yes" : "no");
+	seq_printf(m, "blocksize    : %u\n", alg->cra_blocksize);
+	seq_printf(m, "ivsize       : %u\n", aead->ivsize);
+	seq_printf(m, "maxauthsize  : %u\n", aead->maxauthsize);
+	seq_printf(m, "geniv        : <none>\n");
+}
+
+static const struct crypto_type crypto_new_aead_type = {
+	.extsize = crypto_alg_extsize,
+	.init_tfm = crypto_aead_init_tfm,
+#ifdef CONFIG_PROC_FS
+	.show = crypto_aead_show,
+#endif
+	.report = crypto_aead_report,
+	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
+	.maskset = CRYPTO_ALG_TYPE_MASK,
+	.type = CRYPTO_ALG_TYPE_AEAD,
+	.tfmsize = offsetof(struct crypto_aead, base),
+};
+
 static int aead_null_givencrypt(struct aead_givcrypt_request *req)
 {
 	return crypto_aead_encrypt(&req->areq);
@@ -552,5 +628,51 @@ struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_aead);
 
+static int aead_prepare_alg(struct aead_alg *alg)
+{
+	struct crypto_alg *base = &alg->base;
+
+	if (max(alg->maxauthsize, alg->ivsize) > PAGE_SIZE / 8)
+		return -EINVAL;
+
+	base->cra_type = &crypto_new_aead_type;
+	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+	base->cra_flags |= CRYPTO_ALG_TYPE_AEAD;
+
+	return 0;
+}
+
+int crypto_register_aead(struct aead_alg *alg)
+{
+	struct crypto_alg *base = &alg->base;
+	int err;
+
+	err = aead_prepare_alg(alg);
+	if (err)
+		return err;
+
+	return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_aead);
+
+int crypto_unregister_aead(struct aead_alg *alg)
+{
+	return crypto_unregister_alg(&alg->base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_aead);
+
+int aead_register_instance(struct crypto_template *tmpl,
+			   struct aead_instance *inst)
+{
+	int err;
+
+	err = aead_prepare_alg(&inst->alg);
+	if (err)
+		return err;
+
+	return crypto_register_instance(tmpl, aead_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(aead_register_instance);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Authenticated Encryption with Associated Data (AEAD)");
diff --git a/include/crypto/aead.h b/include/crypto/aead.h
index aebf57d..177e6f4 100644
--- a/include/crypto/aead.h
+++ b/include/crypto/aead.h
@@ -17,8 +17,6 @@
 #include <linux/kernel.h>
 #include <linux/slab.h>
 
-#define aead_alg old_aead_alg
-
 /**
  * DOC: Authenticated Encryption With Associated Data (AEAD) Cipher API
  *
@@ -92,7 +90,48 @@ struct aead_givcrypt_request {
 	struct aead_request areq;
 };
 
+/**
+ * struct aead_alg - AEAD cipher definition
+ * @maxauthsize: Set the maximum authentication tag size supported by the
+ *		 transformation. A transformation may support smaller tag sizes.
+ *		 As the authentication tag is a message digest to ensure the
+ *		 integrity of the encrypted data, a consumer typically wants the
+ *		 largest authentication tag possible as defined by this
+ *		 variable.
+ * @setauthsize: Set authentication size for the AEAD transformation. This
+ *		 function is used to specify the consumer requested size of the
+ * 		 authentication tag to be either generated by the transformation
+ *		 during encryption or the size of the authentication tag to be
+ *		 supplied during the decryption operation. This function is also
+ *		 responsible for checking the authentication tag size for
+ *		 validity.
+ * @setkey: see struct ablkcipher_alg
+ * @encrypt: see struct ablkcipher_alg
+ * @decrypt: see struct ablkcipher_alg
+ * @geniv: see struct ablkcipher_alg
+ * @ivsize: see struct ablkcipher_alg
+ *
+ * All fields except @ivsize is mandatory and must be filled.
+ */
+struct aead_alg {
+	int (*setkey)(struct crypto_aead *tfm, const u8 *key,
+	              unsigned int keylen);
+	int (*setauthsize)(struct crypto_aead *tfm, unsigned int authsize);
+	int (*encrypt)(struct aead_request *req);
+	int (*decrypt)(struct aead_request *req);
+
+	const char *geniv;
+
+	unsigned int ivsize;
+	unsigned int maxauthsize;
+
+	struct crypto_alg base;
+};
+
 struct crypto_aead {
+	int (*setkey)(struct crypto_aead *tfm, const u8 *key,
+	              unsigned int keylen);
+	int (*setauthsize)(struct crypto_aead *tfm, unsigned int authsize);
 	int (*encrypt)(struct aead_request *req);
 	int (*decrypt)(struct aead_request *req);
 	int (*givencrypt)(struct aead_givcrypt_request *req);
@@ -102,6 +141,7 @@ struct crypto_aead {
 
 	unsigned int ivsize;
 	unsigned int authsize;
+	unsigned int maxauthsize;
 	unsigned int reqsize;
 
 	struct crypto_tfm base;
diff --git a/include/crypto/internal/aead.h b/include/crypto/internal/aead.h
index 4614f79..6cd3151 100644
--- a/include/crypto/internal/aead.h
+++ b/include/crypto/internal/aead.h
@@ -19,6 +19,10 @@
 
 struct rtattr;
 
+struct aead_instance {
+	struct aead_alg alg;
+};
+
 struct crypto_aead_spawn {
 	struct crypto_spawn base;
 };
@@ -33,7 +37,8 @@ static inline struct old_aead_alg *crypto_old_aead_alg(struct crypto_aead *tfm)
 
 static inline struct aead_alg *crypto_aead_alg(struct crypto_aead *tfm)
 {
-	return &crypto_aead_tfm(tfm)->__crt_alg->cra_aead;
+	return container_of(crypto_aead_tfm(tfm)->__crt_alg,
+			    struct aead_alg, base);
 }
 
 static inline void *crypto_aead_ctx(struct crypto_aead *tfm)
@@ -47,6 +52,22 @@ static inline struct crypto_instance *crypto_aead_alg_instance(
 	return crypto_tfm_alg_instance(&aead->base);
 }
 
+static inline struct crypto_instance *aead_crypto_instance(
+	struct aead_instance *inst)
+{
+	return container_of(&inst->alg.base, struct crypto_instance, alg);
+}
+
+static inline struct aead_instance *aead_instance(struct crypto_instance *inst)
+{
+	return container_of(&inst->alg, struct aead_instance, alg.base);
+}
+
+static inline void *aead_instance_ctx(struct aead_instance *inst)
+{
+	return crypto_instance_ctx(aead_crypto_instance(inst));
+}
+
 static inline void *aead_request_ctx(struct aead_request *req)
 {
 	return req->__ctx;
@@ -84,6 +105,12 @@ static inline struct crypto_alg *crypto_aead_spawn_alg(
 	return spawn->base.alg;
 }
 
+static inline struct aead_alg *crypto_spawn_aead_alg(
+	struct crypto_aead_spawn *spawn)
+{
+	return container_of(spawn->base.alg, struct aead_alg, base);
+}
+
 static inline struct crypto_aead *crypto_spawn_aead(
 	struct crypto_aead_spawn *spawn)
 {
@@ -121,8 +148,13 @@ static inline void crypto_aead_set_reqsize(struct crypto_aead *aead,
 
 static inline unsigned int crypto_aead_maxauthsize(struct crypto_aead *aead)
 {
-	return crypto_old_aead_alg(aead)->maxauthsize;
+	return aead->maxauthsize;
 }
 
+int crypto_register_aead(struct aead_alg *alg);
+int crypto_unregister_aead(struct aead_alg *alg);
+int aead_register_instance(struct crypto_template *tmpl,
+			   struct aead_instance *inst);
+
 #endif	/* _CRYPTO_INTERNAL_AEAD_H */
 


* [PATCH 11/16] crypto: null - Add default null skcipher
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch adds a default null skcipher for users such as gcm
to perform copies on SG lists.
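
Users follow a simple get/put pattern, as gcm does in the next
patch.  A sketch:

	struct crypto_blkcipher *null;

	null = crypto_get_default_null_skcipher();
	if (IS_ERR(null))
		return PTR_ERR(null);

	/* ... use the shared null blkcipher to copy SG lists ... */

	crypto_put_default_null_skcipher();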

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/crypto_null.c  |   39 +++++++++++++++++++++++++++++++++++++++
 include/crypto/null.h |    3 +++
 2 files changed, 42 insertions(+)

diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c
index a203191..941c9a4 100644
--- a/crypto/crypto_null.c
+++ b/crypto/crypto_null.c
@@ -25,6 +25,10 @@
 #include <linux/mm.h>
 #include <linux/string.h>
 
+static DEFINE_MUTEX(crypto_default_null_skcipher_lock);
+static struct crypto_blkcipher *crypto_default_null_skcipher;
+static int crypto_default_null_skcipher_refcnt;
+
 static int null_compress(struct crypto_tfm *tfm, const u8 *src,
 			 unsigned int slen, u8 *dst, unsigned int *dlen)
 {
@@ -149,6 +153,41 @@ MODULE_ALIAS_CRYPTO("compress_null");
 MODULE_ALIAS_CRYPTO("digest_null");
 MODULE_ALIAS_CRYPTO("cipher_null");
 
+struct crypto_blkcipher *crypto_get_default_null_skcipher(void)
+{
+	struct crypto_blkcipher *tfm;
+
+	mutex_lock(&crypto_default_null_skcipher_lock);
+	tfm = crypto_default_null_skcipher;
+
+	if (!tfm) {
+		tfm = crypto_alloc_blkcipher("ecb(cipher_null)", 0, 0);
+		if (IS_ERR(tfm))
+			goto unlock;
+
+		crypto_default_null_skcipher = tfm;
+	}
+
+	crypto_default_null_skcipher_refcnt++;
+
+unlock:
+	mutex_unlock(&crypto_default_null_skcipher_lock);
+
+	return tfm;
+}
+EXPORT_SYMBOL_GPL(crypto_get_default_null_skcipher);
+
+void crypto_put_default_null_skcipher(void)
+{
+	mutex_lock(&crypto_default_null_skcipher_lock);
+	if (!--crypto_default_null_skcipher_refcnt) {
+		crypto_free_blkcipher(crypto_default_null_skcipher);
+		crypto_default_null_skcipher = NULL;
+	}
+	mutex_unlock(&crypto_default_null_skcipher_lock);
+}
+EXPORT_SYMBOL_GPL(crypto_put_default_null_skcipher);
+
 static int __init crypto_null_mod_init(void)
 {
 	int ret = 0;
diff --git a/include/crypto/null.h b/include/crypto/null.h
index b7c864c..06dc30d 100644
--- a/include/crypto/null.h
+++ b/include/crypto/null.h
@@ -8,4 +8,7 @@
 #define NULL_DIGEST_SIZE	0
 #define NULL_IV_SIZE		0
 
+struct crypto_blkcipher *crypto_get_default_null_skcipher(void);
+void crypto_put_default_null_skcipher(void);
+
 #endif


* [PATCH 12/16] crypto: gcm - Use default null skcipher
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch makes gcm use the default null skcipher instead of
allocating a new one for each tfm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/gcm.c |   23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/crypto/gcm.c b/crypto/gcm.c
index b56200e..fc2b55e 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -12,6 +12,7 @@
 #include <crypto/internal/aead.h>
 #include <crypto/internal/skcipher.h>
 #include <crypto/internal/hash.h>
+#include <crypto/null.h>
 #include <crypto/scatterwalk.h>
 #include <crypto/hash.h>
 #include "internal.h"
@@ -39,7 +40,6 @@ struct crypto_rfc4106_ctx {
 
 struct crypto_rfc4543_instance_ctx {
 	struct crypto_aead_spawn aead;
-	struct crypto_skcipher_spawn null;
 };
 
 struct crypto_rfc4543_ctx {
@@ -1246,7 +1246,7 @@ static int crypto_rfc4543_init_tfm(struct crypto_tfm *tfm)
 	if (IS_ERR(aead))
 		return PTR_ERR(aead);
 
-	null = crypto_spawn_blkcipher(&ictx->null.base);
+	null = crypto_get_default_null_skcipher();
 	err = PTR_ERR(null);
 	if (IS_ERR(null))
 		goto err_free_aead;
@@ -1273,7 +1273,7 @@ static void crypto_rfc4543_exit_tfm(struct crypto_tfm *tfm)
 	struct crypto_rfc4543_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	crypto_free_aead(ctx->child);
-	crypto_free_blkcipher(ctx->null);
+	crypto_put_default_null_skcipher();
 }
 
 static struct crypto_instance *crypto_rfc4543_alloc(struct rtattr **tb)
@@ -1311,23 +1311,15 @@ static struct crypto_instance *crypto_rfc4543_alloc(struct rtattr **tb)
 
 	alg = crypto_aead_spawn_alg(spawn);
 
-	crypto_set_skcipher_spawn(&ctx->null, inst);
-	err = crypto_grab_skcipher(&ctx->null, "ecb(cipher_null)", 0,
-				   CRYPTO_ALG_ASYNC);
-	if (err)
-		goto out_drop_alg;
-
-	crypto_skcipher_spawn_alg(&ctx->null);
-
 	err = -EINVAL;
 
 	/* We only support 16-byte blocks. */
 	if (alg->cra_aead.ivsize != 16)
-		goto out_drop_ecbnull;
+		goto out_drop_alg;
 
 	/* Not a stream cipher? */
 	if (alg->cra_blocksize != 1)
-		goto out_drop_ecbnull;
+		goto out_drop_alg;
 
 	err = -ENAMETOOLONG;
 	if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
@@ -1335,7 +1327,7 @@ static struct crypto_instance *crypto_rfc4543_alloc(struct rtattr **tb)
 	    snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
 		     "rfc4543(%s)", alg->cra_driver_name) >=
 	    CRYPTO_MAX_ALG_NAME)
-		goto out_drop_ecbnull;
+		goto out_drop_alg;
 
 	inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD;
 	inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
@@ -1362,8 +1354,6 @@ static struct crypto_instance *crypto_rfc4543_alloc(struct rtattr **tb)
 out:
 	return inst;
 
-out_drop_ecbnull:
-	crypto_drop_skcipher(&ctx->null);
 out_drop_alg:
 	crypto_drop_aead(spawn);
 out_free_inst:
@@ -1377,7 +1367,6 @@ static void crypto_rfc4543_free(struct crypto_instance *inst)
 	struct crypto_rfc4543_instance_ctx *ctx = crypto_instance_ctx(inst);
 
 	crypto_drop_aead(&ctx->aead);
-	crypto_drop_skcipher(&ctx->null);
 
 	kfree(inst);
 }


* [PATCH 13/16] crypto: scatterwalk - Check for same address in map_and_copy
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch adds a check in scatterwalk_map_and_copy to avoid
copying when the source and destination are the same address.
This is going to be used for IV copying in AEAD IV generators.

There is no provision for partial overlaps.

This patch also uses the new scatterwalk_ffwd helper in
scatterwalk_map_and_copy instead of open-coding the fast-forward.
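
So an IV generator can copy its IV into place unconditionally;
the copy collapses to a no-op when the IV already sits at the
target position.  A sketch (ivsize is illustrative):

	/* Write the IV just after the AD; this now returns without
	 * copying if req->iv already maps to that position. */
	scatterwalk_map_and_copy(req->iv, req->dst, req->assoclen,
				 ivsize, 1);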

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/scatterwalk.c |   16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
index db920b5..8690324 100644
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -104,22 +104,18 @@ void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
 			      unsigned int start, unsigned int nbytes, int out)
 {
 	struct scatter_walk walk;
-	unsigned int offset = 0;
+	struct scatterlist tmp[2];
 
 	if (!nbytes)
 		return;
 
-	for (;;) {
-		scatterwalk_start(&walk, sg);
-
-		if (start < offset + sg->length)
-			break;
+	sg = scatterwalk_ffwd(tmp, sg, start);
 
-		offset += sg->length;
-		sg = sg_next(sg);
-	}
+	if (sg_page(sg) == virt_to_page(buf) &&
+	    sg->offset == offset_in_page(buf))
+		return;
 
-	scatterwalk_advance(&walk, start - offset);
+	scatterwalk_start(&walk, sg);
 	scatterwalk_copychunks(buf, &walk, nbytes, out);
 	scatterwalk_done(&walk, out, 0);
 }


* [PATCH 14/16] crypto: seqiv - Add support for new AEAD interface
From: Herbert Xu @ 2015-05-21  7:11 UTC
  To: Linux Crypto Mailing List

This patch converts the seqiv IV generator to work with the new
AEAD interface where IV generators are just normal AEAD algorithms.

Full backwards compatibility is paramount at this point since
no users have yet switched over to the new interface.  Nor can
they switch to the new interface until IV generation is fully
supported by it.

So this means we are adding two versions of seqiv alongside the
existing one.  The first one is the one that will be used when
the underlying AEAD algorithm has switched over to the new AEAD
interface.  The second one handles the current case where the
underlying AEAD algorithm still uses the old interface.

Both versions export themselves through the new AEAD interface.
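
For orientation, the core of the new-interface encrypt path added below
reduces to this extract (my annotations; alignment fix-ups, the
src != dst copy and error handling omitted):

	/* Plain/cipher text sits behind the IV in the same SG list. */
	aead_request_set_crypt(subreq, req->dst, req->dst,
			       req->cryptlen - ivsize, info);
	/* Counting the IV as AD means it is covered by the ICV. */
	aead_request_set_ad(subreq, req->assoclen + ivsize, 0);

	crypto_xor(info, ctx->salt, ivsize);	/* IV = seqno ^ salt */
	scatterwalk_map_and_copy(info, req->dst, req->assoclen, ivsize, 1);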

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/Kconfig                 |    1 
 crypto/aead.c                  |  100 ++++++----
 crypto/seqiv.c                 |  386 +++++++++++++++++++++++++++++++++++++++--
 include/crypto/internal/aead.h |    7 
 4 files changed, 443 insertions(+), 51 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index eba55b4..657bb82 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -221,6 +221,7 @@ config CRYPTO_SEQIV
 	tristate "Sequence Number IV Generator"
 	select CRYPTO_AEAD
 	select CRYPTO_BLKCIPHER
+	select CRYPTO_NULL
 	select CRYPTO_RNG
 	help
 	  This IV generator generates an IV based on a sequence number by
diff --git a/crypto/aead.c b/crypto/aead.c
index d231e28..5fa992a 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -378,15 +378,16 @@ static int crypto_grab_nivaead(struct crypto_aead_spawn *spawn,
 	return crypto_grab_spawn(&spawn->base, name, type, mask);
 }
 
-struct crypto_instance *aead_geniv_alloc(struct crypto_template *tmpl,
-					 struct rtattr **tb, u32 type,
-					 u32 mask)
+struct aead_instance *aead_geniv_alloc(struct crypto_template *tmpl,
+				       struct rtattr **tb, u32 type, u32 mask)
 {
 	const char *name;
 	struct crypto_aead_spawn *spawn;
 	struct crypto_attr_type *algt;
-	struct crypto_instance *inst;
-	struct crypto_alg *alg;
+	struct aead_instance *inst;
+	struct aead_alg *alg;
+	unsigned int ivsize;
+	unsigned int maxauthsize;
 	int err;
 
 	algt = crypto_get_attr_type(tb);
@@ -405,20 +406,28 @@ struct crypto_instance *aead_geniv_alloc(struct crypto_template *tmpl,
 	if (!inst)
 		return ERR_PTR(-ENOMEM);
 
-	spawn = crypto_instance_ctx(inst);
+	spawn = aead_instance_ctx(inst);
 
 	/* Ignore async algorithms if necessary. */
 	mask |= crypto_requires_sync(algt->type, algt->mask);
 
-	crypto_set_aead_spawn(spawn, inst);
+	crypto_set_aead_spawn(spawn, aead_crypto_instance(inst));
 	err = crypto_grab_nivaead(spawn, name, type, mask);
 	if (err)
 		goto err_free_inst;
 
-	alg = crypto_aead_spawn_alg(spawn);
+	alg = crypto_spawn_aead_alg(spawn);
+
+	if (alg->base.cra_aead.encrypt) {
+		ivsize = alg->base.cra_aead.ivsize;
+		maxauthsize = alg->base.cra_aead.maxauthsize;
+	} else {
+		ivsize = alg->ivsize;
+		maxauthsize = alg->maxauthsize;
+	}
 
 	err = -EINVAL;
-	if (!alg->cra_aead.ivsize)
+	if (!ivsize)
 		goto err_drop_alg;
 
 	/*
@@ -427,39 +436,56 @@ struct crypto_instance *aead_geniv_alloc(struct crypto_template *tmpl,
 	 * template name and double-check the IV generator.
 	 */
 	if (algt->mask & CRYPTO_ALG_GENIV) {
-		if (strcmp(tmpl->name, alg->cra_aead.geniv))
+		if (!alg->base.cra_aead.encrypt)
+			goto err_drop_alg;
+		if (strcmp(tmpl->name, alg->base.cra_aead.geniv))
 			goto err_drop_alg;
 
-		memcpy(inst->alg.cra_name, alg->cra_name, CRYPTO_MAX_ALG_NAME);
-		memcpy(inst->alg.cra_driver_name, alg->cra_driver_name,
+		memcpy(inst->alg.base.cra_name, alg->base.cra_name,
 		       CRYPTO_MAX_ALG_NAME);
-	} else {
-		err = -ENAMETOOLONG;
-		if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
-			     "%s(%s)", tmpl->name, alg->cra_name) >=
-		    CRYPTO_MAX_ALG_NAME)
-			goto err_drop_alg;
-		if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
-			     "%s(%s)", tmpl->name, alg->cra_driver_name) >=
-		    CRYPTO_MAX_ALG_NAME)
-			goto err_drop_alg;
+		memcpy(inst->alg.base.cra_driver_name,
+		       alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME);
+
+		inst->alg.base.cra_flags = CRYPTO_ALG_TYPE_AEAD |
+					   CRYPTO_ALG_GENIV;
+		inst->alg.base.cra_flags |= alg->base.cra_flags &
+					    CRYPTO_ALG_ASYNC;
+		inst->alg.base.cra_priority = alg->base.cra_priority;
+		inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
+		inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
+		inst->alg.base.cra_type = &crypto_aead_type;
+
+		inst->alg.base.cra_aead.ivsize = ivsize;
+		inst->alg.base.cra_aead.maxauthsize = maxauthsize;
+
+		inst->alg.base.cra_aead.setkey = alg->base.cra_aead.setkey;
+		inst->alg.base.cra_aead.setauthsize =
+			alg->base.cra_aead.setauthsize;
+		inst->alg.base.cra_aead.encrypt = alg->base.cra_aead.encrypt;
+		inst->alg.base.cra_aead.decrypt = alg->base.cra_aead.decrypt;
+
+		goto out;
 	}
 
-	inst->alg.cra_flags = CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_GENIV;
-	inst->alg.cra_flags |= alg->cra_flags & CRYPTO_ALG_ASYNC;
-	inst->alg.cra_priority = alg->cra_priority;
-	inst->alg.cra_blocksize = alg->cra_blocksize;
-	inst->alg.cra_alignmask = alg->cra_alignmask;
-	inst->alg.cra_type = &crypto_aead_type;
+	err = -ENAMETOOLONG;
+	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+		     "%s(%s)", tmpl->name, alg->base.cra_name) >=
+	    CRYPTO_MAX_ALG_NAME)
+		goto err_drop_alg;
+	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+		     "%s(%s)", tmpl->name, alg->base.cra_driver_name) >=
+	    CRYPTO_MAX_ALG_NAME)
+		goto err_drop_alg;
 
-	inst->alg.cra_aead.ivsize = alg->cra_aead.ivsize;
-	inst->alg.cra_aead.maxauthsize = alg->cra_aead.maxauthsize;
-	inst->alg.cra_aead.geniv = alg->cra_aead.geniv;
+	inst->alg.base.cra_flags = CRYPTO_ALG_TYPE_AEAD;
+	inst->alg.base.cra_flags |= alg->base.cra_flags & CRYPTO_ALG_ASYNC;
+	inst->alg.base.cra_priority = alg->base.cra_priority;
+	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
+	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
+	inst->alg.base.cra_type = &crypto_new_aead_type;
 
-	inst->alg.cra_aead.setkey = alg->cra_aead.setkey;
-	inst->alg.cra_aead.setauthsize = alg->cra_aead.setauthsize;
-	inst->alg.cra_aead.encrypt = alg->cra_aead.encrypt;
-	inst->alg.cra_aead.decrypt = alg->cra_aead.decrypt;
+	inst->alg.ivsize = ivsize;
+	inst->alg.maxauthsize = maxauthsize;
 
 out:
 	return inst;
@@ -473,9 +499,9 @@ err_free_inst:
 }
 EXPORT_SYMBOL_GPL(aead_geniv_alloc);
 
-void aead_geniv_free(struct crypto_instance *inst)
+void aead_geniv_free(struct aead_instance *inst)
 {
-	crypto_drop_aead(crypto_instance_ctx(inst));
+	crypto_drop_aead(aead_instance_ctx(inst));
 	kfree(inst);
 }
 EXPORT_SYMBOL_GPL(aead_geniv_free);
diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index 5bbf2e9..27dbab8a 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -15,7 +15,9 @@
 
 #include <crypto/internal/aead.h>
 #include <crypto/internal/skcipher.h>
+#include <crypto/null.h>
 #include <crypto/rng.h>
+#include <crypto/scatterwalk.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
@@ -29,6 +31,29 @@ struct seqiv_ctx {
 	u8 salt[] __attribute__ ((aligned(__alignof__(u32))));
 };
 
+struct seqiv_aead_ctx {
+	struct crypto_aead *child;
+	spinlock_t lock;
+	struct crypto_blkcipher *null;
+	u8 salt[] __attribute__ ((aligned(__alignof__(u32))));
+};
+
+static int seqiv_aead_setkey(struct crypto_aead *tfm,
+			     const u8 *key, unsigned int keylen)
+{
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+	return crypto_aead_setkey(ctx->child, key, keylen);
+}
+
+static int seqiv_aead_setauthsize(struct crypto_aead *tfm,
+				  unsigned int authsize)
+{
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(tfm);
+
+	return crypto_aead_setauthsize(ctx->child, authsize);
+}
+
 static void seqiv_complete2(struct skcipher_givcrypt_request *req, int err)
 {
 	struct ablkcipher_request *subreq = skcipher_givcrypt_reqctx(req);
@@ -81,6 +106,33 @@ static void seqiv_aead_complete(struct crypto_async_request *base, int err)
 	aead_givcrypt_complete(req, err);
 }
 
+static void seqiv_aead_encrypt_complete2(struct aead_request *req, int err)
+{
+	struct aead_request *subreq = aead_request_ctx(req);
+	struct crypto_aead *geniv;
+
+	if (err == -EINPROGRESS)
+		return;
+
+	if (err)
+		goto out;
+
+	geniv = crypto_aead_reqtfm(req);
+	memcpy(req->iv, subreq->iv, crypto_aead_ivsize(geniv));
+
+out:
+	kzfree(subreq->iv);
+}
+
+static void seqiv_aead_encrypt_complete(struct crypto_async_request *base,
+					int err)
+{
+	struct aead_request *req = base->data;
+
+	seqiv_aead_encrypt_complete2(req, err);
+	aead_request_complete(req, err);
+}
+
 static void seqiv_geniv(struct seqiv_ctx *ctx, u8 *info, u64 seq,
 			unsigned int ivsize)
 {
@@ -186,6 +238,171 @@ static int seqiv_aead_givencrypt(struct aead_givcrypt_request *req)
 	return err;
 }
 
+static int seqiv_aead_encrypt_compat(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
+	struct aead_request *subreq = aead_request_ctx(req);
+	crypto_completion_t compl;
+	void *data;
+	u8 *info;
+	unsigned int ivsize;
+	int err;
+
+	aead_request_set_tfm(subreq, ctx->child);
+
+	compl = req->base.complete;
+	data = req->base.data;
+	info = req->iv;
+
+	ivsize = crypto_aead_ivsize(geniv);
+
+	if (unlikely(!IS_ALIGNED((unsigned long)info,
+				 crypto_aead_alignmask(geniv) + 1))) {
+		info = kmalloc(ivsize, req->base.flags &
+				       CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
+								  GFP_ATOMIC);
+		if (!info)
+			return -ENOMEM;
+
+		memcpy(info, req->iv, ivsize);
+		compl = seqiv_aead_encrypt_complete;
+		data = req;
+	}
+
+	aead_request_set_callback(subreq, req->base.flags, compl, data);
+	aead_request_set_crypt(subreq, req->src, req->dst,
+			       req->cryptlen - ivsize, info);
+	aead_request_set_ad(subreq, req->assoclen, ivsize);
+
+	crypto_xor(info, ctx->salt, ivsize);
+	scatterwalk_map_and_copy(info, req->dst, req->assoclen, ivsize, 1);
+
+	err = crypto_aead_encrypt(subreq);
+	if (unlikely(info != req->iv))
+		seqiv_aead_encrypt_complete2(req, err);
+	return err;
+}
+
+static int seqiv_aead_encrypt(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
+	struct aead_request *subreq = aead_request_ctx(req);
+	crypto_completion_t compl;
+	void *data;
+	u8 *info;
+	unsigned int ivsize;
+	int err;
+
+	aead_request_set_tfm(subreq, ctx->child);
+
+	compl = req->base.complete;
+	data = req->base.data;
+	info = req->iv;
+
+	ivsize = crypto_aead_ivsize(geniv);
+
+	if (req->src != req->dst) {
+		struct scatterlist src[2];
+		struct scatterlist dst[2];
+		struct blkcipher_desc desc = {
+			.tfm = ctx->null,
+		};
+
+		err = crypto_blkcipher_encrypt(
+			&desc,
+			scatterwalk_ffwd(dst, req->dst,
+					 req->assoclen + ivsize),
+			scatterwalk_ffwd(src, req->src,
+					 req->assoclen + ivsize),
+			req->cryptlen - ivsize);
+		if (err)
+			return err;
+	}
+
+	if (unlikely(!IS_ALIGNED((unsigned long)info,
+				 crypto_aead_alignmask(geniv) + 1))) {
+		info = kmalloc(ivsize, req->base.flags &
+				       CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
+								  GFP_ATOMIC);
+		if (!info)
+			return -ENOMEM;
+
+		memcpy(info, req->iv, ivsize);
+		compl = seqiv_aead_encrypt_complete;
+		data = req;
+	}
+
+	aead_request_set_callback(subreq, req->base.flags, compl, data);
+	aead_request_set_crypt(subreq, req->dst, req->dst,
+			       req->cryptlen - ivsize, info);
+	aead_request_set_ad(subreq, req->assoclen + ivsize, 0);
+
+	crypto_xor(info, ctx->salt, ivsize);
+	scatterwalk_map_and_copy(info, req->dst, req->assoclen, ivsize, 1);
+
+	err = crypto_aead_encrypt(subreq);
+	if (unlikely(info != req->iv))
+		seqiv_aead_encrypt_complete2(req, err);
+	return err;
+}
+
+static int seqiv_aead_decrypt_compat(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
+	struct aead_request *subreq = aead_request_ctx(req);
+	crypto_completion_t compl;
+	void *data;
+	unsigned int ivsize;
+
+	aead_request_set_tfm(subreq, ctx->child);
+
+	compl = req->base.complete;
+	data = req->base.data;
+
+	ivsize = crypto_aead_ivsize(geniv);
+
+	aead_request_set_callback(subreq, req->base.flags, compl, data);
+	aead_request_set_crypt(subreq, req->src, req->dst,
+			       req->cryptlen - ivsize, req->iv);
+	aead_request_set_ad(subreq, req->assoclen, ivsize);
+
+	scatterwalk_map_and_copy(req->iv, req->src, req->assoclen, ivsize, 0);
+
+	return crypto_aead_decrypt(subreq);
+}
+
+static int seqiv_aead_decrypt(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
+	struct aead_request *subreq = aead_request_ctx(req);
+	crypto_completion_t compl;
+	void *data;
+	unsigned int ivsize;
+
+	aead_request_set_tfm(subreq, ctx->child);
+
+	compl = req->base.complete;
+	data = req->base.data;
+
+	ivsize = crypto_aead_ivsize(geniv);
+
+	aead_request_set_callback(subreq, req->base.flags, compl, data);
+	aead_request_set_crypt(subreq, req->src, req->dst,
+			       req->cryptlen - ivsize, req->iv);
+	aead_request_set_ad(subreq, req->assoclen + ivsize, 0);
+
+	scatterwalk_map_and_copy(req->iv, req->src, req->assoclen, ivsize, 0);
+	if (req->src != req->dst)
+		scatterwalk_map_and_copy(req->iv, req->dst,
+					 req->assoclen, ivsize, 1);
+
+	return crypto_aead_decrypt(subreq);
+}
+
 static int seqiv_givencrypt_first(struct skcipher_givcrypt_request *req)
 {
 	struct crypto_ablkcipher *geniv = skcipher_givcrypt_reqtfm(req);
@@ -232,6 +449,52 @@ unlock:
 	return seqiv_aead_givencrypt(req);
 }
 
+static int seqiv_aead_encrypt_compat_first(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
+	int err = 0;
+
+	spin_lock_bh(&ctx->lock);
+	if (geniv->encrypt != seqiv_aead_encrypt_compat_first)
+		goto unlock;
+
+	geniv->encrypt = seqiv_aead_encrypt_compat;
+	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				   crypto_aead_ivsize(geniv));
+
+unlock:
+	spin_unlock_bh(&ctx->lock);
+
+	if (err)
+		return err;
+
+	return seqiv_aead_encrypt_compat(req);
+}
+
+static int seqiv_aead_encrypt_first(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
+	int err = 0;
+
+	spin_lock_bh(&ctx->lock);
+	if (geniv->encrypt != seqiv_aead_encrypt_first)
+		goto unlock;
+
+	geniv->encrypt = seqiv_aead_encrypt;
+	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				   crypto_aead_ivsize(geniv));
+
+unlock:
+	spin_unlock_bh(&ctx->lock);
+
+	if (err)
+		return err;
+
+	return seqiv_aead_encrypt(req);
+}
+
 static int seqiv_init(struct crypto_tfm *tfm)
 {
 	struct crypto_ablkcipher *geniv = __crypto_ablkcipher_cast(tfm);
@@ -244,7 +507,7 @@ static int seqiv_init(struct crypto_tfm *tfm)
 	return skcipher_geniv_init(tfm);
 }
 
-static int seqiv_aead_init(struct crypto_tfm *tfm)
+static int seqiv_old_aead_init(struct crypto_tfm *tfm)
 {
 	struct crypto_aead *geniv = __crypto_aead_cast(tfm);
 	struct seqiv_ctx *ctx = crypto_aead_ctx(geniv);
@@ -257,6 +520,69 @@ static int seqiv_aead_init(struct crypto_tfm *tfm)
 	return aead_geniv_init(tfm);
 }
 
+static int seqiv_aead_compat_init(struct crypto_tfm *tfm)
+{
+	struct crypto_aead *geniv = __crypto_aead_cast(tfm);
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
+	int err;
+
+	spin_lock_init(&ctx->lock);
+
+	crypto_aead_set_reqsize(geniv, sizeof(struct aead_request));
+
+	err = aead_geniv_init(tfm);
+
+	ctx->child = geniv->child;
+	geniv->child = geniv;
+
+	return err;
+}
+
+static int seqiv_aead_init(struct crypto_tfm *tfm)
+{
+	struct crypto_aead *geniv = __crypto_aead_cast(tfm);
+	struct seqiv_aead_ctx *ctx = crypto_aead_ctx(geniv);
+	int err;
+
+	spin_lock_init(&ctx->lock);
+
+	crypto_aead_set_reqsize(geniv, sizeof(struct aead_request));
+
+	ctx->null = crypto_get_default_null_skcipher();
+	err = PTR_ERR(ctx->null);
+	if (IS_ERR(ctx->null))
+		goto out;
+
+	err = aead_geniv_init(tfm);
+	if (err)
+		goto drop_null;
+
+	ctx->child = geniv->child;
+	geniv->child = geniv;
+
+out:
+	return err;
+
+drop_null:
+	crypto_put_default_null_skcipher();
+	goto out;
+}
+
+static void seqiv_aead_compat_exit(struct crypto_tfm *tfm)
+{
+	struct seqiv_aead_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_aead(ctx->child);
+}
+
+static void seqiv_aead_exit(struct crypto_tfm *tfm)
+{
+	struct seqiv_aead_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_aead(ctx->child);
+	crypto_put_default_null_skcipher();
+}
+
 static struct crypto_template seqiv_tmpl;
 
 static struct crypto_instance *seqiv_ablkcipher_alloc(struct rtattr **tb)
@@ -280,35 +606,76 @@ static struct crypto_instance *seqiv_ablkcipher_alloc(struct rtattr **tb)
 	inst->alg.cra_exit = skcipher_geniv_exit;
 
 	inst->alg.cra_ctxsize += inst->alg.cra_ablkcipher.ivsize;
+	inst->alg.cra_ctxsize += sizeof(struct seqiv_ctx);
 
 out:
 	return inst;
 }
 
+static struct crypto_instance *seqiv_old_aead_alloc(struct aead_instance *aead)
+{
+	struct crypto_instance *inst = aead_crypto_instance(aead);
+
+	if (inst->alg.cra_aead.ivsize < sizeof(u64)) {
+		aead_geniv_free(aead);
+		return ERR_PTR(-EINVAL);
+	}
+
+	inst->alg.cra_aead.givencrypt = seqiv_aead_givencrypt_first;
+
+	inst->alg.cra_init = seqiv_old_aead_init;
+	inst->alg.cra_exit = aead_geniv_exit;
+
+	inst->alg.cra_ctxsize = inst->alg.cra_aead.ivsize;
+	inst->alg.cra_ctxsize += sizeof(struct seqiv_ctx);
+
+	return inst;
+}
+
 static struct crypto_instance *seqiv_aead_alloc(struct rtattr **tb)
 {
-	struct crypto_instance *inst;
+	struct aead_instance *inst;
+	struct crypto_aead_spawn *spawn;
+	struct aead_alg *alg;
 
 	inst = aead_geniv_alloc(&seqiv_tmpl, tb, 0, 0);
 
 	if (IS_ERR(inst))
 		goto out;
 
-	if (inst->alg.cra_aead.ivsize < sizeof(u64)) {
+	if (inst->alg.base.cra_aead.encrypt)
+		return seqiv_old_aead_alloc(inst);
+
+	if (inst->alg.ivsize < sizeof(u64)) {
 		aead_geniv_free(inst);
 		inst = ERR_PTR(-EINVAL);
 		goto out;
 	}
 
-	inst->alg.cra_aead.givencrypt = seqiv_aead_givencrypt_first;
+	spawn = aead_instance_ctx(inst);
+	alg = crypto_spawn_aead_alg(spawn);
 
-	inst->alg.cra_init = seqiv_aead_init;
-	inst->alg.cra_exit = aead_geniv_exit;
+	inst->alg.setkey = seqiv_aead_setkey;
+	inst->alg.setauthsize = seqiv_aead_setauthsize;
+	inst->alg.encrypt = seqiv_aead_encrypt_first;
+	inst->alg.decrypt = seqiv_aead_decrypt;
 
-	inst->alg.cra_ctxsize = inst->alg.cra_aead.ivsize;
+	inst->alg.base.cra_init = seqiv_aead_init;
+	inst->alg.base.cra_exit = seqiv_aead_exit;
+
+	inst->alg.base.cra_ctxsize = sizeof(struct seqiv_aead_ctx);
+	inst->alg.base.cra_ctxsize += inst->alg.base.cra_aead.ivsize;
+
+	if (alg->base.cra_aead.encrypt) {
+		inst->alg.encrypt = seqiv_aead_encrypt_compat_first;
+		inst->alg.decrypt = seqiv_aead_decrypt_compat;
+
+		inst->alg.base.cra_init = seqiv_aead_compat_init;
+		inst->alg.base.cra_exit = seqiv_aead_compat_exit;
+	}
 
 out:
-	return inst;
+	return aead_crypto_instance(inst);
 }
 
 static struct crypto_instance *seqiv_alloc(struct rtattr **tb)
@@ -334,7 +701,6 @@ static struct crypto_instance *seqiv_alloc(struct rtattr **tb)
 		goto put_rng;
 
 	inst->alg.cra_alignmask |= __alignof__(u32) - 1;
-	inst->alg.cra_ctxsize += sizeof(struct seqiv_ctx);
 
 out:
 	return inst;
@@ -349,7 +715,7 @@ static void seqiv_free(struct crypto_instance *inst)
 	if ((inst->alg.cra_flags ^ CRYPTO_ALG_TYPE_AEAD) & CRYPTO_ALG_TYPE_MASK)
 		skcipher_geniv_free(inst);
 	else
-		aead_geniv_free(inst);
+		aead_geniv_free(aead_instance(inst));
 	crypto_put_default_rng();
 }
 
diff --git a/include/crypto/internal/aead.h b/include/crypto/internal/aead.h
index 6cd3151..08f2ca6 100644
--- a/include/crypto/internal/aead.h
+++ b/include/crypto/internal/aead.h
@@ -117,10 +117,9 @@ static inline struct crypto_aead *crypto_spawn_aead(
 	return crypto_spawn_tfm2(&spawn->base);
 }
 
-struct crypto_instance *aead_geniv_alloc(struct crypto_template *tmpl,
-					 struct rtattr **tb, u32 type,
-					 u32 mask);
-void aead_geniv_free(struct crypto_instance *inst);
+struct aead_instance *aead_geniv_alloc(struct crypto_template *tmpl,
+				       struct rtattr **tb, u32 type, u32 mask);
+void aead_geniv_free(struct aead_instance *inst);
 int aead_geniv_init(struct crypto_tfm *tfm);
 void aead_geniv_exit(struct crypto_tfm *tfm);
 

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 15/16] crypto: seqiv - Add seqniv
  2015-05-21  7:09 [PATCH 0/16] crypto: aead - Add single SG interface and new IPsec IV generation Herbert Xu
                   ` (13 preceding siblings ...)
  2015-05-21  7:11 ` [PATCH 14/16] crypto: seqiv - Add support for new AEAD interface Herbert Xu
@ 2015-05-21  7:11 ` Herbert Xu
  2015-05-21  7:11 ` [PATCH 16/16] crypto: echainiv - Add encrypted chain IV generator Herbert Xu
  15 siblings, 0 replies; 17+ messages in thread
From: Herbert Xu @ 2015-05-21  7:11 UTC (permalink / raw)
  To: Linux Crypto Mailing List

This patch adds a new IV generator seqniv which is identical to
seqiv except that it skips the IV when authenticating.  This is
intended to be used by algorithms such as rfc4106 that do the IV
authentication implicitly.

Note that the code used for seqniv is in fact identical to the
compatibility case for seqiv.
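
Expressed in terms of the sub-request set-up from the previous patch,
the difference is just where the IV sits relative to the AD (extract,
my annotations):

	/* seqiv: the IV is counted as AD and hence authenticated */
	aead_request_set_ad(subreq, req->assoclen + ivsize, 0);

	/* seqniv: the IV is left out of the AD count, so this layer
	 * skips it when authenticating */
	aead_request_set_ad(subreq, req->assoclen, ivsize);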

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/seqiv.c |   71 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 70 insertions(+), 1 deletion(-)

diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index 27dbab8a..a9bfbda 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -584,6 +584,7 @@ static void seqiv_aead_exit(struct crypto_tfm *tfm)
 }
 
 static struct crypto_template seqiv_tmpl;
+static struct crypto_template seqniv_tmpl;
 
 static struct crypto_instance *seqiv_ablkcipher_alloc(struct rtattr **tb)
 {
@@ -710,6 +711,51 @@ put_rng:
 	goto out;
 }
 
+static struct crypto_instance *seqniv_alloc(struct rtattr **tb)
+{
+	struct aead_instance *inst;
+	struct crypto_aead_spawn *spawn;
+	struct aead_alg *alg;
+	int err;
+
+	err = crypto_get_default_rng();
+	if (err)
+		return ERR_PTR(err);
+
+	inst = aead_geniv_alloc(&seqniv_tmpl, tb, 0, 0);
+
+	if (IS_ERR(inst))
+		goto put_rng;
+
+	if (inst->alg.ivsize < sizeof(u64)) {
+		aead_geniv_free(inst);
+		inst = ERR_PTR(-EINVAL);
+		goto put_rng;
+	}
+
+	spawn = aead_instance_ctx(inst);
+	alg = crypto_spawn_aead_alg(spawn);
+
+	inst->alg.setkey = seqiv_aead_setkey;
+	inst->alg.setauthsize = seqiv_aead_setauthsize;
+	inst->alg.encrypt = seqiv_aead_encrypt_compat_first;
+	inst->alg.decrypt = seqiv_aead_decrypt_compat;
+
+	inst->alg.base.cra_init = seqiv_aead_compat_init;
+	inst->alg.base.cra_exit = seqiv_aead_compat_exit;
+
+	inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
+	inst->alg.base.cra_ctxsize = sizeof(struct seqiv_aead_ctx);
+	inst->alg.base.cra_ctxsize += inst->alg.base.cra_aead.ivsize;
+
+out:
+	return aead_crypto_instance(inst);
+
+put_rng:
+	crypto_put_default_rng();
+	goto out;
+}
+
 static void seqiv_free(struct crypto_instance *inst)
 {
 	if ((inst->alg.cra_flags ^ CRYPTO_ALG_TYPE_AEAD) & CRYPTO_ALG_TYPE_MASK)
@@ -726,9 +772,31 @@ static struct crypto_template seqiv_tmpl = {
 	.module = THIS_MODULE,
 };
 
+static struct crypto_template seqniv_tmpl = {
+	.name = "seqniv",
+	.alloc = seqniv_alloc,
+	.free = seqiv_free,
+	.module = THIS_MODULE,
+};
+
 static int __init seqiv_module_init(void)
 {
-	return crypto_register_template(&seqiv_tmpl);
+	int err;
+
+	err = crypto_register_template(&seqiv_tmpl);
+	if (err)
+		goto out;
+
+	err = crypto_register_template(&seqniv_tmpl);
+	if (err)
+		goto out_undo_niv;
+
+out:
+	return err;
+
+out_undo_niv:
+	crypto_unregister_template(&seqiv_tmpl);
+	goto out;
 }
 
 static void __exit seqiv_module_exit(void)
@@ -742,3 +810,4 @@ module_exit(seqiv_module_exit);
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Sequence Number IV Generator");
 MODULE_ALIAS_CRYPTO("seqiv");
+MODULE_ALIAS_CRYPTO("seqniv");

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 16/16] crypto: echainiv - Add encrypted chain IV generator
  2015-05-21  7:09 [PATCH 0/16] crypto: aead - Add single SG interface and new IPsec IV generation Herbert Xu
                   ` (14 preceding siblings ...)
  2015-05-21  7:11 ` [PATCH 15/16] crypto: seqiv - Add seqniv Herbert Xu
@ 2015-05-21  7:11 ` Herbert Xu
  15 siblings, 0 replies; 17+ messages in thread
From: Herbert Xu @ 2015-05-21  7:11 UTC (permalink / raw)
  To: Linux Crypto Mailing List

This patch adds a new AEAD IV generator echainiv.  It is intended
to replace the existing skcipher IV generator eseqiv.

If the underlying AEAD algorithm uses the old AEAD interface, then
echainiv will simply drive that algorithm's existing IV generator.

Otherwise, echainiv will encrypt a counter just like eseqiv, but
it will first xor the counter against a previously stored IV,
similar to chainiv.
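
Read against the new-interface path below, the generation step is
(extract, my comments):

	crypto_xor(info, ctx->salt, ivsize);	/* salt the sequence number */
	/* place the salted counter in front of the plain text */
	scatterwalk_map_and_copy(info, req->dst, req->assoclen, ivsize, 1);
	/* the previously stored IV becomes the sub-request IV */
	echainiv_read_iv(info, ivsize);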

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/Kconfig    |   10 +
 crypto/Makefile   |    1 
 crypto/echainiv.c |  531 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 542 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 657bb82..b7088d1 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -227,6 +227,16 @@ config CRYPTO_SEQIV
 	  This IV generator generates an IV based on a sequence number by
 	  xoring it with a salt.  This algorithm is mainly useful for CTR
 
+config CRYPTO_ECHAINIV
+	tristate "Encrypted Chain IV Generator"
+	select CRYPTO_AEAD
+	select CRYPTO_NULL
+	select CRYPTO_RNG
+	help
+	  This IV generator generates an IV based on the encryption of
+	  a sequence number xored with a salt.  This is the default
+	  algorithm for CBC.
+
 comment "Block modes"
 
 config CRYPTO_CBC
diff --git a/crypto/Makefile b/crypto/Makefile
index 97b7d3a..df55363 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -21,6 +21,7 @@ obj-$(CONFIG_CRYPTO_BLKCIPHER2) += crypto_blkcipher.o
 obj-$(CONFIG_CRYPTO_BLKCIPHER2) += chainiv.o
 obj-$(CONFIG_CRYPTO_BLKCIPHER2) += eseqiv.o
 obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
+obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
 
 crypto_hash-y += ahash.o
 crypto_hash-y += shash.o
diff --git a/crypto/echainiv.c b/crypto/echainiv.c
new file mode 100644
index 0000000..e5a9878
--- /dev/null
+++ b/crypto/echainiv.c
@@ -0,0 +1,531 @@
+/*
+ * echainiv: Encrypted Chain IV Generator
+ *
+ * This generator generates an IV based on a sequence number by xoring it
+ * with a salt and then encrypting it with the same key as used to encrypt
+ * the plain text.  This algorithm requires that the block size be equal
+ * to the IV size.  It is mainly useful for CBC.
+ *
+ * This generator can only be used by algorithms where authentication
+ * is performed after encryption (i.e., authenc).
+ *
+ * Copyright (c) 2015 Herbert Xu <herbert@gondor.apana.org.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#include <crypto/internal/aead.h>
+#include <crypto/null.h>
+#include <crypto/rng.h>
+#include <crypto/scatterwalk.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+
+#define MAX_IV_SIZE 16
+
+struct echainiv_request_ctx {
+	struct scatterlist src[2];
+	struct scatterlist dst[2];
+	struct scatterlist ivbuf[2];
+	struct scatterlist *ivsg;
+	struct aead_givcrypt_request subreq;
+};
+
+struct echainiv_ctx {
+	struct crypto_aead *child;
+	spinlock_t lock;
+	struct crypto_blkcipher *null;
+	u8 salt[] __attribute__ ((aligned(__alignof__(u32))));
+};
+
+static DEFINE_PER_CPU(u32 [MAX_IV_SIZE / sizeof(u32)], echainiv_iv);
+
+static int echainiv_setkey(struct crypto_aead *tfm,
+			      const u8 *key, unsigned int keylen)
+{
+	struct echainiv_ctx *ctx = crypto_aead_ctx(tfm);
+
+	return crypto_aead_setkey(ctx->child, key, keylen);
+}
+
+static int echainiv_setauthsize(struct crypto_aead *tfm,
+				  unsigned int authsize)
+{
+	struct echainiv_ctx *ctx = crypto_aead_ctx(tfm);
+
+	return crypto_aead_setauthsize(ctx->child, authsize);
+}
+
+/* We don't care if we get preempted and read/write IVs from the next CPU. */
+static void echainiv_read_iv(u8 *dst, unsigned size)
+{
+	u32 *a = (u32 *)dst;
+	u32 __percpu *b = echainiv_iv;
+
+	for (; size >= 4; size -= 4) {
+		*a++ = this_cpu_read(*b);
+		b++;
+	}
+}
+
+static void echainiv_write_iv(const u8 *src, unsigned size)
+{
+	const u32 *a = (const u32 *)src;
+	u32 __percpu *b = echainiv_iv;
+
+	for (; size >= 4; size -= 4) {
+		this_cpu_write(*b, *a);
+		a++;
+		b++;
+	}
+}
+
+static void echainiv_encrypt_compat_complete2(struct aead_request *req,
+						 int err)
+{
+	struct echainiv_request_ctx *rctx = aead_request_ctx(req);
+	struct aead_givcrypt_request *subreq = &rctx->subreq;
+	struct crypto_aead *geniv;
+
+	if (err == -EINPROGRESS)
+		return;
+
+	if (err)
+		goto out;
+
+	geniv = crypto_aead_reqtfm(req);
+	scatterwalk_map_and_copy(subreq->giv, rctx->ivsg, 0,
+				 crypto_aead_ivsize(geniv), 1);
+
+out:
+	kzfree(subreq->giv);
+}
+
+static void echainiv_encrypt_compat_complete(
+	struct crypto_async_request *base, int err)
+{
+	struct aead_request *req = base->data;
+
+	echainiv_encrypt_compat_complete2(req, err);
+	aead_request_complete(req, err);
+}
+
+static void echainiv_encrypt_complete2(struct aead_request *req, int err)
+{
+	struct aead_request *subreq = aead_request_ctx(req);
+	struct crypto_aead *geniv;
+	unsigned int ivsize;
+
+	if (err == -EINPROGRESS)
+		return;
+
+	if (err)
+		goto out;
+
+	geniv = crypto_aead_reqtfm(req);
+	ivsize = crypto_aead_ivsize(geniv);
+
+	echainiv_write_iv(subreq->iv, ivsize);
+
+	if (req->iv != subreq->iv)
+		memcpy(req->iv, subreq->iv, ivsize);
+
+out:
+	if (req->iv != subreq->iv)
+		kzfree(subreq->iv);
+}
+
+static void echainiv_encrypt_complete(struct crypto_async_request *base,
+					 int err)
+{
+	struct aead_request *req = base->data;
+
+	echainiv_encrypt_complete2(req, err);
+	aead_request_complete(req, err);
+}
+
+static int echainiv_encrypt_compat(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
+	struct echainiv_request_ctx *rctx = aead_request_ctx(req);
+	struct aead_givcrypt_request *subreq = &rctx->subreq;
+	unsigned int ivsize = crypto_aead_ivsize(geniv);
+	crypto_completion_t compl;
+	void *data;
+	u8 *info;
+	__be64 seq;
+	int err;
+
+	compl = req->base.complete;
+	data = req->base.data;
+
+	rctx->ivsg = scatterwalk_ffwd(rctx->ivbuf, req->dst, req->assoclen);
+	info = PageHighMem(sg_page(rctx->ivsg)) ? NULL : sg_virt(rctx->ivsg);
+
+	if (!info) {
+		info = kmalloc(ivsize, req->base.flags &
+				       CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
+								  GFP_ATOMIC);
+		if (!info)
+			return -ENOMEM;
+
+		compl = echainiv_encrypt_compat_complete;
+		data = req;
+	}
+
+	memcpy(&seq, req->iv + ivsize - sizeof(seq), sizeof(seq));
+
+	aead_givcrypt_set_tfm(subreq, ctx->child);
+	aead_givcrypt_set_callback(subreq, req->base.flags,
+				   compl, data);
+	aead_givcrypt_set_crypt(subreq,
+				scatterwalk_ffwd(rctx->src, req->src,
+						 req->assoclen + ivsize),
+				scatterwalk_ffwd(rctx->dst, rctx->ivsg,
+						 ivsize),
+				req->cryptlen - ivsize, req->iv);
+	aead_givcrypt_set_assoc(subreq, req->src, req->assoclen);
+	aead_givcrypt_set_giv(subreq, info, be64_to_cpu(seq));
+
+	err = crypto_aead_givencrypt(subreq);
+	if (unlikely(PageHighMem(sg_page(rctx->ivsg))))
+		echainiv_encrypt_compat_complete2(req, err);
+	return err;
+}
+
+static int echainiv_encrypt(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
+	struct aead_request *subreq = aead_request_ctx(req);
+	crypto_completion_t compl;
+	void *data;
+	u8 *info;
+	unsigned int ivsize;
+	int err;
+
+	aead_request_set_tfm(subreq, ctx->child);
+
+	compl = echainiv_encrypt_complete;
+	data = req;
+	info = req->iv;
+
+	ivsize = crypto_aead_ivsize(geniv);
+
+	if (req->src != req->dst) {
+		struct scatterlist src[2];
+		struct scatterlist dst[2];
+		struct blkcipher_desc desc = {
+			.tfm = ctx->null,
+		};
+
+		err = crypto_blkcipher_encrypt(
+			&desc,
+			scatterwalk_ffwd(dst, req->dst,
+					 req->assoclen + ivsize),
+			scatterwalk_ffwd(src, req->src,
+					 req->assoclen + ivsize),
+			req->cryptlen - ivsize);
+		if (err)
+			return err;
+	}
+
+	if (unlikely(!IS_ALIGNED((unsigned long)info,
+				 crypto_aead_alignmask(geniv) + 1))) {
+		info = kmalloc(ivsize, req->base.flags &
+				       CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL:
+								  GFP_ATOMIC);
+		if (!info)
+			return -ENOMEM;
+
+		memcpy(info, req->iv, ivsize);
+	}
+
+	aead_request_set_callback(subreq, req->base.flags, compl, data);
+	aead_request_set_crypt(subreq, req->dst, req->dst,
+			       req->cryptlen - ivsize, info);
+	aead_request_set_ad(subreq, req->assoclen + ivsize, 0);
+
+	crypto_xor(info, ctx->salt, ivsize);
+	scatterwalk_map_and_copy(info, req->dst, req->assoclen, ivsize, 1);
+	echainiv_read_iv(info, ivsize);
+
+	err = crypto_aead_encrypt(subreq);
+	echainiv_encrypt_complete2(req, err);
+	return err;
+}
+
+static int echainiv_decrypt_compat(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
+	struct aead_request *subreq = aead_request_ctx(req);
+	crypto_completion_t compl;
+	void *data;
+	unsigned int ivsize;
+
+	aead_request_set_tfm(subreq, ctx->child);
+
+	compl = req->base.complete;
+	data = req->base.data;
+
+	ivsize = crypto_aead_ivsize(geniv);
+
+	aead_request_set_callback(subreq, req->base.flags, compl, data);
+	aead_request_set_crypt(subreq, req->src, req->dst,
+			       req->cryptlen - ivsize, req->iv);
+	aead_request_set_ad(subreq, req->assoclen, ivsize);
+
+	scatterwalk_map_and_copy(req->iv, req->src, req->assoclen, ivsize, 0);
+
+	return crypto_aead_decrypt(subreq);
+}
+
+static int echainiv_decrypt(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
+	struct aead_request *subreq = aead_request_ctx(req);
+	crypto_completion_t compl;
+	void *data;
+	unsigned int ivsize;
+
+	aead_request_set_tfm(subreq, ctx->child);
+
+	compl = req->base.complete;
+	data = req->base.data;
+
+	ivsize = crypto_aead_ivsize(geniv);
+
+	aead_request_set_callback(subreq, req->base.flags, compl, data);
+	aead_request_set_crypt(subreq, req->src, req->dst,
+			       req->cryptlen - ivsize, req->iv);
+	aead_request_set_ad(subreq, req->assoclen + ivsize, 0);
+
+	scatterwalk_map_and_copy(req->iv, req->src, req->assoclen, ivsize, 0);
+	if (req->src != req->dst)
+		scatterwalk_map_and_copy(req->iv, req->dst,
+					 req->assoclen, ivsize, 1);
+
+	return crypto_aead_decrypt(subreq);
+}
+
+static int echainiv_encrypt_compat_first(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
+	int err = 0;
+
+	spin_lock_bh(&ctx->lock);
+	if (geniv->encrypt != echainiv_encrypt_compat_first)
+		goto unlock;
+
+	geniv->encrypt = echainiv_encrypt_compat;
+	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				   crypto_aead_ivsize(geniv));
+
+unlock:
+	spin_unlock_bh(&ctx->lock);
+
+	if (err)
+		return err;
+
+	return echainiv_encrypt_compat(req);
+}
+
+static int echainiv_encrypt_first(struct aead_request *req)
+{
+	struct crypto_aead *geniv = crypto_aead_reqtfm(req);
+	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
+	int err = 0;
+
+	spin_lock_bh(&ctx->lock);
+	if (geniv->encrypt != echainiv_encrypt_first)
+		goto unlock;
+
+	geniv->encrypt = echainiv_encrypt;
+	err = crypto_rng_get_bytes(crypto_default_rng, ctx->salt,
+				   crypto_aead_ivsize(geniv));
+
+unlock:
+	spin_unlock_bh(&ctx->lock);
+
+	if (err)
+		return err;
+
+	return echainiv_encrypt(req);
+}
+
+static int echainiv_compat_init(struct crypto_tfm *tfm)
+{
+	struct crypto_aead *geniv = __crypto_aead_cast(tfm);
+	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
+	int err;
+
+	spin_lock_init(&ctx->lock);
+
+	crypto_aead_set_reqsize(geniv, sizeof(struct echainiv_request_ctx));
+
+	err = aead_geniv_init(tfm);
+
+	ctx->child = geniv->child;
+	geniv->child = geniv;
+
+	return err;
+}
+
+static int echainiv_init(struct crypto_tfm *tfm)
+{
+	struct crypto_aead *geniv = __crypto_aead_cast(tfm);
+	struct echainiv_ctx *ctx = crypto_aead_ctx(geniv);
+	int err;
+
+	spin_lock_init(&ctx->lock);
+
+	crypto_aead_set_reqsize(geniv, sizeof(struct aead_request));
+
+	ctx->null = crypto_get_default_null_skcipher();
+	err = PTR_ERR(ctx->null);
+	if (IS_ERR(ctx->null))
+		goto out;
+
+	err = aead_geniv_init(tfm);
+	if (err)
+		goto drop_null;
+
+	ctx->child = geniv->child;
+	geniv->child = geniv;
+
+out:
+	return err;
+
+drop_null:
+	crypto_put_default_null_skcipher();
+	goto out;
+}
+
+static void echainiv_compat_exit(struct crypto_tfm *tfm)
+{
+	struct echainiv_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_aead(ctx->child);
+}
+
+static void echainiv_exit(struct crypto_tfm *tfm)
+{
+	struct echainiv_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_aead(ctx->child);
+	crypto_put_default_null_skcipher();
+}
+
+static struct crypto_template echainiv_tmpl;
+
+static struct crypto_instance *echainiv_aead_alloc(struct rtattr **tb)
+{
+	struct aead_instance *inst;
+	struct crypto_aead_spawn *spawn;
+	struct aead_alg *alg;
+
+	inst = aead_geniv_alloc(&echainiv_tmpl, tb, 0, 0);
+
+	if (IS_ERR(inst))
+		goto out;
+
+	if (inst->alg.ivsize < sizeof(u64) ||
+	    inst->alg.ivsize & (sizeof(u32) - 1) ||
+	    inst->alg.ivsize > MAX_IV_SIZE) {
+		aead_geniv_free(inst);
+		inst = ERR_PTR(-EINVAL);
+		goto out;
+	}
+
+	spawn = aead_instance_ctx(inst);
+	alg = crypto_spawn_aead_alg(spawn);
+
+	inst->alg.setkey = echainiv_setkey;
+	inst->alg.setauthsize = echainiv_setauthsize;
+	inst->alg.encrypt = echainiv_encrypt_first;
+	inst->alg.decrypt = echainiv_decrypt;
+
+	inst->alg.base.cra_init = echainiv_init;
+	inst->alg.base.cra_exit = echainiv_exit;
+
+	inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
+	inst->alg.base.cra_ctxsize = sizeof(struct echainiv_ctx);
+	inst->alg.base.cra_ctxsize += inst->alg.base.cra_aead.ivsize;
+
+	if (alg->base.cra_aead.encrypt) {
+		inst->alg.encrypt = echainiv_encrypt_compat_first;
+		inst->alg.decrypt = echainiv_decrypt_compat;
+
+		inst->alg.base.cra_init = echainiv_compat_init;
+		inst->alg.base.cra_exit = echainiv_compat_exit;
+	}
+
+out:
+	return aead_crypto_instance(inst);
+}
+
+static struct crypto_instance *echainiv_alloc(struct rtattr **tb)
+{
+	struct crypto_instance *inst;
+	int err;
+
+	err = crypto_get_default_rng();
+	if (err)
+		return ERR_PTR(err);
+
+	inst = echainiv_aead_alloc(tb);
+
+	if (IS_ERR(inst))
+		goto put_rng;
+
+out:
+	return inst;
+
+put_rng:
+	crypto_put_default_rng();
+	goto out;
+}
+
+static void echainiv_free(struct crypto_instance *inst)
+{
+	aead_geniv_free(aead_instance(inst));
+	crypto_put_default_rng();
+}
+
+static struct crypto_template echainiv_tmpl = {
+	.name = "echainiv",
+	.alloc = echainiv_alloc,
+	.free = echainiv_free,
+	.module = THIS_MODULE,
+};
+
+static int __init echainiv_module_init(void)
+{
+	return crypto_register_template(&echainiv_tmpl);
+}
+
+static void __exit echainiv_module_exit(void)
+{
+	crypto_unregister_template(&echainiv_tmpl);
+}
+
+module_init(echainiv_module_init);
+module_exit(echainiv_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Encrypted Chain IV Generator");
+MODULE_ALIAS_CRYPTO("echainiv");

^ permalink raw reply related	[flat|nested] 17+ messages in thread
