linux-kernel.vger.kernel.org archive mirror
* [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage
@ 2018-09-19  2:10 Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 01/23] crypto: skcipher - Introduce crypto_sync_skcipher Kees Cook
                   ` (24 more replies)
  0 siblings, 25 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

This is the full follow-up to earlier discussions[1] that suggested
adding a new struct crypto_sync_skcipher to handle the VLA removal from
SKCIPHER_REQUEST_ON_STACK.

This series is functionally a no-op: everything is a thin wrapper
around struct crypto_skcipher, but it adds compile-time enforcement
against putting an ASYNC skcipher on the stack, which allows the
on-stack requests to be declared with a fixed size.

[1] https://lkml.kernel.org/r/CAGXu5j+bpLK=EQ9LHkO8V=sdaQwt==6fbGhgn2Vi1E9_WxSGRQ@mail.gmail.com
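
The per-caller conversion is mechanical. As a sketch of the pattern
(using "cbc(aes)" as a stand-in algorithm name; the real call sites
are in the individual patches):

	-	struct crypto_skcipher *tfm =
	-		crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
	+	struct crypto_sync_skcipher *tfm =
	+		crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
		...
	-	SKCIPHER_REQUEST_ON_STACK(req, tfm);	/* VLA: reqsize is a runtime value */
	-	skcipher_request_set_tfm(req, tfm);
	+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm); /* fixed-size buffer */
	+	skcipher_request_set_sync_tfm(req, tfm);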

-Kees

Kees Cook (23):
  crypto: skcipher - Introduce crypto_sync_skcipher
  gss_krb5: Remove VLA usage of skcipher
  lib80211: Remove VLA usage of skcipher
  mac802154: Remove VLA usage of skcipher
  s390/crypto: Remove VLA usage of skcipher
  x86/fpu: Remove VLA usage of skcipher
  block: cryptoloop: Remove VLA usage of skcipher
  libceph: Remove VLA usage of skcipher
  ppp: mppe: Remove VLA usage of skcipher
  rxrpc: Remove VLA usage of skcipher
  wusb: Remove VLA usage of skcipher
  crypto: ccp - Remove VLA usage of skcipher
  crypto: vmx - Remove VLA usage of skcipher
  crypto: null - Remove VLA usage of skcipher
  crypto: cryptd - Remove VLA usage of skcipher
  crypto: sahara - Remove VLA usage of skcipher
  crypto: qce - Remove VLA usage of skcipher
  crypto: artpec6 - Remove VLA usage of skcipher
  crypto: chelsio - Remove VLA usage of skcipher
  crypto: mxs-dcp - Remove VLA usage of skcipher
  crypto: omap-aes - Remove VLA usage of skcipher
  crypto: picoxcell - Remove VLA usage of skcipher
  crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK()

 arch/s390/crypto/aes_s390.c                   | 48 +++++-----
 arch/x86/crypto/fpu.c                         | 30 ++++---
 crypto/algif_aead.c                           | 12 +--
 crypto/authenc.c                              |  8 +-
 crypto/authencesn.c                           |  8 +-
 crypto/cryptd.c                               | 32 +++----
 crypto/crypto_null.c                          | 11 ++-
 crypto/echainiv.c                             |  4 +-
 crypto/gcm.c                                  |  8 +-
 crypto/seqiv.c                                |  4 +-
 crypto/skcipher.c                             | 24 +++++
 drivers/block/cryptoloop.c                    | 22 ++---
 drivers/crypto/axis/artpec6_crypto.c          | 19 ++--
 drivers/crypto/ccp/ccp-crypto-aes-xts.c       | 13 +--
 drivers/crypto/ccp/ccp-crypto.h               |  2 +-
 drivers/crypto/chelsio/chcr_algo.c            | 27 +++---
 drivers/crypto/chelsio/chcr_crypto.h          |  2 +-
 drivers/crypto/mxs-dcp.c                      | 21 +++--
 drivers/crypto/omap-aes.c                     | 17 ++--
 drivers/crypto/omap-aes.h                     |  2 +-
 drivers/crypto/picoxcell_crypto.c             | 21 +++--
 drivers/crypto/qce/ablkcipher.c               | 13 ++-
 drivers/crypto/qce/cipher.h                   |  2 +-
 drivers/crypto/sahara.c                       | 31 ++++---
 drivers/crypto/vmx/aes_cbc.c                  | 22 ++---
 drivers/crypto/vmx/aes_ctr.c                  | 18 ++--
 drivers/crypto/vmx/aes_xts.c                  | 18 ++--
 drivers/net/ppp/ppp_mppe.c                    | 27 +++---
 drivers/staging/rtl8192e/rtllib_crypt_tkip.c  | 34 ++++----
 drivers/staging/rtl8192e/rtllib_crypt_wep.c   | 28 +++---
 .../rtl8192u/ieee80211/ieee80211_crypt_tkip.c | 34 ++++----
 .../rtl8192u/ieee80211/ieee80211_crypt_wep.c  | 26 +++---
 drivers/usb/wusbcore/crypto.c                 | 16 ++--
 include/crypto/internal/geniv.h               |  2 +-
 include/crypto/null.h                         |  2 +-
 include/crypto/skcipher.h                     | 74 +++++++++++++++-
 include/linux/sunrpc/gss_krb5.h               | 30 +++----
 net/ceph/crypto.c                             | 12 +--
 net/ceph/crypto.h                             |  2 +-
 net/mac802154/llsec.c                         | 16 ++--
 net/mac802154/llsec.h                         |  2 +-
 net/rxrpc/ar-internal.h                       |  2 +-
 net/rxrpc/rxkad.c                             | 44 +++++-----
 net/sunrpc/auth_gss/gss_krb5_crypto.c         | 87 ++++++++++---------
 net/sunrpc/auth_gss/gss_krb5_keys.c           |  9 +-
 net/sunrpc/auth_gss/gss_krb5_mech.c           | 53 ++++++-----
 net/sunrpc/auth_gss/gss_krb5_seqnum.c         | 18 ++--
 net/sunrpc/auth_gss/gss_krb5_wrap.c           | 20 ++---
 net/wireless/lib80211_crypt_tkip.c            | 34 ++++----
 net/wireless/lib80211_crypt_wep.c             | 28 +++---
 50 files changed, 563 insertions(+), 476 deletions(-)

-- 
2.17.1


* [PATCH crypto-next 01/23] crypto: skcipher - Introduce crypto_sync_skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-24 11:48   ` Ard Biesheuvel
  2018-09-19  2:10 ` [PATCH crypto-next 02/23] gss_krb5: Remove VLA usage of skcipher Kees Cook
                   ` (23 subsequent siblings)
  24 siblings, 1 reply; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In preparation for removing the VLAs created by on-stack skcipher
requests via SKCIPHER_REQUEST_ON_STACK(), this introduces the
infrastructure for the "sync skcipher" tfm, which handles the on-stack
uses of skcipher: these are always non-ASYNC and have a known, limited
request size.

The crypto API additions (a usage sketch follows the list):

	struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
	crypto_alloc_sync_skcipher()
	crypto_free_sync_skcipher()
	crypto_sync_skcipher_setkey()
	crypto_sync_skcipher_get_flags()
	crypto_sync_skcipher_set_flags()
	crypto_sync_skcipher_clear_flags()
	crypto_sync_skcipher_blocksize()
	crypto_sync_skcipher_ivsize()
	crypto_sync_skcipher_reqtfm()
	skcipher_request_set_sync_tfm()
	SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)
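
A minimal usage sketch (hypothetical function, mirroring the
krb5_encrypt() conversion in the next patch; the on-stack request is
now a fixed-size, suitably aligned char buffer instead of a VLA):

	static int example_encrypt(struct crypto_sync_skcipher *tfm,
				   struct scatterlist *sg,
				   unsigned int len, u8 *iv)
	{
		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
		int err;

		skcipher_request_set_sync_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		skcipher_request_set_crypt(req, sg, sg, len, iv);
		err = crypto_skcipher_encrypt(req);
		skcipher_request_zero(req);
		return err;
	}

Handing SYNC_SKCIPHER_REQUEST_ON_STACK() a plain struct
crypto_skcipher pointer trips the macro's pointer comparison and
produces a "comparison of distinct pointer types" warning at build
time, which is the type check noted above.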

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/skcipher.c         | 24 +++++++++++++
 include/crypto/skcipher.h | 75 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 0bd8c6caa498..4caab81d2d02 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -949,6 +949,30 @@ struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_skcipher);
 
+struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(
+				const char *alg_name, u32 type, u32 mask)
+{
+	struct crypto_skcipher *tfm;
+
+	/* Only sync algorithms allowed. */
+	mask |= CRYPTO_ALG_ASYNC;
+
+	tfm = crypto_alloc_tfm(alg_name, &crypto_skcipher_type2, type, mask);
+
+	/*
+	 * Make sure we do not allocate something that might get used with
+	 * an on-stack request: check the request size.
+	 */
+	if (!IS_ERR(tfm) && WARN_ON(crypto_skcipher_reqsize(tfm) >
+				    MAX_SYNC_SKCIPHER_REQSIZE)) {
+		crypto_free_skcipher(tfm);
+		return ERR_PTR(-EINVAL);
+	}
+
+	return (struct crypto_sync_skcipher *)tfm;
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_sync_skcipher);
+
 int crypto_has_skcipher2(const char *alg_name, u32 type, u32 mask)
 {
 	return crypto_type_has_alg(alg_name, &crypto_skcipher_type2,
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 2f327f090c3e..d00ce90dc7da 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -65,6 +65,10 @@ struct crypto_skcipher {
 	struct crypto_tfm base;
 };
 
+struct crypto_sync_skcipher {
+	struct crypto_skcipher base;
+};
+
 /**
  * struct skcipher_alg - symmetric key cipher definition
  * @min_keysize: Minimum key size supported by the transformation. This is the
@@ -139,6 +143,19 @@ struct skcipher_alg {
 	struct crypto_alg base;
 };
 
+#define MAX_SYNC_SKCIPHER_REQSIZE      384
+/*
+ * This performs a type-check against the "tfm" argument to make sure
+ * all users have the correct skcipher tfm for doing on-stack requests.
+ */
+#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, tfm) \
+	char __##name##_desc[sizeof(struct skcipher_request) + \
+			     MAX_SYNC_SKCIPHER_REQSIZE + \
+			     (!(sizeof((struct crypto_sync_skcipher *)1 == \
+				       (typeof(tfm))1))) \
+			    ] CRYPTO_MINALIGN_ATTR; \
+	struct skcipher_request *name = (void *)__##name##_desc
+
 #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
 	char __##name##_desc[sizeof(struct skcipher_request) + \
 		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
@@ -197,6 +214,9 @@ static inline struct crypto_skcipher *__crypto_skcipher_cast(
 struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name,
 					      u32 type, u32 mask);
 
+struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(const char *alg_name,
+					      u32 type, u32 mask);
+
 static inline struct crypto_tfm *crypto_skcipher_tfm(
 	struct crypto_skcipher *tfm)
 {
@@ -212,6 +232,11 @@ static inline void crypto_free_skcipher(struct crypto_skcipher *tfm)
 	crypto_destroy_tfm(tfm, crypto_skcipher_tfm(tfm));
 }
 
+static inline void crypto_free_sync_skcipher(struct crypto_sync_skcipher *tfm)
+{
+	crypto_free_skcipher(&tfm->base);
+}
+
 /**
  * crypto_has_skcipher() - Search for the availability of an skcipher.
  * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
@@ -280,6 +305,12 @@ static inline unsigned int crypto_skcipher_ivsize(struct crypto_skcipher *tfm)
 	return tfm->ivsize;
 }
 
+static inline unsigned int crypto_sync_skcipher_ivsize(
+	struct crypto_sync_skcipher *tfm)
+{
+	return crypto_skcipher_ivsize(&tfm->base);
+}
+
 static inline unsigned int crypto_skcipher_alg_chunksize(
 	struct skcipher_alg *alg)
 {
@@ -356,6 +387,12 @@ static inline unsigned int crypto_skcipher_blocksize(
 	return crypto_tfm_alg_blocksize(crypto_skcipher_tfm(tfm));
 }
 
+static inline unsigned int crypto_sync_skcipher_blocksize(
+	struct crypto_sync_skcipher *tfm)
+{
+	return crypto_skcipher_blocksize(&tfm->base);
+}
+
 static inline unsigned int crypto_skcipher_alignmask(
 	struct crypto_skcipher *tfm)
 {
@@ -379,6 +416,24 @@ static inline void crypto_skcipher_clear_flags(struct crypto_skcipher *tfm,
 	crypto_tfm_clear_flags(crypto_skcipher_tfm(tfm), flags);
 }
 
+static inline u32 crypto_sync_skcipher_get_flags(
+	struct crypto_sync_skcipher *tfm)
+{
+	return crypto_skcipher_get_flags(&tfm->base);
+}
+
+static inline void crypto_sync_skcipher_set_flags(
+	struct crypto_sync_skcipher *tfm, u32 flags)
+{
+	crypto_skcipher_set_flags(&tfm->base, flags);
+}
+
+static inline void crypto_sync_skcipher_clear_flags(
+	struct crypto_sync_skcipher *tfm, u32 flags)
+{
+	crypto_skcipher_clear_flags(&tfm->base, flags);
+}
+
 /**
  * crypto_skcipher_setkey() - set key for cipher
  * @tfm: cipher handle
@@ -401,6 +456,12 @@ static inline int crypto_skcipher_setkey(struct crypto_skcipher *tfm,
 	return tfm->setkey(tfm, key, keylen);
 }
 
+static inline int crypto_sync_skcipher_setkey(struct crypto_sync_skcipher *tfm,
+					 const u8 *key, unsigned int keylen)
+{
+	return crypto_skcipher_setkey(&tfm->base, key, keylen);
+}
+
 static inline unsigned int crypto_skcipher_default_keysize(
 	struct crypto_skcipher *tfm)
 {
@@ -422,6 +483,14 @@ static inline struct crypto_skcipher *crypto_skcipher_reqtfm(
 	return __crypto_skcipher_cast(req->base.tfm);
 }
 
+static inline struct crypto_sync_skcipher *crypto_sync_skcipher_reqtfm(
+	struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+
+	return container_of(tfm, struct crypto_sync_skcipher, base);
+}
+
 /**
  * crypto_skcipher_encrypt() - encrypt plaintext
  * @req: reference to the skcipher_request handle that holds all information
@@ -500,6 +569,12 @@ static inline void skcipher_request_set_tfm(struct skcipher_request *req,
 	req->base.tfm = crypto_skcipher_tfm(tfm);
 }
 
+static inline void skcipher_request_set_sync_tfm(struct skcipher_request *req,
+					    struct crypto_sync_skcipher *tfm)
+{
+	skcipher_request_set_tfm(req, &tfm->base);
+}
+
 static inline struct skcipher_request *skcipher_request_cast(
 	struct crypto_async_request *req)
 {
-- 
2.17.1


* [PATCH crypto-next 02/23] gss_krb5: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 01/23] crypto: skcipher - Introduce crypto_sync_skcipher Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 03/23] lib80211: " Kees Cook
                   ` (22 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Trond Myklebust, Anna Schumaker, J. Bruce Fields,
	Jeff Layton, YueHaibing, linux-nfs, Ard Biesheuvel, Eric Biggers,
	linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
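
Concretely, the on-stack request becomes a fixed worst-case buffer. A
simplified sketch of what the new macro expands to (see patch 1 for
the exact version, which also carries the tfm pointer type check):

	char __req_desc[sizeof(struct skcipher_request) +
			MAX_SYNC_SKCIPHER_REQSIZE /* 384 */
		       ] CRYPTO_MINALIGN_ATTR;
	struct skcipher_request *req = (void *)__req_desc;

crypto_alloc_sync_skcipher() refuses (with a WARN) to return a tfm
whose request size exceeds that bound, so the fixed buffer is always
large enough.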

Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <anna.schumaker@netapp.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: YueHaibing <yuehaibing@huawei.com>
Cc: linux-nfs@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/linux/sunrpc/gss_krb5.h       | 30 ++++-----
 net/sunrpc/auth_gss/gss_krb5_crypto.c | 87 ++++++++++++++-------------
 net/sunrpc/auth_gss/gss_krb5_keys.c   |  9 ++-
 net/sunrpc/auth_gss/gss_krb5_mech.c   | 53 ++++++++--------
 net/sunrpc/auth_gss/gss_krb5_seqnum.c | 18 +++---
 net/sunrpc/auth_gss/gss_krb5_wrap.c   | 20 +++---
 6 files changed, 108 insertions(+), 109 deletions(-)

diff --git a/include/linux/sunrpc/gss_krb5.h b/include/linux/sunrpc/gss_krb5.h
index 7df625d41e35..f6e8ceafafd8 100644
--- a/include/linux/sunrpc/gss_krb5.h
+++ b/include/linux/sunrpc/gss_krb5.h
@@ -71,10 +71,10 @@ struct gss_krb5_enctype {
 	const u32		keyed_cksum;	/* is it a keyed cksum? */
 	const u32		keybytes;	/* raw key len, in bytes */
 	const u32		keylength;	/* final key len, in bytes */
-	u32 (*encrypt) (struct crypto_skcipher *tfm,
+	u32 (*encrypt) (struct crypto_sync_skcipher *tfm,
 			void *iv, void *in, void *out,
 			int length);		/* encryption function */
-	u32 (*decrypt) (struct crypto_skcipher *tfm,
+	u32 (*decrypt) (struct crypto_sync_skcipher *tfm,
 			void *iv, void *in, void *out,
 			int length);		/* decryption function */
 	u32 (*mk_key) (const struct gss_krb5_enctype *gk5e,
@@ -98,12 +98,12 @@ struct krb5_ctx {
 	u32			enctype;
 	u32			flags;
 	const struct gss_krb5_enctype *gk5e; /* enctype-specific info */
-	struct crypto_skcipher	*enc;
-	struct crypto_skcipher	*seq;
-	struct crypto_skcipher *acceptor_enc;
-	struct crypto_skcipher *initiator_enc;
-	struct crypto_skcipher *acceptor_enc_aux;
-	struct crypto_skcipher *initiator_enc_aux;
+	struct crypto_sync_skcipher *enc;
+	struct crypto_sync_skcipher *seq;
+	struct crypto_sync_skcipher *acceptor_enc;
+	struct crypto_sync_skcipher *initiator_enc;
+	struct crypto_sync_skcipher *acceptor_enc_aux;
+	struct crypto_sync_skcipher *initiator_enc_aux;
 	u8			Ksess[GSS_KRB5_MAX_KEYLEN]; /* session key */
 	u8			cksum[GSS_KRB5_MAX_KEYLEN];
 	s32			endtime;
@@ -262,24 +262,24 @@ gss_unwrap_kerberos(struct gss_ctx *ctx_id, int offset,
 
 
 u32
-krb5_encrypt(struct crypto_skcipher *key,
+krb5_encrypt(struct crypto_sync_skcipher *key,
 	     void *iv, void *in, void *out, int length);
 
 u32
-krb5_decrypt(struct crypto_skcipher *key,
+krb5_decrypt(struct crypto_sync_skcipher *key,
 	     void *iv, void *in, void *out, int length); 
 
 int
-gss_encrypt_xdr_buf(struct crypto_skcipher *tfm, struct xdr_buf *outbuf,
+gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *outbuf,
 		    int offset, struct page **pages);
 
 int
-gss_decrypt_xdr_buf(struct crypto_skcipher *tfm, struct xdr_buf *inbuf,
+gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *inbuf,
 		    int offset);
 
 s32
 krb5_make_seq_num(struct krb5_ctx *kctx,
-		struct crypto_skcipher *key,
+		struct crypto_sync_skcipher *key,
 		int direction,
 		u32 seqnum, unsigned char *cksum, unsigned char *buf);
 
@@ -320,12 +320,12 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset,
 
 int
 krb5_rc4_setup_seq_key(struct krb5_ctx *kctx,
-		       struct crypto_skcipher *cipher,
+		       struct crypto_sync_skcipher *cipher,
 		       unsigned char *cksum);
 
 int
 krb5_rc4_setup_enc_key(struct krb5_ctx *kctx,
-		       struct crypto_skcipher *cipher,
+		       struct crypto_sync_skcipher *cipher,
 		       s32 seqnum);
 void
 gss_krb5_make_confounder(char *p, u32 conflen);
diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
index 0220e1ca5280..4f43383971ba 100644
--- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
+++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
@@ -53,7 +53,7 @@
 
 u32
 krb5_encrypt(
-	struct crypto_skcipher *tfm,
+	struct crypto_sync_skcipher *tfm,
 	void * iv,
 	void * in,
 	void * out,
@@ -62,24 +62,24 @@ krb5_encrypt(
 	u32 ret = -EINVAL;
 	struct scatterlist sg[1];
 	u8 local_iv[GSS_KRB5_MAX_BLOCKSIZE] = {0};
-	SKCIPHER_REQUEST_ON_STACK(req, tfm);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
 
-	if (length % crypto_skcipher_blocksize(tfm) != 0)
+	if (length % crypto_sync_skcipher_blocksize(tfm) != 0)
 		goto out;
 
-	if (crypto_skcipher_ivsize(tfm) > GSS_KRB5_MAX_BLOCKSIZE) {
+	if (crypto_sync_skcipher_ivsize(tfm) > GSS_KRB5_MAX_BLOCKSIZE) {
 		dprintk("RPC:       gss_k5encrypt: tfm iv size too large %d\n",
-			crypto_skcipher_ivsize(tfm));
+			crypto_sync_skcipher_ivsize(tfm));
 		goto out;
 	}
 
 	if (iv)
-		memcpy(local_iv, iv, crypto_skcipher_ivsize(tfm));
+		memcpy(local_iv, iv, crypto_sync_skcipher_ivsize(tfm));
 
 	memcpy(out, in, length);
 	sg_init_one(sg, out, length);
 
-	skcipher_request_set_tfm(req, tfm);
+	skcipher_request_set_sync_tfm(req, tfm);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, length, local_iv);
 
@@ -92,7 +92,7 @@ krb5_encrypt(
 
 u32
 krb5_decrypt(
-     struct crypto_skcipher *tfm,
+     struct crypto_sync_skcipher *tfm,
      void * iv,
      void * in,
      void * out,
@@ -101,23 +101,23 @@ krb5_decrypt(
 	u32 ret = -EINVAL;
 	struct scatterlist sg[1];
 	u8 local_iv[GSS_KRB5_MAX_BLOCKSIZE] = {0};
-	SKCIPHER_REQUEST_ON_STACK(req, tfm);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
 
-	if (length % crypto_skcipher_blocksize(tfm) != 0)
+	if (length % crypto_sync_skcipher_blocksize(tfm) != 0)
 		goto out;
 
-	if (crypto_skcipher_ivsize(tfm) > GSS_KRB5_MAX_BLOCKSIZE) {
+	if (crypto_sync_skcipher_ivsize(tfm) > GSS_KRB5_MAX_BLOCKSIZE) {
 		dprintk("RPC:       gss_k5decrypt: tfm iv size too large %d\n",
-			crypto_skcipher_ivsize(tfm));
+			crypto_sync_skcipher_ivsize(tfm));
 		goto out;
 	}
 	if (iv)
-		memcpy(local_iv,iv, crypto_skcipher_ivsize(tfm));
+		memcpy(local_iv, iv, crypto_sync_skcipher_ivsize(tfm));
 
 	memcpy(out, in, length);
 	sg_init_one(sg, out, length);
 
-	skcipher_request_set_tfm(req, tfm);
+	skcipher_request_set_sync_tfm(req, tfm);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, length, local_iv);
 
@@ -466,7 +466,8 @@ encryptor(struct scatterlist *sg, void *data)
 {
 	struct encryptor_desc *desc = data;
 	struct xdr_buf *outbuf = desc->outbuf;
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(desc->req);
+	struct crypto_sync_skcipher *tfm =
+		crypto_sync_skcipher_reqtfm(desc->req);
 	struct page *in_page;
 	int thislen = desc->fraglen + sg->length;
 	int fraglen, ret;
@@ -492,7 +493,7 @@ encryptor(struct scatterlist *sg, void *data)
 	desc->fraglen += sg->length;
 	desc->pos += sg->length;
 
-	fraglen = thislen & (crypto_skcipher_blocksize(tfm) - 1);
+	fraglen = thislen & (crypto_sync_skcipher_blocksize(tfm) - 1);
 	thislen -= fraglen;
 
 	if (thislen == 0)
@@ -526,16 +527,16 @@ encryptor(struct scatterlist *sg, void *data)
 }
 
 int
-gss_encrypt_xdr_buf(struct crypto_skcipher *tfm, struct xdr_buf *buf,
+gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf,
 		    int offset, struct page **pages)
 {
 	int ret;
 	struct encryptor_desc desc;
-	SKCIPHER_REQUEST_ON_STACK(req, tfm);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
 
-	BUG_ON((buf->len - offset) % crypto_skcipher_blocksize(tfm) != 0);
+	BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0);
 
-	skcipher_request_set_tfm(req, tfm);
+	skcipher_request_set_sync_tfm(req, tfm);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 
 	memset(desc.iv, 0, sizeof(desc.iv));
@@ -567,7 +568,8 @@ decryptor(struct scatterlist *sg, void *data)
 {
 	struct decryptor_desc *desc = data;
 	int thislen = desc->fraglen + sg->length;
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(desc->req);
+	struct crypto_sync_skcipher *tfm =
+		crypto_sync_skcipher_reqtfm(desc->req);
 	int fraglen, ret;
 
 	/* Worst case is 4 fragments: head, end of page 1, start
@@ -578,7 +580,7 @@ decryptor(struct scatterlist *sg, void *data)
 	desc->fragno++;
 	desc->fraglen += sg->length;
 
-	fraglen = thislen & (crypto_skcipher_blocksize(tfm) - 1);
+	fraglen = thislen & (crypto_sync_skcipher_blocksize(tfm) - 1);
 	thislen -= fraglen;
 
 	if (thislen == 0)
@@ -608,17 +610,17 @@ decryptor(struct scatterlist *sg, void *data)
 }
 
 int
-gss_decrypt_xdr_buf(struct crypto_skcipher *tfm, struct xdr_buf *buf,
+gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf,
 		    int offset)
 {
 	int ret;
 	struct decryptor_desc desc;
-	SKCIPHER_REQUEST_ON_STACK(req, tfm);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
 
 	/* XXXJBF: */
-	BUG_ON((buf->len - offset) % crypto_skcipher_blocksize(tfm) != 0);
+	BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0);
 
-	skcipher_request_set_tfm(req, tfm);
+	skcipher_request_set_sync_tfm(req, tfm);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 
 	memset(desc.iv, 0, sizeof(desc.iv));
@@ -672,12 +674,12 @@ xdr_extend_head(struct xdr_buf *buf, unsigned int base, unsigned int shiftlen)
 }
 
 static u32
-gss_krb5_cts_crypt(struct crypto_skcipher *cipher, struct xdr_buf *buf,
+gss_krb5_cts_crypt(struct crypto_sync_skcipher *cipher, struct xdr_buf *buf,
 		   u32 offset, u8 *iv, struct page **pages, int encrypt)
 {
 	u32 ret;
 	struct scatterlist sg[1];
-	SKCIPHER_REQUEST_ON_STACK(req, cipher);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, cipher);
 	u8 *data;
 	struct page **save_pages;
 	u32 len = buf->len - offset;
@@ -706,7 +708,7 @@ gss_krb5_cts_crypt(struct crypto_skcipher *cipher, struct xdr_buf *buf,
 
 	sg_init_one(sg, data, len);
 
-	skcipher_request_set_tfm(req, cipher);
+	skcipher_request_set_sync_tfm(req, cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, len, iv);
 
@@ -735,7 +737,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset,
 	struct xdr_netobj hmac;
 	u8 *cksumkey;
 	u8 *ecptr;
-	struct crypto_skcipher *cipher, *aux_cipher;
+	struct crypto_sync_skcipher *cipher, *aux_cipher;
 	int blocksize;
 	struct page **save_pages;
 	int nblocks, nbytes;
@@ -754,7 +756,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset,
 		cksumkey = kctx->acceptor_integ;
 		usage = KG_USAGE_ACCEPTOR_SEAL;
 	}
-	blocksize = crypto_skcipher_blocksize(cipher);
+	blocksize = crypto_sync_skcipher_blocksize(cipher);
 
 	/* hide the gss token header and insert the confounder */
 	offset += GSS_KRB5_TOK_HDR_LEN;
@@ -807,7 +809,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset,
 	memset(desc.iv, 0, sizeof(desc.iv));
 
 	if (cbcbytes) {
-		SKCIPHER_REQUEST_ON_STACK(req, aux_cipher);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, aux_cipher);
 
 		desc.pos = offset + GSS_KRB5_TOK_HDR_LEN;
 		desc.fragno = 0;
@@ -816,7 +818,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset,
 		desc.outbuf = buf;
 		desc.req = req;
 
-		skcipher_request_set_tfm(req, aux_cipher);
+		skcipher_request_set_sync_tfm(req, aux_cipher);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 
 		sg_init_table(desc.infrags, 4);
@@ -855,7 +857,7 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf,
 	struct xdr_buf subbuf;
 	u32 ret = 0;
 	u8 *cksum_key;
-	struct crypto_skcipher *cipher, *aux_cipher;
+	struct crypto_sync_skcipher *cipher, *aux_cipher;
 	struct xdr_netobj our_hmac_obj;
 	u8 our_hmac[GSS_KRB5_MAX_CKSUM_LEN];
 	u8 pkt_hmac[GSS_KRB5_MAX_CKSUM_LEN];
@@ -874,7 +876,7 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf,
 		cksum_key = kctx->initiator_integ;
 		usage = KG_USAGE_INITIATOR_SEAL;
 	}
-	blocksize = crypto_skcipher_blocksize(cipher);
+	blocksize = crypto_sync_skcipher_blocksize(cipher);
 
 
 	/* create a segment skipping the header and leaving out the checksum */
@@ -891,13 +893,13 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf,
 	memset(desc.iv, 0, sizeof(desc.iv));
 
 	if (cbcbytes) {
-		SKCIPHER_REQUEST_ON_STACK(req, aux_cipher);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, aux_cipher);
 
 		desc.fragno = 0;
 		desc.fraglen = 0;
 		desc.req = req;
 
-		skcipher_request_set_tfm(req, aux_cipher);
+		skcipher_request_set_sync_tfm(req, aux_cipher);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 
 		sg_init_table(desc.frags, 4);
@@ -946,7 +948,8 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf,
  * Set the key of the given cipher.
  */
 int
-krb5_rc4_setup_seq_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher,
+krb5_rc4_setup_seq_key(struct krb5_ctx *kctx,
+		       struct crypto_sync_skcipher *cipher,
 		       unsigned char *cksum)
 {
 	struct crypto_shash *hmac;
@@ -994,7 +997,7 @@ krb5_rc4_setup_seq_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher,
 	if (err)
 		goto out_err;
 
-	err = crypto_skcipher_setkey(cipher, Kseq, kctx->gk5e->keylength);
+	err = crypto_sync_skcipher_setkey(cipher, Kseq, kctx->gk5e->keylength);
 	if (err)
 		goto out_err;
 
@@ -1012,7 +1015,8 @@ krb5_rc4_setup_seq_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher,
  * Set the key of cipher kctx->enc.
  */
 int
-krb5_rc4_setup_enc_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher,
+krb5_rc4_setup_enc_key(struct krb5_ctx *kctx,
+		       struct crypto_sync_skcipher *cipher,
 		       s32 seqnum)
 {
 	struct crypto_shash *hmac;
@@ -1069,7 +1073,8 @@ krb5_rc4_setup_enc_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher,
 	if (err)
 		goto out_err;
 
-	err = crypto_skcipher_setkey(cipher, Kcrypt, kctx->gk5e->keylength);
+	err = crypto_sync_skcipher_setkey(cipher, Kcrypt,
+					  kctx->gk5e->keylength);
 	if (err)
 		goto out_err;
 
diff --git a/net/sunrpc/auth_gss/gss_krb5_keys.c b/net/sunrpc/auth_gss/gss_krb5_keys.c
index f7fe2d2b851f..550fdf18d3b3 100644
--- a/net/sunrpc/auth_gss/gss_krb5_keys.c
+++ b/net/sunrpc/auth_gss/gss_krb5_keys.c
@@ -147,7 +147,7 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e,
 	size_t blocksize, keybytes, keylength, n;
 	unsigned char *inblockdata, *outblockdata, *rawkey;
 	struct xdr_netobj inblock, outblock;
-	struct crypto_skcipher *cipher;
+	struct crypto_sync_skcipher *cipher;
 	u32 ret = EINVAL;
 
 	blocksize = gk5e->blocksize;
@@ -157,11 +157,10 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e,
 	if ((inkey->len != keylength) || (outkey->len != keylength))
 		goto err_return;
 
-	cipher = crypto_alloc_skcipher(gk5e->encrypt_name, 0,
-				       CRYPTO_ALG_ASYNC);
+	cipher = crypto_alloc_sync_skcipher(gk5e->encrypt_name, 0, 0);
 	if (IS_ERR(cipher))
 		goto err_return;
-	if (crypto_skcipher_setkey(cipher, inkey->data, inkey->len))
+	if (crypto_sync_skcipher_setkey(cipher, inkey->data, inkey->len))
 		goto err_return;
 
 	/* allocate and set up buffers */
@@ -238,7 +237,7 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e,
 	memset(inblockdata, 0, blocksize);
 	kfree(inblockdata);
 err_free_cipher:
-	crypto_free_skcipher(cipher);
+	crypto_free_sync_skcipher(cipher);
 err_return:
 	return ret;
 }
diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c
index 7bb2514aadd9..7f0424dfa8f6 100644
--- a/net/sunrpc/auth_gss/gss_krb5_mech.c
+++ b/net/sunrpc/auth_gss/gss_krb5_mech.c
@@ -218,7 +218,7 @@ simple_get_netobj(const void *p, const void *end, struct xdr_netobj *res)
 
 static inline const void *
 get_key(const void *p, const void *end,
-	struct krb5_ctx *ctx, struct crypto_skcipher **res)
+	struct krb5_ctx *ctx, struct crypto_sync_skcipher **res)
 {
 	struct xdr_netobj	key;
 	int			alg;
@@ -246,15 +246,14 @@ get_key(const void *p, const void *end,
 	if (IS_ERR(p))
 		goto out_err;
 
-	*res = crypto_alloc_skcipher(ctx->gk5e->encrypt_name, 0,
-							CRYPTO_ALG_ASYNC);
+	*res = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
 	if (IS_ERR(*res)) {
 		printk(KERN_WARNING "gss_kerberos_mech: unable to initialize "
 			"crypto algorithm %s\n", ctx->gk5e->encrypt_name);
 		*res = NULL;
 		goto out_err_free_key;
 	}
-	if (crypto_skcipher_setkey(*res, key.data, key.len)) {
+	if (crypto_sync_skcipher_setkey(*res, key.data, key.len)) {
 		printk(KERN_WARNING "gss_kerberos_mech: error setting key for "
 			"crypto algorithm %s\n", ctx->gk5e->encrypt_name);
 		goto out_err_free_tfm;
@@ -264,7 +263,7 @@ get_key(const void *p, const void *end,
 	return p;
 
 out_err_free_tfm:
-	crypto_free_skcipher(*res);
+	crypto_free_sync_skcipher(*res);
 out_err_free_key:
 	kfree(key.data);
 	p = ERR_PTR(-EINVAL);
@@ -336,30 +335,30 @@ gss_import_v1_context(const void *p, const void *end, struct krb5_ctx *ctx)
 	return 0;
 
 out_err_free_key2:
-	crypto_free_skcipher(ctx->seq);
+	crypto_free_sync_skcipher(ctx->seq);
 out_err_free_key1:
-	crypto_free_skcipher(ctx->enc);
+	crypto_free_sync_skcipher(ctx->enc);
 out_err_free_mech:
 	kfree(ctx->mech_used.data);
 out_err:
 	return PTR_ERR(p);
 }
 
-static struct crypto_skcipher *
+static struct crypto_sync_skcipher *
 context_v2_alloc_cipher(struct krb5_ctx *ctx, const char *cname, u8 *key)
 {
-	struct crypto_skcipher *cp;
+	struct crypto_sync_skcipher *cp;
 
-	cp = crypto_alloc_skcipher(cname, 0, CRYPTO_ALG_ASYNC);
+	cp = crypto_alloc_sync_skcipher(cname, 0, 0);
 	if (IS_ERR(cp)) {
 		dprintk("gss_kerberos_mech: unable to initialize "
 			"crypto algorithm %s\n", cname);
 		return NULL;
 	}
-	if (crypto_skcipher_setkey(cp, key, ctx->gk5e->keylength)) {
+	if (crypto_sync_skcipher_setkey(cp, key, ctx->gk5e->keylength)) {
 		dprintk("gss_kerberos_mech: error setting key for "
 			"crypto algorithm %s\n", cname);
-		crypto_free_skcipher(cp);
+		crypto_free_sync_skcipher(cp);
 		return NULL;
 	}
 	return cp;
@@ -413,9 +412,9 @@ context_derive_keys_des3(struct krb5_ctx *ctx, gfp_t gfp_mask)
 	return 0;
 
 out_free_enc:
-	crypto_free_skcipher(ctx->enc);
+	crypto_free_sync_skcipher(ctx->enc);
 out_free_seq:
-	crypto_free_skcipher(ctx->seq);
+	crypto_free_sync_skcipher(ctx->seq);
 out_err:
 	return -EINVAL;
 }
@@ -469,17 +468,15 @@ context_derive_keys_rc4(struct krb5_ctx *ctx)
 	/*
 	 * allocate hash, and skciphers for data and seqnum encryption
 	 */
-	ctx->enc = crypto_alloc_skcipher(ctx->gk5e->encrypt_name, 0,
-					 CRYPTO_ALG_ASYNC);
+	ctx->enc = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
 	if (IS_ERR(ctx->enc)) {
 		err = PTR_ERR(ctx->enc);
 		goto out_err_free_hmac;
 	}
 
-	ctx->seq = crypto_alloc_skcipher(ctx->gk5e->encrypt_name, 0,
-					 CRYPTO_ALG_ASYNC);
+	ctx->seq = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0);
 	if (IS_ERR(ctx->seq)) {
-		crypto_free_skcipher(ctx->enc);
+		crypto_free_sync_skcipher(ctx->enc);
 		err = PTR_ERR(ctx->seq);
 		goto out_err_free_hmac;
 	}
@@ -591,7 +588,7 @@ context_derive_keys_new(struct krb5_ctx *ctx, gfp_t gfp_mask)
 			context_v2_alloc_cipher(ctx, "cbc(aes)",
 						ctx->acceptor_seal);
 		if (ctx->acceptor_enc_aux == NULL) {
-			crypto_free_skcipher(ctx->initiator_enc_aux);
+			crypto_free_sync_skcipher(ctx->initiator_enc_aux);
 			goto out_free_acceptor_enc;
 		}
 	}
@@ -599,9 +596,9 @@ context_derive_keys_new(struct krb5_ctx *ctx, gfp_t gfp_mask)
 	return 0;
 
 out_free_acceptor_enc:
-	crypto_free_skcipher(ctx->acceptor_enc);
+	crypto_free_sync_skcipher(ctx->acceptor_enc);
 out_free_initiator_enc:
-	crypto_free_skcipher(ctx->initiator_enc);
+	crypto_free_sync_skcipher(ctx->initiator_enc);
 out_err:
 	return -EINVAL;
 }
@@ -713,12 +710,12 @@ static void
 gss_delete_sec_context_kerberos(void *internal_ctx) {
 	struct krb5_ctx *kctx = internal_ctx;
 
-	crypto_free_skcipher(kctx->seq);
-	crypto_free_skcipher(kctx->enc);
-	crypto_free_skcipher(kctx->acceptor_enc);
-	crypto_free_skcipher(kctx->initiator_enc);
-	crypto_free_skcipher(kctx->acceptor_enc_aux);
-	crypto_free_skcipher(kctx->initiator_enc_aux);
+	crypto_free_sync_skcipher(kctx->seq);
+	crypto_free_sync_skcipher(kctx->enc);
+	crypto_free_sync_skcipher(kctx->acceptor_enc);
+	crypto_free_sync_skcipher(kctx->initiator_enc);
+	crypto_free_sync_skcipher(kctx->acceptor_enc_aux);
+	crypto_free_sync_skcipher(kctx->initiator_enc_aux);
 	kfree(kctx->mech_used.data);
 	kfree(kctx);
 }
diff --git a/net/sunrpc/auth_gss/gss_krb5_seqnum.c b/net/sunrpc/auth_gss/gss_krb5_seqnum.c
index c8b9082f4a9d..fb6656295204 100644
--- a/net/sunrpc/auth_gss/gss_krb5_seqnum.c
+++ b/net/sunrpc/auth_gss/gss_krb5_seqnum.c
@@ -43,13 +43,12 @@ static s32
 krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum,
 		      unsigned char *cksum, unsigned char *buf)
 {
-	struct crypto_skcipher *cipher;
+	struct crypto_sync_skcipher *cipher;
 	unsigned char plain[8];
 	s32 code;
 
 	dprintk("RPC:       %s:\n", __func__);
-	cipher = crypto_alloc_skcipher(kctx->gk5e->encrypt_name, 0,
-				       CRYPTO_ALG_ASYNC);
+	cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0);
 	if (IS_ERR(cipher))
 		return PTR_ERR(cipher);
 
@@ -68,12 +67,12 @@ krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum,
 
 	code = krb5_encrypt(cipher, cksum, plain, buf, 8);
 out:
-	crypto_free_skcipher(cipher);
+	crypto_free_sync_skcipher(cipher);
 	return code;
 }
 s32
 krb5_make_seq_num(struct krb5_ctx *kctx,
-		struct crypto_skcipher *key,
+		struct crypto_sync_skcipher *key,
 		int direction,
 		u32 seqnum,
 		unsigned char *cksum, unsigned char *buf)
@@ -101,13 +100,12 @@ static s32
 krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum,
 		     unsigned char *buf, int *direction, s32 *seqnum)
 {
-	struct crypto_skcipher *cipher;
+	struct crypto_sync_skcipher *cipher;
 	unsigned char plain[8];
 	s32 code;
 
 	dprintk("RPC:       %s:\n", __func__);
-	cipher = crypto_alloc_skcipher(kctx->gk5e->encrypt_name, 0,
-				       CRYPTO_ALG_ASYNC);
+	cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0);
 	if (IS_ERR(cipher))
 		return PTR_ERR(cipher);
 
@@ -130,7 +128,7 @@ krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum,
 	*seqnum = ((plain[0] << 24) | (plain[1] << 16) |
 					(plain[2] << 8) | (plain[3]));
 out:
-	crypto_free_skcipher(cipher);
+	crypto_free_sync_skcipher(cipher);
 	return code;
 }
 
@@ -142,7 +140,7 @@ krb5_get_seq_num(struct krb5_ctx *kctx,
 {
 	s32 code;
 	unsigned char plain[8];
-	struct crypto_skcipher *key = kctx->seq;
+	struct crypto_sync_skcipher *key = kctx->seq;
 
 	dprintk("RPC:       krb5_get_seq_num:\n");
 
diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 39a2e672900b..3d975a4013d2 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -174,7 +174,7 @@ gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
 
 	now = get_seconds();
 
-	blocksize = crypto_skcipher_blocksize(kctx->enc);
+	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
 	gss_krb5_add_padding(buf, offset, blocksize);
 	BUG_ON((buf->len - offset) % blocksize);
 	plainlen = conflen + buf->len - offset;
@@ -239,10 +239,10 @@ gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
 		return GSS_S_FAILURE;
 
 	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
-		struct crypto_skcipher *cipher;
+		struct crypto_sync_skcipher *cipher;
 		int err;
-		cipher = crypto_alloc_skcipher(kctx->gk5e->encrypt_name, 0,
-					       CRYPTO_ALG_ASYNC);
+		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
+						    0, 0);
 		if (IS_ERR(cipher))
 			return GSS_S_FAILURE;
 
@@ -250,7 +250,7 @@ gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
 
 		err = gss_encrypt_xdr_buf(cipher, buf,
 					  offset + headlen - conflen, pages);
-		crypto_free_skcipher(cipher);
+		crypto_free_sync_skcipher(cipher);
 		if (err)
 			return GSS_S_FAILURE;
 	} else {
@@ -327,18 +327,18 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
 		return GSS_S_BAD_SIG;
 
 	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
-		struct crypto_skcipher *cipher;
+		struct crypto_sync_skcipher *cipher;
 		int err;
 
-		cipher = crypto_alloc_skcipher(kctx->gk5e->encrypt_name, 0,
-					       CRYPTO_ALG_ASYNC);
+		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
+						    0, 0);
 		if (IS_ERR(cipher))
 			return GSS_S_FAILURE;
 
 		krb5_rc4_setup_enc_key(kctx, cipher, seqnum);
 
 		err = gss_decrypt_xdr_buf(cipher, buf, crypt_offset);
-		crypto_free_skcipher(cipher);
+		crypto_free_sync_skcipher(cipher);
 		if (err)
 			return GSS_S_DEFECTIVE_TOKEN;
 	} else {
@@ -371,7 +371,7 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
 	/* Copy the data back to the right position.  XXX: Would probably be
 	 * better to copy and encrypt at the same time. */
 
-	blocksize = crypto_skcipher_blocksize(kctx->enc);
+	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
 	data_start = ptr + (GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength) +
 					conflen;
 	orig_start = buf->head[0].iov_base + offset;
-- 
2.17.1


* [PATCH crypto-next 03/23] lib80211: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 01/23] crypto: skcipher - Introduce crypto_sync_skcipher Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 02/23] gss_krb5: Remove VLA usage of skcipher Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19 20:37   ` Johannes Berg
  2018-09-19  2:10 ` [PATCH crypto-next 04/23] mac802154: " Kees Cook
                   ` (21 subsequent siblings)
  24 siblings, 1 reply; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Johannes Berg, linux-wireless, Ard Biesheuvel,
	Eric Biggers, linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: linux-wireless@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/staging/rtl8192e/rtllib_crypt_tkip.c  | 34 +++++++++----------
 drivers/staging/rtl8192e/rtllib_crypt_wep.c   | 28 +++++++--------
 .../rtl8192u/ieee80211/ieee80211_crypt_tkip.c | 34 +++++++++----------
 .../rtl8192u/ieee80211/ieee80211_crypt_wep.c  | 26 +++++++-------
 net/wireless/lib80211_crypt_tkip.c            | 34 +++++++++----------
 net/wireless/lib80211_crypt_wep.c             | 28 +++++++--------
 6 files changed, 89 insertions(+), 95 deletions(-)

diff --git a/drivers/staging/rtl8192e/rtllib_crypt_tkip.c b/drivers/staging/rtl8192e/rtllib_crypt_tkip.c
index 9f18be14dda6..f38f1f74fcd6 100644
--- a/drivers/staging/rtl8192e/rtllib_crypt_tkip.c
+++ b/drivers/staging/rtl8192e/rtllib_crypt_tkip.c
@@ -49,9 +49,9 @@ struct rtllib_tkip_data {
 	u32 dot11RSNAStatsTKIPLocalMICFailures;
 
 	int key_idx;
-	struct crypto_skcipher *rx_tfm_arc4;
+	struct crypto_sync_skcipher *rx_tfm_arc4;
 	struct crypto_shash *rx_tfm_michael;
-	struct crypto_skcipher *tx_tfm_arc4;
+	struct crypto_sync_skcipher *tx_tfm_arc4;
 	struct crypto_shash *tx_tfm_michael;
 	/* scratch buffers for virt_to_page() (crypto API) */
 	u8 rx_hdr[16];
@@ -66,8 +66,7 @@ static void *rtllib_tkip_init(int key_idx)
 	if (priv == NULL)
 		goto fail;
 	priv->key_idx = key_idx;
-	priv->tx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0,
-						  CRYPTO_ALG_ASYNC);
+	priv->tx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->tx_tfm_arc4)) {
 		pr_debug("Could not allocate crypto API arc4\n");
 		priv->tx_tfm_arc4 = NULL;
@@ -81,8 +80,7 @@ static void *rtllib_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->rx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0,
-						  CRYPTO_ALG_ASYNC);
+	priv->rx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->rx_tfm_arc4)) {
 		pr_debug("Could not allocate crypto API arc4\n");
 		priv->rx_tfm_arc4 = NULL;
@@ -100,9 +98,9 @@ static void *rtllib_tkip_init(int key_idx)
 fail:
 	if (priv) {
 		crypto_free_shash(priv->tx_tfm_michael);
-		crypto_free_skcipher(priv->tx_tfm_arc4);
+		crypto_free_sync_skcipher(priv->tx_tfm_arc4);
 		crypto_free_shash(priv->rx_tfm_michael);
-		crypto_free_skcipher(priv->rx_tfm_arc4);
+		crypto_free_sync_skcipher(priv->rx_tfm_arc4);
 		kfree(priv);
 	}
 
@@ -116,9 +114,9 @@ static void rtllib_tkip_deinit(void *priv)
 
 	if (_priv) {
 		crypto_free_shash(_priv->tx_tfm_michael);
-		crypto_free_skcipher(_priv->tx_tfm_arc4);
+		crypto_free_sync_skcipher(_priv->tx_tfm_arc4);
 		crypto_free_shash(_priv->rx_tfm_michael);
-		crypto_free_skcipher(_priv->rx_tfm_arc4);
+		crypto_free_sync_skcipher(_priv->rx_tfm_arc4);
 	}
 	kfree(priv);
 }
@@ -337,7 +335,7 @@ static int rtllib_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	*pos++ = (tkey->tx_iv32 >> 24) & 0xff;
 
 	if (!tcb_desc->bHwSec) {
-		SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4);
 
 		icv = skb_put(skb, 4);
 		crc = ~crc32_le(~0, pos, len);
@@ -349,8 +347,8 @@ static int rtllib_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 		sg_init_one(&sg, pos, len+4);
 
 
-		crypto_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16);
-		skcipher_request_set_tfm(req, tkey->tx_tfm_arc4);
+		crypto_sync_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16);
+		skcipher_request_set_sync_tfm(req, tkey->tx_tfm_arc4);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 		skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL);
 		ret = crypto_skcipher_encrypt(req);
@@ -420,7 +418,7 @@ static int rtllib_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	pos += 8;
 
 	if (!tcb_desc->bHwSec || (skb->cb[0] == 1)) {
-		SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4);
 
 		if ((iv32 < tkey->rx_iv32 ||
 		    (iv32 == tkey->rx_iv32 && iv16 <= tkey->rx_iv16)) &&
@@ -447,8 +445,8 @@ static int rtllib_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 
 		sg_init_one(&sg, pos, plen+4);
 
-		crypto_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16);
-		skcipher_request_set_tfm(req, tkey->rx_tfm_arc4);
+		crypto_sync_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16);
+		skcipher_request_set_sync_tfm(req, tkey->rx_tfm_arc4);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 		skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL);
 		err = crypto_skcipher_decrypt(req);
@@ -664,9 +662,9 @@ static int rtllib_tkip_set_key(void *key, int len, u8 *seq, void *priv)
 	struct rtllib_tkip_data *tkey = priv;
 	int keyidx;
 	struct crypto_shash *tfm = tkey->tx_tfm_michael;
-	struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4;
+	struct crypto_sync_skcipher *tfm2 = tkey->tx_tfm_arc4;
 	struct crypto_shash *tfm3 = tkey->rx_tfm_michael;
-	struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4;
+	struct crypto_sync_skcipher *tfm4 = tkey->rx_tfm_arc4;
 
 	keyidx = tkey->key_idx;
 	memset(tkey, 0, sizeof(*tkey));
diff --git a/drivers/staging/rtl8192e/rtllib_crypt_wep.c b/drivers/staging/rtl8192e/rtllib_crypt_wep.c
index b3343a5d0fd6..d11ec39171d5 100644
--- a/drivers/staging/rtl8192e/rtllib_crypt_wep.c
+++ b/drivers/staging/rtl8192e/rtllib_crypt_wep.c
@@ -27,8 +27,8 @@ struct prism2_wep_data {
 	u8 key[WEP_KEY_LEN + 1];
 	u8 key_len;
 	u8 key_idx;
-	struct crypto_skcipher *tx_tfm;
-	struct crypto_skcipher *rx_tfm;
+	struct crypto_sync_skcipher *tx_tfm;
+	struct crypto_sync_skcipher *rx_tfm;
 };
 
 
@@ -41,13 +41,13 @@ static void *prism2_wep_init(int keyidx)
 		goto fail;
 	priv->key_idx = keyidx;
 
-	priv->tx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
+	priv->tx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->tx_tfm)) {
 		pr_debug("rtllib_crypt_wep: could not allocate crypto API arc4\n");
 		priv->tx_tfm = NULL;
 		goto fail;
 	}
-	priv->rx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
+	priv->rx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->rx_tfm)) {
 		pr_debug("rtllib_crypt_wep: could not allocate crypto API arc4\n");
 		priv->rx_tfm = NULL;
@@ -61,8 +61,8 @@ static void *prism2_wep_init(int keyidx)
 
 fail:
 	if (priv) {
-		crypto_free_skcipher(priv->tx_tfm);
-		crypto_free_skcipher(priv->rx_tfm);
+		crypto_free_sync_skcipher(priv->tx_tfm);
+		crypto_free_sync_skcipher(priv->rx_tfm);
 		kfree(priv);
 	}
 	return NULL;
@@ -74,8 +74,8 @@ static void prism2_wep_deinit(void *priv)
 	struct prism2_wep_data *_priv = priv;
 
 	if (_priv) {
-		crypto_free_skcipher(_priv->tx_tfm);
-		crypto_free_skcipher(_priv->rx_tfm);
+		crypto_free_sync_skcipher(_priv->tx_tfm);
+		crypto_free_sync_skcipher(_priv->rx_tfm);
 	}
 	kfree(priv);
 }
@@ -135,7 +135,7 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	memcpy(key + 3, wep->key, wep->key_len);
 
 	if (!tcb_desc->bHwSec) {
-		SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm);
 
 		/* Append little-endian CRC32 and encrypt it to produce ICV */
 		crc = ~crc32_le(~0, pos, len);
@@ -146,8 +146,8 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 		icv[3] = crc >> 24;
 
 		sg_init_one(&sg, pos, len+4);
-		crypto_skcipher_setkey(wep->tx_tfm, key, klen);
-		skcipher_request_set_tfm(req, wep->tx_tfm);
+		crypto_sync_skcipher_setkey(wep->tx_tfm, key, klen);
+		skcipher_request_set_sync_tfm(req, wep->tx_tfm);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 		skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL);
 		err = crypto_skcipher_encrypt(req);
@@ -199,11 +199,11 @@ static int prism2_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	plen = skb->len - hdr_len - 8;
 
 	if (!tcb_desc->bHwSec) {
-		SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm);
 
 		sg_init_one(&sg, pos, plen+4);
-		crypto_skcipher_setkey(wep->rx_tfm, key, klen);
-		skcipher_request_set_tfm(req, wep->rx_tfm);
+		crypto_sync_skcipher_setkey(wep->rx_tfm, key, klen);
+		skcipher_request_set_sync_tfm(req, wep->rx_tfm);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 		skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL);
 		err = crypto_skcipher_decrypt(req);
diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c
index 1088fa0aee0e..829fa4bd253c 100644
--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c
+++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c
@@ -53,9 +53,9 @@ struct ieee80211_tkip_data {
 
 	int key_idx;
 
-	struct crypto_skcipher *rx_tfm_arc4;
+	struct crypto_sync_skcipher *rx_tfm_arc4;
 	struct crypto_shash *rx_tfm_michael;
-	struct crypto_skcipher *tx_tfm_arc4;
+	struct crypto_sync_skcipher *tx_tfm_arc4;
 	struct crypto_shash *tx_tfm_michael;
 
 	/* scratch buffers for virt_to_page() (crypto API) */
@@ -71,8 +71,7 @@ static void *ieee80211_tkip_init(int key_idx)
 		goto fail;
 	priv->key_idx = key_idx;
 
-	priv->tx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0,
-			CRYPTO_ALG_ASYNC);
+	priv->tx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->tx_tfm_arc4)) {
 		printk(KERN_DEBUG "ieee80211_crypt_tkip: could not allocate "
 				"crypto API arc4\n");
@@ -88,8 +87,7 @@ static void *ieee80211_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->rx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0,
-			CRYPTO_ALG_ASYNC);
+	priv->rx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->rx_tfm_arc4)) {
 		printk(KERN_DEBUG "ieee80211_crypt_tkip: could not allocate "
 				"crypto API arc4\n");
@@ -110,9 +108,9 @@ static void *ieee80211_tkip_init(int key_idx)
 fail:
 	if (priv) {
 		crypto_free_shash(priv->tx_tfm_michael);
-		crypto_free_skcipher(priv->tx_tfm_arc4);
+		crypto_free_sync_skcipher(priv->tx_tfm_arc4);
 		crypto_free_shash(priv->rx_tfm_michael);
-		crypto_free_skcipher(priv->rx_tfm_arc4);
+		crypto_free_sync_skcipher(priv->rx_tfm_arc4);
 		kfree(priv);
 	}
 
@@ -126,9 +124,9 @@ static void ieee80211_tkip_deinit(void *priv)
 
 	if (_priv) {
 		crypto_free_shash(_priv->tx_tfm_michael);
-		crypto_free_skcipher(_priv->tx_tfm_arc4);
+		crypto_free_sync_skcipher(_priv->tx_tfm_arc4);
 		crypto_free_shash(_priv->rx_tfm_michael);
-		crypto_free_skcipher(_priv->rx_tfm_arc4);
+		crypto_free_sync_skcipher(_priv->rx_tfm_arc4);
 	}
 	kfree(priv);
 }
@@ -340,7 +338,7 @@ static int ieee80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	*pos++ = (tkey->tx_iv32 >> 24) & 0xff;
 
 	if (!tcb_desc->bHwSec) {
-		SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4);
 
 		icv = skb_put(skb, 4);
 		crc = ~crc32_le(~0, pos, len);
@@ -348,9 +346,9 @@ static int ieee80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 		icv[1] = crc >> 8;
 		icv[2] = crc >> 16;
 		icv[3] = crc >> 24;
-		crypto_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16);
+		crypto_sync_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16);
 		sg_init_one(&sg, pos, len+4);
-		skcipher_request_set_tfm(req, tkey->tx_tfm_arc4);
+		skcipher_request_set_sync_tfm(req, tkey->tx_tfm_arc4);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 		skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL);
 		ret = crypto_skcipher_encrypt(req);
@@ -418,7 +416,7 @@ static int ieee80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	pos += 8;
 
 	if (!tcb_desc->bHwSec) {
-		SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4);
 
 		if (iv32 < tkey->rx_iv32 ||
 		(iv32 == tkey->rx_iv32 && iv16 <= tkey->rx_iv16)) {
@@ -440,10 +438,10 @@ static int ieee80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 
 		plen = skb->len - hdr_len - 12;
 
-		crypto_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16);
+		crypto_sync_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16);
 		sg_init_one(&sg, pos, plen+4);
 
-		skcipher_request_set_tfm(req, tkey->rx_tfm_arc4);
+		skcipher_request_set_sync_tfm(req, tkey->rx_tfm_arc4);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 		skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL);
 
@@ -663,9 +661,9 @@ static int ieee80211_tkip_set_key(void *key, int len, u8 *seq, void *priv)
 	struct ieee80211_tkip_data *tkey = priv;
 	int keyidx;
 	struct crypto_shash *tfm = tkey->tx_tfm_michael;
-	struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4;
+	struct crypto_sync_skcipher *tfm2 = tkey->tx_tfm_arc4;
 	struct crypto_shash *tfm3 = tkey->rx_tfm_michael;
-	struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4;
+	struct crypto_sync_skcipher *tfm4 = tkey->rx_tfm_arc4;
 
 	keyidx = tkey->key_idx;
 	memset(tkey, 0, sizeof(*tkey));
diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_wep.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_wep.c
index b9f86be9e52b..d4a1bf0caa7a 100644
--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_wep.c
+++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_wep.c
@@ -32,8 +32,8 @@ struct prism2_wep_data {
 	u8 key[WEP_KEY_LEN + 1];
 	u8 key_len;
 	u8 key_idx;
-	struct crypto_skcipher *tx_tfm;
-	struct crypto_skcipher *rx_tfm;
+	struct crypto_sync_skcipher *tx_tfm;
+	struct crypto_sync_skcipher *rx_tfm;
 };
 
 
@@ -46,10 +46,10 @@ static void *prism2_wep_init(int keyidx)
 		return NULL;
 	priv->key_idx = keyidx;
 
-	priv->tx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
+	priv->tx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->tx_tfm))
 		goto free_priv;
-	priv->rx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
+	priv->rx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->rx_tfm))
 		goto free_tx;
 
@@ -58,7 +58,7 @@ static void *prism2_wep_init(int keyidx)
 
 	return priv;
 free_tx:
-	crypto_free_skcipher(priv->tx_tfm);
+	crypto_free_sync_skcipher(priv->tx_tfm);
 free_priv:
 	kfree(priv);
 	return NULL;
@@ -70,8 +70,8 @@ static void prism2_wep_deinit(void *priv)
 	struct prism2_wep_data *_priv = priv;
 
 	if (_priv) {
-		crypto_free_skcipher(_priv->tx_tfm);
-		crypto_free_skcipher(_priv->rx_tfm);
+		crypto_free_sync_skcipher(_priv->tx_tfm);
+		crypto_free_sync_skcipher(_priv->rx_tfm);
 	}
 	kfree(priv);
 }
@@ -128,7 +128,7 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	memcpy(key + 3, wep->key, wep->key_len);
 
 	if (!tcb_desc->bHwSec) {
-		SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm);
 
 		/* Append little-endian CRC32 and encrypt it to produce ICV */
 		crc = ~crc32_le(~0, pos, len);
@@ -138,10 +138,10 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 		icv[2] = crc >> 16;
 		icv[3] = crc >> 24;
 
-		crypto_skcipher_setkey(wep->tx_tfm, key, klen);
+		crypto_sync_skcipher_setkey(wep->tx_tfm, key, klen);
 		sg_init_one(&sg, pos, len+4);
 
-		skcipher_request_set_tfm(req, wep->tx_tfm);
+		skcipher_request_set_sync_tfm(req, wep->tx_tfm);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 		skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL);
 
@@ -193,12 +193,12 @@ static int prism2_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	plen = skb->len - hdr_len - 8;
 
 	if (!tcb_desc->bHwSec) {
-		SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm);
 
-		crypto_skcipher_setkey(wep->rx_tfm, key, klen);
+		crypto_sync_skcipher_setkey(wep->rx_tfm, key, klen);
 		sg_init_one(&sg, pos, plen+4);
 
-		skcipher_request_set_tfm(req, wep->rx_tfm);
+		skcipher_request_set_sync_tfm(req, wep->rx_tfm);
 		skcipher_request_set_callback(req, 0, NULL, NULL);
 		skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL);
 
diff --git a/net/wireless/lib80211_crypt_tkip.c b/net/wireless/lib80211_crypt_tkip.c
index e6bce1f130c9..346e19cbdf59 100644
--- a/net/wireless/lib80211_crypt_tkip.c
+++ b/net/wireless/lib80211_crypt_tkip.c
@@ -64,9 +64,9 @@ struct lib80211_tkip_data {
 
 	int key_idx;
 
-	struct crypto_skcipher *rx_tfm_arc4;
+	struct crypto_sync_skcipher *rx_tfm_arc4;
 	struct crypto_shash *rx_tfm_michael;
-	struct crypto_skcipher *tx_tfm_arc4;
+	struct crypto_sync_skcipher *tx_tfm_arc4;
 	struct crypto_shash *tx_tfm_michael;
 
 	/* scratch buffers for virt_to_page() (crypto API) */
@@ -99,8 +99,7 @@ static void *lib80211_tkip_init(int key_idx)
 
 	priv->key_idx = key_idx;
 
-	priv->tx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0,
-						  CRYPTO_ALG_ASYNC);
+	priv->tx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->tx_tfm_arc4)) {
 		priv->tx_tfm_arc4 = NULL;
 		goto fail;
@@ -112,8 +111,7 @@ static void *lib80211_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->rx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0,
-						  CRYPTO_ALG_ASYNC);
+	priv->rx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->rx_tfm_arc4)) {
 		priv->rx_tfm_arc4 = NULL;
 		goto fail;
@@ -130,9 +128,9 @@ static void *lib80211_tkip_init(int key_idx)
       fail:
 	if (priv) {
 		crypto_free_shash(priv->tx_tfm_michael);
-		crypto_free_skcipher(priv->tx_tfm_arc4);
+		crypto_free_sync_skcipher(priv->tx_tfm_arc4);
 		crypto_free_shash(priv->rx_tfm_michael);
-		crypto_free_skcipher(priv->rx_tfm_arc4);
+		crypto_free_sync_skcipher(priv->rx_tfm_arc4);
 		kfree(priv);
 	}
 
@@ -144,9 +142,9 @@ static void lib80211_tkip_deinit(void *priv)
 	struct lib80211_tkip_data *_priv = priv;
 	if (_priv) {
 		crypto_free_shash(_priv->tx_tfm_michael);
-		crypto_free_skcipher(_priv->tx_tfm_arc4);
+		crypto_free_sync_skcipher(_priv->tx_tfm_arc4);
 		crypto_free_shash(_priv->rx_tfm_michael);
-		crypto_free_skcipher(_priv->rx_tfm_arc4);
+		crypto_free_sync_skcipher(_priv->rx_tfm_arc4);
 	}
 	kfree(priv);
 }
@@ -344,7 +342,7 @@ static int lib80211_tkip_hdr(struct sk_buff *skb, int hdr_len,
 static int lib80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 {
 	struct lib80211_tkip_data *tkey = priv;
-	SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4);
 	int len;
 	u8 rc4key[16], *pos, *icv;
 	u32 crc;
@@ -374,9 +372,9 @@ static int lib80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	icv[2] = crc >> 16;
 	icv[3] = crc >> 24;
 
-	crypto_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16);
+	crypto_sync_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16);
 	sg_init_one(&sg, pos, len + 4);
-	skcipher_request_set_tfm(req, tkey->tx_tfm_arc4);
+	skcipher_request_set_sync_tfm(req, tkey->tx_tfm_arc4);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL);
 	err = crypto_skcipher_encrypt(req);
@@ -400,7 +398,7 @@ static inline int tkip_replay_check(u32 iv32_n, u16 iv16_n,
 static int lib80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 {
 	struct lib80211_tkip_data *tkey = priv;
-	SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4);
 	u8 rc4key[16];
 	u8 keyidx, *pos;
 	u32 iv32;
@@ -463,9 +461,9 @@ static int lib80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 
 	plen = skb->len - hdr_len - 12;
 
-	crypto_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16);
+	crypto_sync_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16);
 	sg_init_one(&sg, pos, plen + 4);
-	skcipher_request_set_tfm(req, tkey->rx_tfm_arc4);
+	skcipher_request_set_sync_tfm(req, tkey->rx_tfm_arc4);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL);
 	err = crypto_skcipher_decrypt(req);
@@ -660,9 +658,9 @@ static int lib80211_tkip_set_key(void *key, int len, u8 * seq, void *priv)
 	struct lib80211_tkip_data *tkey = priv;
 	int keyidx;
 	struct crypto_shash *tfm = tkey->tx_tfm_michael;
-	struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4;
+	struct crypto_sync_skcipher *tfm2 = tkey->tx_tfm_arc4;
 	struct crypto_shash *tfm3 = tkey->rx_tfm_michael;
-	struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4;
+	struct crypto_sync_skcipher *tfm4 = tkey->rx_tfm_arc4;
 
 	keyidx = tkey->key_idx;
 	memset(tkey, 0, sizeof(*tkey));
diff --git a/net/wireless/lib80211_crypt_wep.c b/net/wireless/lib80211_crypt_wep.c
index d05f58b0fd04..bdadee497f57 100644
--- a/net/wireless/lib80211_crypt_wep.c
+++ b/net/wireless/lib80211_crypt_wep.c
@@ -35,8 +35,8 @@ struct lib80211_wep_data {
 	u8 key[WEP_KEY_LEN + 1];
 	u8 key_len;
 	u8 key_idx;
-	struct crypto_skcipher *tx_tfm;
-	struct crypto_skcipher *rx_tfm;
+	struct crypto_sync_skcipher *tx_tfm;
+	struct crypto_sync_skcipher *rx_tfm;
 };
 
 static void *lib80211_wep_init(int keyidx)
@@ -48,13 +48,13 @@ static void *lib80211_wep_init(int keyidx)
 		goto fail;
 	priv->key_idx = keyidx;
 
-	priv->tx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
+	priv->tx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->tx_tfm)) {
 		priv->tx_tfm = NULL;
 		goto fail;
 	}
 
-	priv->rx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
+	priv->rx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(priv->rx_tfm)) {
 		priv->rx_tfm = NULL;
 		goto fail;
@@ -66,8 +66,8 @@ static void *lib80211_wep_init(int keyidx)
 
       fail:
 	if (priv) {
-		crypto_free_skcipher(priv->tx_tfm);
-		crypto_free_skcipher(priv->rx_tfm);
+		crypto_free_sync_skcipher(priv->tx_tfm);
+		crypto_free_sync_skcipher(priv->rx_tfm);
 		kfree(priv);
 	}
 	return NULL;
@@ -77,8 +77,8 @@ static void lib80211_wep_deinit(void *priv)
 {
 	struct lib80211_wep_data *_priv = priv;
 	if (_priv) {
-		crypto_free_skcipher(_priv->tx_tfm);
-		crypto_free_skcipher(_priv->rx_tfm);
+		crypto_free_sync_skcipher(_priv->tx_tfm);
+		crypto_free_sync_skcipher(_priv->rx_tfm);
 	}
 	kfree(priv);
 }
@@ -129,7 +129,7 @@ static int lib80211_wep_build_iv(struct sk_buff *skb, int hdr_len,
 static int lib80211_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 {
 	struct lib80211_wep_data *wep = priv;
-	SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm);
 	u32 crc, klen, len;
 	u8 *pos, *icv;
 	struct scatterlist sg;
@@ -162,9 +162,9 @@ static int lib80211_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	icv[2] = crc >> 16;
 	icv[3] = crc >> 24;
 
-	crypto_skcipher_setkey(wep->tx_tfm, key, klen);
+	crypto_sync_skcipher_setkey(wep->tx_tfm, key, klen);
 	sg_init_one(&sg, pos, len + 4);
-	skcipher_request_set_tfm(req, wep->tx_tfm);
+	skcipher_request_set_sync_tfm(req, wep->tx_tfm);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL);
 	err = crypto_skcipher_encrypt(req);
@@ -182,7 +182,7 @@ static int lib80211_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
 static int lib80211_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 {
 	struct lib80211_wep_data *wep = priv;
-	SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm);
 	u32 crc, klen, plen;
 	u8 key[WEP_KEY_LEN + 3];
 	u8 keyidx, *pos, icv[4];
@@ -208,9 +208,9 @@ static int lib80211_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	/* Apply RC4 to data and compute CRC32 over decrypted data */
 	plen = skb->len - hdr_len - 8;
 
-	crypto_skcipher_setkey(wep->rx_tfm, key, klen);
+	crypto_sync_skcipher_setkey(wep->rx_tfm, key, klen);
 	sg_init_one(&sg, pos, plen + 4);
-	skcipher_request_set_tfm(req, wep->rx_tfm);
+	skcipher_request_set_sync_tfm(req, wep->rx_tfm);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL);
 	err = crypto_skcipher_decrypt(req);
-- 
2.17.1



* [PATCH crypto-next 04/23] mac802154: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (2 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 03/23] lib80211: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 05/23] s390/crypto: " Kees Cook
                   ` (20 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Alexander Aring, Stefan Schmidt, linux-wpan,
	Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
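
For reference, a rough paraphrase of the two macros (the old one as it
existed before this series, the new one from patch 01/23; the new macro's
compile-time type check of the tfm argument is elided here for brevity):

/* Old: the array length depends on the tfm's runtime reqsize -- a VLA. */
#define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
	char __##name##_desc[sizeof(struct skcipher_request) + \
		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
	struct skcipher_request *name = (void *)__##name##_desc

/* New: a fixed worst-case reservation, only valid for sync tfms. */
#define MAX_SYNC_SKCIPHER_REQSIZE	384

#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, tfm) \
	char __##name##_desc[sizeof(struct skcipher_request) + \
		MAX_SYNC_SKCIPHER_REQSIZE] CRYPTO_MINALIGN_ATTR; \
	struct skcipher_request *name = (void *)__##name##_desc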

Cc: Alexander Aring <alex.aring@gmail.com>
Cc: Stefan Schmidt <stefan@datenfreihafen.org>
Cc: linux-wpan@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 net/mac802154/llsec.c | 16 ++++++++--------
 net/mac802154/llsec.h |  2 +-
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/net/mac802154/llsec.c b/net/mac802154/llsec.c
index 2fb703d70803..7e29f88dbf6a 100644
--- a/net/mac802154/llsec.c
+++ b/net/mac802154/llsec.c
@@ -146,18 +146,18 @@ llsec_key_alloc(const struct ieee802154_llsec_key *template)
 			goto err_tfm;
 	}
 
-	key->tfm0 = crypto_alloc_skcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC);
+	key->tfm0 = crypto_alloc_sync_skcipher("ctr(aes)", 0, 0);
 	if (IS_ERR(key->tfm0))
 		goto err_tfm;
 
-	if (crypto_skcipher_setkey(key->tfm0, template->key,
+	if (crypto_sync_skcipher_setkey(key->tfm0, template->key,
 				   IEEE802154_LLSEC_KEY_SIZE))
 		goto err_tfm0;
 
 	return key;
 
 err_tfm0:
-	crypto_free_skcipher(key->tfm0);
+	crypto_free_sync_skcipher(key->tfm0);
 err_tfm:
 	for (i = 0; i < ARRAY_SIZE(key->tfm); i++)
 		if (key->tfm[i])
@@ -177,7 +177,7 @@ static void llsec_key_release(struct kref *ref)
 	for (i = 0; i < ARRAY_SIZE(key->tfm); i++)
 		crypto_free_aead(key->tfm[i]);
 
-	crypto_free_skcipher(key->tfm0);
+	crypto_free_sync_skcipher(key->tfm0);
 	kzfree(key);
 }
 
@@ -622,7 +622,7 @@ llsec_do_encrypt_unauth(struct sk_buff *skb, const struct mac802154_llsec *sec,
 {
 	u8 iv[16];
 	struct scatterlist src;
-	SKCIPHER_REQUEST_ON_STACK(req, key->tfm0);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, key->tfm0);
 	int err, datalen;
 	unsigned char *data;
 
@@ -632,7 +632,7 @@ llsec_do_encrypt_unauth(struct sk_buff *skb, const struct mac802154_llsec *sec,
 	datalen = skb_tail_pointer(skb) - data;
 	sg_init_one(&src, data, datalen);
 
-	skcipher_request_set_tfm(req, key->tfm0);
+	skcipher_request_set_sync_tfm(req, key->tfm0);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &src, &src, datalen, iv);
 	err = crypto_skcipher_encrypt(req);
@@ -840,7 +840,7 @@ llsec_do_decrypt_unauth(struct sk_buff *skb, const struct mac802154_llsec *sec,
 	unsigned char *data;
 	int datalen;
 	struct scatterlist src;
-	SKCIPHER_REQUEST_ON_STACK(req, key->tfm0);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, key->tfm0);
 	int err;
 
 	llsec_geniv(iv, dev_addr, &hdr->sec);
@@ -849,7 +849,7 @@ llsec_do_decrypt_unauth(struct sk_buff *skb, const struct mac802154_llsec *sec,
 
 	sg_init_one(&src, data, datalen);
 
-	skcipher_request_set_tfm(req, key->tfm0);
+	skcipher_request_set_sync_tfm(req, key->tfm0);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &src, &src, datalen, iv);
 
diff --git a/net/mac802154/llsec.h b/net/mac802154/llsec.h
index 6f3b658e3279..8be46d74dc39 100644
--- a/net/mac802154/llsec.h
+++ b/net/mac802154/llsec.h
@@ -29,7 +29,7 @@ struct mac802154_llsec_key {
 
 	/* one tfm for each authsize (4/8/16) */
 	struct crypto_aead *tfm[3];
-	struct crypto_skcipher *tfm0;
+	struct crypto_sync_skcipher *tfm0;
 
 	struct kref ref;
 };
-- 
2.17.1



* [PATCH crypto-next 05/23] s390/crypto: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (3 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 04/23] mac802154: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 06/23] x86/fpu: " Kees Cook
                   ` (19 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Martin Schwidefsky, Heiko Carstens, linux-s390,
	Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/s390/crypto/aes_s390.c | 48 ++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
index c54cb26eb7f5..812d9498d97b 100644
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -44,7 +44,7 @@ struct s390_aes_ctx {
 	int key_len;
 	unsigned long fc;
 	union {
-		struct crypto_skcipher *blk;
+		struct crypto_sync_skcipher *blk;
 		struct crypto_cipher *cip;
 	} fallback;
 };
@@ -54,7 +54,7 @@ struct s390_xts_ctx {
 	u8 pcc_key[32];
 	int key_len;
 	unsigned long fc;
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 };
 
 struct gcm_sg_walk {
@@ -184,14 +184,15 @@ static int setkey_fallback_blk(struct crypto_tfm *tfm, const u8 *key,
 	struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
 	unsigned int ret;
 
-	crypto_skcipher_clear_flags(sctx->fallback.blk, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(sctx->fallback.blk, tfm->crt_flags &
+	crypto_sync_skcipher_clear_flags(sctx->fallback.blk,
+					 CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(sctx->fallback.blk, tfm->crt_flags &
 						      CRYPTO_TFM_REQ_MASK);
 
-	ret = crypto_skcipher_setkey(sctx->fallback.blk, key, len);
+	ret = crypto_sync_skcipher_setkey(sctx->fallback.blk, key, len);
 
 	tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
-	tfm->crt_flags |= crypto_skcipher_get_flags(sctx->fallback.blk) &
+	tfm->crt_flags |= crypto_sync_skcipher_get_flags(sctx->fallback.blk) &
 			  CRYPTO_TFM_RES_MASK;
 
 	return ret;
@@ -204,9 +205,9 @@ static int fallback_blk_dec(struct blkcipher_desc *desc,
 	unsigned int ret;
 	struct crypto_blkcipher *tfm = desc->tfm;
 	struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(tfm);
-	SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk);
 
-	skcipher_request_set_tfm(req, sctx->fallback.blk);
+	skcipher_request_set_sync_tfm(req, sctx->fallback.blk);
 	skcipher_request_set_callback(req, desc->flags, NULL, NULL);
 	skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
 
@@ -223,9 +224,9 @@ static int fallback_blk_enc(struct blkcipher_desc *desc,
 	unsigned int ret;
 	struct crypto_blkcipher *tfm = desc->tfm;
 	struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(tfm);
-	SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk);
 
-	skcipher_request_set_tfm(req, sctx->fallback.blk);
+	skcipher_request_set_sync_tfm(req, sctx->fallback.blk);
 	skcipher_request_set_callback(req, desc->flags, NULL, NULL);
 	skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
 
@@ -306,8 +307,7 @@ static int fallback_init_blk(struct crypto_tfm *tfm)
 	const char *name = tfm->__crt_alg->cra_name;
 	struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
 
-	sctx->fallback.blk = crypto_alloc_skcipher(name, 0,
-						   CRYPTO_ALG_ASYNC |
+	sctx->fallback.blk = crypto_alloc_sync_skcipher(name, 0,
 						   CRYPTO_ALG_NEED_FALLBACK);
 
 	if (IS_ERR(sctx->fallback.blk)) {
@@ -323,7 +323,7 @@ static void fallback_exit_blk(struct crypto_tfm *tfm)
 {
 	struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm);
 
-	crypto_free_skcipher(sctx->fallback.blk);
+	crypto_free_sync_skcipher(sctx->fallback.blk);
 }
 
 static struct crypto_alg ecb_aes_alg = {
@@ -453,14 +453,15 @@ static int xts_fallback_setkey(struct crypto_tfm *tfm, const u8 *key,
 	struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
 	unsigned int ret;
 
-	crypto_skcipher_clear_flags(xts_ctx->fallback, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(xts_ctx->fallback, tfm->crt_flags &
+	crypto_sync_skcipher_clear_flags(xts_ctx->fallback,
+					 CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(xts_ctx->fallback, tfm->crt_flags &
 						     CRYPTO_TFM_REQ_MASK);
 
-	ret = crypto_skcipher_setkey(xts_ctx->fallback, key, len);
+	ret = crypto_sync_skcipher_setkey(xts_ctx->fallback, key, len);
 
 	tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
-	tfm->crt_flags |= crypto_skcipher_get_flags(xts_ctx->fallback) &
+	tfm->crt_flags |= crypto_sync_skcipher_get_flags(xts_ctx->fallback) &
 			  CRYPTO_TFM_RES_MASK;
 
 	return ret;
@@ -472,10 +473,10 @@ static int xts_fallback_decrypt(struct blkcipher_desc *desc,
 {
 	struct crypto_blkcipher *tfm = desc->tfm;
 	struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(tfm);
-	SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback);
 	unsigned int ret;
 
-	skcipher_request_set_tfm(req, xts_ctx->fallback);
+	skcipher_request_set_sync_tfm(req, xts_ctx->fallback);
 	skcipher_request_set_callback(req, desc->flags, NULL, NULL);
 	skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
 
@@ -491,10 +492,10 @@ static int xts_fallback_encrypt(struct blkcipher_desc *desc,
 {
 	struct crypto_blkcipher *tfm = desc->tfm;
 	struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(tfm);
-	SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback);
 	unsigned int ret;
 
-	skcipher_request_set_tfm(req, xts_ctx->fallback);
+	skcipher_request_set_sync_tfm(req, xts_ctx->fallback);
 	skcipher_request_set_callback(req, desc->flags, NULL, NULL);
 	skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
 
@@ -611,8 +612,7 @@ static int xts_fallback_init(struct crypto_tfm *tfm)
 	const char *name = tfm->__crt_alg->cra_name;
 	struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
 
-	xts_ctx->fallback = crypto_alloc_skcipher(name, 0,
-						  CRYPTO_ALG_ASYNC |
+	xts_ctx->fallback = crypto_alloc_sync_skcipher(name, 0,
 						  CRYPTO_ALG_NEED_FALLBACK);
 
 	if (IS_ERR(xts_ctx->fallback)) {
@@ -627,7 +627,7 @@ static void xts_fallback_exit(struct crypto_tfm *tfm)
 {
 	struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm);
 
-	crypto_free_skcipher(xts_ctx->fallback);
+	crypto_free_sync_skcipher(xts_ctx->fallback);
 }
 
 static struct crypto_alg xts_aes_alg = {
-- 
2.17.1



* [PATCH crypto-next 06/23] x86/fpu: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (4 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 05/23] s390/crypto: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-24 11:45   ` Ard Biesheuvel
  2018-09-19  2:10 ` [PATCH crypto-next 07/23] block: cryptoloop: " Kees Cook
                   ` (18 subsequent siblings)
  24 siblings, 1 reply; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, x86, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
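
The cast in crypto_fpu_init_tfm() below relies on the sync type from patch
01/23 being a transparent container around the base tfm:

struct crypto_sync_skcipher {
	struct crypto_skcipher base;
};

so the pointer conversion is layout-safe; it is appropriate here presumably
because the FPU template only ever wraps the synchronous internal AES-NI
ciphers.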

Cc: x86@kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/crypto/fpu.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/arch/x86/crypto/fpu.c b/arch/x86/crypto/fpu.c
index 406680476c52..be9b3766f241 100644
--- a/arch/x86/crypto/fpu.c
+++ b/arch/x86/crypto/fpu.c
@@ -20,21 +20,23 @@
 #include <asm/fpu/api.h>
 
 struct crypto_fpu_ctx {
-	struct crypto_skcipher *child;
+	struct crypto_sync_skcipher *child;
 };
 
 static int crypto_fpu_setkey(struct crypto_skcipher *parent, const u8 *key,
 			     unsigned int keylen)
 {
 	struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(parent);
-	struct crypto_skcipher *child = ctx->child;
+	struct crypto_sync_skcipher *child = ctx->child;
 	int err;
 
-	crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(parent) &
+	crypto_sync_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(child,
+				       crypto_skcipher_get_flags(parent) &
 					 CRYPTO_TFM_REQ_MASK);
-	err = crypto_skcipher_setkey(child, key, keylen);
-	crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) &
+	err = crypto_sync_skcipher_setkey(child, key, keylen);
+	crypto_skcipher_set_flags(parent,
+				  crypto_sync_skcipher_get_flags(child) &
 					  CRYPTO_TFM_RES_MASK);
 	return err;
 }
@@ -43,11 +45,11 @@ static int crypto_fpu_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_skcipher *child = ctx->child;
-	SKCIPHER_REQUEST_ON_STACK(subreq, child);
+	struct crypto_sync_skcipher *child = ctx->child;
+	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
 	int err;
 
-	skcipher_request_set_tfm(subreq, child);
+	skcipher_request_set_sync_tfm(subreq, child);
 	skcipher_request_set_callback(subreq, 0, NULL, NULL);
 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
 				   req->iv);
@@ -64,11 +66,11 @@ static int crypto_fpu_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_skcipher *child = ctx->child;
-	SKCIPHER_REQUEST_ON_STACK(subreq, child);
+	struct crypto_sync_skcipher *child = ctx->child;
+	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
 	int err;
 
-	skcipher_request_set_tfm(subreq, child);
+	skcipher_request_set_sync_tfm(subreq, child);
 	skcipher_request_set_callback(subreq, 0, NULL, NULL);
 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
 				   req->iv);
@@ -93,7 +95,7 @@ static int crypto_fpu_init_tfm(struct crypto_skcipher *tfm)
 	if (IS_ERR(cipher))
 		return PTR_ERR(cipher);
 
-	ctx->child = cipher;
+	ctx->child = (struct crypto_sync_skcipher *)cipher;
 
 	return 0;
 }
@@ -102,7 +104,7 @@ static void crypto_fpu_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	crypto_free_skcipher(ctx->child);
+	crypto_free_sync_skcipher(ctx->child);
 }
 
 static void crypto_fpu_free(struct skcipher_instance *inst)
-- 
2.17.1



* [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (5 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 06/23] x86/fpu: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-24 11:52   ` Ard Biesheuvel
  2018-09-19  2:10 ` [PATCH crypto-next 08/23] libceph: " Kees Cook
                   ` (17 subsequent siblings)
  24 siblings, 1 reply; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Jens Axboe, linux-block, Ard Biesheuvel, Eric Biggers,
	linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
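
The allocation flags change from CRYPTO_ALG_ASYNC to 0 because the new
allocator from patch 01/23 forces the async bit into the mask itself and
refuses any implementation whose request size would not fit the fixed
on-stack reservation; roughly:

struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(
	const char *alg_name, u32 type, u32 mask)
{
	struct crypto_skcipher *tfm;

	/* Only sync algorithms allowed. */
	mask |= CRYPTO_ALG_ASYNC;

	tfm = crypto_alloc_skcipher(alg_name, type, mask);

	/* Reject anything too big for the on-stack request. */
	if (!IS_ERR(tfm) && WARN_ON(crypto_skcipher_reqsize(tfm) >
				    MAX_SYNC_SKCIPHER_REQSIZE)) {
		crypto_free_skcipher(tfm);
		return ERR_PTR(-EINVAL);
	}

	return (struct crypto_sync_skcipher *)tfm;
}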

Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/block/cryptoloop.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
index 7033a4beda66..254ee7d54e91 100644
--- a/drivers/block/cryptoloop.c
+++ b/drivers/block/cryptoloop.c
@@ -45,7 +45,7 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
 	char cms[LO_NAME_SIZE];			/* cipher-mode string */
 	char *mode;
 	char *cmsp = cms;			/* c-m string pointer */
-	struct crypto_skcipher *tfm;
+	struct crypto_sync_skcipher *tfm;
 
 	/* encryption breaks for non sector aligned offsets */
 
@@ -80,13 +80,13 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
 	*cmsp++ = ')';
 	*cmsp = 0;
 
-	tfm = crypto_alloc_skcipher(cms, 0, CRYPTO_ALG_ASYNC);
+	tfm = crypto_alloc_sync_skcipher(cms, 0, 0);
 	if (IS_ERR(tfm))
 		return PTR_ERR(tfm);
 
-	err = crypto_skcipher_setkey(tfm, info->lo_encrypt_key,
-				     info->lo_encrypt_key_size);
-	
+	err = crypto_sync_skcipher_setkey(tfm, info->lo_encrypt_key,
+					  info->lo_encrypt_key_size);
+
 	if (err != 0)
 		goto out_free_tfm;
 
@@ -94,7 +94,7 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
 	return 0;
 
  out_free_tfm:
-	crypto_free_skcipher(tfm);
+	crypto_free_sync_skcipher(tfm);
 
  out:
 	return err;
@@ -109,8 +109,8 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
 		    struct page *loop_page, unsigned loop_off,
 		    int size, sector_t IV)
 {
-	struct crypto_skcipher *tfm = lo->key_data;
-	SKCIPHER_REQUEST_ON_STACK(req, tfm);
+	struct crypto_sync_skcipher *tfm = lo->key_data;
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
 	struct scatterlist sg_out;
 	struct scatterlist sg_in;
 
@@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
 	unsigned in_offs, out_offs;
 	int err;
 
-	skcipher_request_set_tfm(req, tfm);
+	skcipher_request_set_sync_tfm(req, tfm);
 	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
 				      NULL, NULL);
 
@@ -175,9 +175,9 @@ cryptoloop_ioctl(struct loop_device *lo, int cmd, unsigned long arg)
 static int
 cryptoloop_release(struct loop_device *lo)
 {
-	struct crypto_skcipher *tfm = lo->key_data;
+	struct crypto_sync_skcipher *tfm = lo->key_data;
 	if (tfm != NULL) {
-		crypto_free_skcipher(tfm);
+		crypto_free_sync_skcipher(tfm);
 		lo->key_data = NULL;
 		return 0;
 	}
-- 
2.17.1



* [PATCH crypto-next 08/23] libceph: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (6 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 07/23] block: cryptoloop: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 09/23] ppp: mppe: " Kees Cook
                   ` (16 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ilya Dryomov, Yan, Zheng, Sage Weil, ceph-devel,
	Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: "Yan, Zheng" <zyan@redhat.com>
Cc: Sage Weil <sage@redhat.com>
Cc: ceph-devel@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 net/ceph/crypto.c | 12 ++++++------
 net/ceph/crypto.h |  2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/net/ceph/crypto.c b/net/ceph/crypto.c
index 02172c408ff2..5d6724cee38f 100644
--- a/net/ceph/crypto.c
+++ b/net/ceph/crypto.c
@@ -46,9 +46,9 @@ static int set_secret(struct ceph_crypto_key *key, void *buf)
 		goto fail;
 	}
 
-	/* crypto_alloc_skcipher() allocates with GFP_KERNEL */
+	/* crypto_alloc_sync_skcipher() allocates with GFP_KERNEL */
 	noio_flag = memalloc_noio_save();
-	key->tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
+	key->tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
 	memalloc_noio_restore(noio_flag);
 	if (IS_ERR(key->tfm)) {
 		ret = PTR_ERR(key->tfm);
@@ -56,7 +56,7 @@ static int set_secret(struct ceph_crypto_key *key, void *buf)
 		goto fail;
 	}
 
-	ret = crypto_skcipher_setkey(key->tfm, key->key, key->len);
+	ret = crypto_sync_skcipher_setkey(key->tfm, key->key, key->len);
 	if (ret)
 		goto fail;
 
@@ -136,7 +136,7 @@ void ceph_crypto_key_destroy(struct ceph_crypto_key *key)
 	if (key) {
 		kfree(key->key);
 		key->key = NULL;
-		crypto_free_skcipher(key->tfm);
+		crypto_free_sync_skcipher(key->tfm);
 		key->tfm = NULL;
 	}
 }
@@ -216,7 +216,7 @@ static void teardown_sgtable(struct sg_table *sgt)
 static int ceph_aes_crypt(const struct ceph_crypto_key *key, bool encrypt,
 			  void *buf, int buf_len, int in_len, int *pout_len)
 {
-	SKCIPHER_REQUEST_ON_STACK(req, key->tfm);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, key->tfm);
 	struct sg_table sgt;
 	struct scatterlist prealloc_sg;
 	char iv[AES_BLOCK_SIZE] __aligned(8);
@@ -232,7 +232,7 @@ static int ceph_aes_crypt(const struct ceph_crypto_key *key, bool encrypt,
 		return ret;
 
 	memcpy(iv, aes_iv, AES_BLOCK_SIZE);
-	skcipher_request_set_tfm(req, key->tfm);
+	skcipher_request_set_sync_tfm(req, key->tfm);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sgt.sgl, sgt.sgl, crypt_len, iv);
 
diff --git a/net/ceph/crypto.h b/net/ceph/crypto.h
index bb45c7d43739..96ef4d860bc9 100644
--- a/net/ceph/crypto.h
+++ b/net/ceph/crypto.h
@@ -13,7 +13,7 @@ struct ceph_crypto_key {
 	struct ceph_timespec created;
 	int len;
 	void *key;
-	struct crypto_skcipher *tfm;
+	struct crypto_sync_skcipher *tfm;
 };
 
 int ceph_crypto_key_clone(struct ceph_crypto_key *dst,
-- 
2.17.1



* [PATCH crypto-next 09/23] ppp: mppe: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (7 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 08/23] libceph: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 10/23] rxrpc: " Kees Cook
                   ` (15 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Paul Mackerras, linux-ppp, Ard Biesheuvel,
	Eric Biggers, linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
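
Note that mppe_rekey() below still wipes the on-stack request with
skcipher_request_zero(), which (paraphrasing include/crypto/skcipher.h)
zeroes the request header plus the tfm's actual reqsize, not the whole
fixed reservation:

static inline void skcipher_request_zero(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

	memzero_explicit(req, sizeof(*req) + crypto_skcipher_reqsize(tfm));
}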

Cc: Paul Mackerras <paulus@samba.org>
Cc: linux-ppp@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/net/ppp/ppp_mppe.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ppp/ppp_mppe.c b/drivers/net/ppp/ppp_mppe.c
index a205750b431b..7ccdc62c6052 100644
--- a/drivers/net/ppp/ppp_mppe.c
+++ b/drivers/net/ppp/ppp_mppe.c
@@ -95,7 +95,7 @@ static inline void sha_pad_init(struct sha_pad *shapad)
  * State for an MPPE (de)compressor.
  */
 struct ppp_mppe_state {
-	struct crypto_skcipher *arc4;
+	struct crypto_sync_skcipher *arc4;
 	struct shash_desc *sha1;
 	unsigned char *sha1_digest;
 	unsigned char master_key[MPPE_MAX_KEY_LEN];
@@ -155,15 +155,15 @@ static void get_new_key_from_sha(struct ppp_mppe_state * state)
 static void mppe_rekey(struct ppp_mppe_state * state, int initial_key)
 {
 	struct scatterlist sg_in[1], sg_out[1];
-	SKCIPHER_REQUEST_ON_STACK(req, state->arc4);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, state->arc4);
 
-	skcipher_request_set_tfm(req, state->arc4);
+	skcipher_request_set_sync_tfm(req, state->arc4);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 
 	get_new_key_from_sha(state);
 	if (!initial_key) {
-		crypto_skcipher_setkey(state->arc4, state->sha1_digest,
-				       state->keylen);
+		crypto_sync_skcipher_setkey(state->arc4, state->sha1_digest,
+					    state->keylen);
 		sg_init_table(sg_in, 1);
 		sg_init_table(sg_out, 1);
 		setup_sg(sg_in, state->sha1_digest, state->keylen);
@@ -181,7 +181,8 @@ static void mppe_rekey(struct ppp_mppe_state * state, int initial_key)
 		state->session_key[1] = 0x26;
 		state->session_key[2] = 0x9e;
 	}
-	crypto_skcipher_setkey(state->arc4, state->session_key, state->keylen);
+	crypto_sync_skcipher_setkey(state->arc4, state->session_key,
+				    state->keylen);
 	skcipher_request_zero(req);
 }
 
@@ -203,7 +204,7 @@ static void *mppe_alloc(unsigned char *options, int optlen)
 		goto out;
 
 
-	state->arc4 = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC);
+	state->arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
 	if (IS_ERR(state->arc4)) {
 		state->arc4 = NULL;
 		goto out_free;
@@ -250,7 +251,7 @@ static void *mppe_alloc(unsigned char *options, int optlen)
 		crypto_free_shash(state->sha1->tfm);
 		kzfree(state->sha1);
 	}
-	crypto_free_skcipher(state->arc4);
+	crypto_free_sync_skcipher(state->arc4);
 	kfree(state);
 out:
 	return NULL;
@@ -266,7 +267,7 @@ static void mppe_free(void *arg)
 		kfree(state->sha1_digest);
 		crypto_free_shash(state->sha1->tfm);
 		kzfree(state->sha1);
-		crypto_free_skcipher(state->arc4);
+		crypto_free_sync_skcipher(state->arc4);
 		kfree(state);
 	}
 }
@@ -366,7 +367,7 @@ mppe_compress(void *arg, unsigned char *ibuf, unsigned char *obuf,
 	      int isize, int osize)
 {
 	struct ppp_mppe_state *state = (struct ppp_mppe_state *) arg;
-	SKCIPHER_REQUEST_ON_STACK(req, state->arc4);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, state->arc4);
 	int proto;
 	int err;
 	struct scatterlist sg_in[1], sg_out[1];
@@ -426,7 +427,7 @@ mppe_compress(void *arg, unsigned char *ibuf, unsigned char *obuf,
 	setup_sg(sg_in, ibuf, isize);
 	setup_sg(sg_out, obuf, osize);
 
-	skcipher_request_set_tfm(req, state->arc4);
+	skcipher_request_set_sync_tfm(req, state->arc4);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg_in, sg_out, isize, NULL);
 	err = crypto_skcipher_encrypt(req);
@@ -480,7 +481,7 @@ mppe_decompress(void *arg, unsigned char *ibuf, int isize, unsigned char *obuf,
 		int osize)
 {
 	struct ppp_mppe_state *state = (struct ppp_mppe_state *) arg;
-	SKCIPHER_REQUEST_ON_STACK(req, state->arc4);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, state->arc4);
 	unsigned ccount;
 	int flushed = MPPE_BITS(ibuf) & MPPE_BIT_FLUSHED;
 	struct scatterlist sg_in[1], sg_out[1];
@@ -615,7 +616,7 @@ mppe_decompress(void *arg, unsigned char *ibuf, int isize, unsigned char *obuf,
 	setup_sg(sg_in, ibuf, 1);
 	setup_sg(sg_out, obuf, 1);
 
-	skcipher_request_set_tfm(req, state->arc4);
+	skcipher_request_set_sync_tfm(req, state->arc4);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg_in, sg_out, 1, NULL);
 	if (crypto_skcipher_decrypt(req)) {
-- 
2.17.1



* [PATCH crypto-next 10/23] rxrpc: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (8 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 09/23] ppp: mppe: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 11/23] wusb: " Kees Cook
                   ` (14 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, David Howells, linux-afs, Ard Biesheuvel,
	Eric Biggers, linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
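
Each skcipher_request_set_tfm() call below becomes
skcipher_request_set_sync_tfm(), which is just a thin unwrapping helper
added by patch 01/23, roughly:

static inline void skcipher_request_set_sync_tfm(struct skcipher_request *req,
					struct crypto_sync_skcipher *tfm)
{
	skcipher_request_set_tfm(req, &tfm->base);
}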

Cc: David Howells <dhowells@redhat.com>
Cc: linux-afs@lists.infradead.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 net/rxrpc/ar-internal.h |  2 +-
 net/rxrpc/rxkad.c       | 44 ++++++++++++++++++++---------------------
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index c97558710421..41be33c9eecf 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -442,7 +442,7 @@ struct rxrpc_connection {
 	struct sk_buff_head	rx_queue;	/* received conn-level packets */
 	const struct rxrpc_security *security;	/* applied security module */
 	struct key		*server_key;	/* security for this service */
-	struct crypto_skcipher	*cipher;	/* encryption handle */
+	struct crypto_sync_skcipher *cipher;	/* encryption handle */
 	struct rxrpc_crypt	csum_iv;	/* packet checksum base */
 	unsigned long		flags;
 	unsigned long		events;
diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
index cea16838d588..cbef9ea43dec 100644
--- a/net/rxrpc/rxkad.c
+++ b/net/rxrpc/rxkad.c
@@ -46,7 +46,7 @@ struct rxkad_level2_hdr {
  * alloc routine, but since we have it to hand, we use it to decrypt RESPONSE
  * packets
  */
-static struct crypto_skcipher *rxkad_ci;
+static struct crypto_sync_skcipher *rxkad_ci;
 static DEFINE_MUTEX(rxkad_ci_mutex);
 
 /*
@@ -54,7 +54,7 @@ static DEFINE_MUTEX(rxkad_ci_mutex);
  */
 static int rxkad_init_connection_security(struct rxrpc_connection *conn)
 {
-	struct crypto_skcipher *ci;
+	struct crypto_sync_skcipher *ci;
 	struct rxrpc_key_token *token;
 	int ret;
 
@@ -63,14 +63,14 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn)
 	token = conn->params.key->payload.data[0];
 	conn->security_ix = token->security_index;
 
-	ci = crypto_alloc_skcipher("pcbc(fcrypt)", 0, CRYPTO_ALG_ASYNC);
+	ci = crypto_alloc_sync_skcipher("pcbc(fcrypt)", 0, 0);
 	if (IS_ERR(ci)) {
 		_debug("no cipher");
 		ret = PTR_ERR(ci);
 		goto error;
 	}
 
-	if (crypto_skcipher_setkey(ci, token->kad->session_key,
+	if (crypto_sync_skcipher_setkey(ci, token->kad->session_key,
 				   sizeof(token->kad->session_key)) < 0)
 		BUG();
 
@@ -104,7 +104,7 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn)
 static int rxkad_prime_packet_security(struct rxrpc_connection *conn)
 {
 	struct rxrpc_key_token *token;
-	SKCIPHER_REQUEST_ON_STACK(req, conn->cipher);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, conn->cipher);
 	struct scatterlist sg;
 	struct rxrpc_crypt iv;
 	__be32 *tmpbuf;
@@ -128,7 +128,7 @@ static int rxkad_prime_packet_security(struct rxrpc_connection *conn)
 	tmpbuf[3] = htonl(conn->security_ix);
 
 	sg_init_one(&sg, tmpbuf, tmpsize);
-	skcipher_request_set_tfm(req, conn->cipher);
+	skcipher_request_set_sync_tfm(req, conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg, &sg, tmpsize, iv.x);
 	crypto_skcipher_encrypt(req);
@@ -167,7 +167,7 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
 	memset(&iv, 0, sizeof(iv));
 
 	sg_init_one(&sg, sechdr, 8);
-	skcipher_request_set_tfm(req, call->conn->cipher);
+	skcipher_request_set_sync_tfm(req, call->conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
 	crypto_skcipher_encrypt(req);
@@ -212,7 +212,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
 	memcpy(&iv, token->kad->session_key, sizeof(iv));
 
 	sg_init_one(&sg[0], sechdr, sizeof(rxkhdr));
-	skcipher_request_set_tfm(req, call->conn->cipher);
+	skcipher_request_set_sync_tfm(req, call->conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg[0], &sg[0], sizeof(rxkhdr), iv.x);
 	crypto_skcipher_encrypt(req);
@@ -250,7 +250,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
 			       void *sechdr)
 {
 	struct rxrpc_skb_priv *sp;
-	SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
 	struct rxrpc_crypt iv;
 	struct scatterlist sg;
 	u32 x, y;
@@ -279,7 +279,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
 	call->crypto_buf[1] = htonl(x);
 
 	sg_init_one(&sg, call->crypto_buf, 8);
-	skcipher_request_set_tfm(req, call->conn->cipher);
+	skcipher_request_set_sync_tfm(req, call->conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
 	crypto_skcipher_encrypt(req);
@@ -352,7 +352,7 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
 	/* start the decryption afresh */
 	memset(&iv, 0, sizeof(iv));
 
-	skcipher_request_set_tfm(req, call->conn->cipher);
+	skcipher_request_set_sync_tfm(req, call->conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, 8, iv.x);
 	crypto_skcipher_decrypt(req);
@@ -450,7 +450,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
 	token = call->conn->params.key->payload.data[0];
 	memcpy(&iv, token->kad->session_key, sizeof(iv));
 
-	skcipher_request_set_tfm(req, call->conn->cipher);
+	skcipher_request_set_sync_tfm(req, call->conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, len, iv.x);
 	crypto_skcipher_decrypt(req);
@@ -506,7 +506,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
 			       unsigned int offset, unsigned int len,
 			       rxrpc_seq_t seq, u16 expected_cksum)
 {
-	SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
 	struct rxrpc_crypt iv;
 	struct scatterlist sg;
 	bool aborted;
@@ -529,7 +529,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
 	call->crypto_buf[1] = htonl(x);
 
 	sg_init_one(&sg, call->crypto_buf, 8);
-	skcipher_request_set_tfm(req, call->conn->cipher);
+	skcipher_request_set_sync_tfm(req, call->conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
 	crypto_skcipher_encrypt(req);
@@ -755,7 +755,7 @@ static void rxkad_encrypt_response(struct rxrpc_connection *conn,
 				   struct rxkad_response *resp,
 				   const struct rxkad_key *s2)
 {
-	SKCIPHER_REQUEST_ON_STACK(req, conn->cipher);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, conn->cipher);
 	struct rxrpc_crypt iv;
 	struct scatterlist sg[1];
 
@@ -764,7 +764,7 @@ static void rxkad_encrypt_response(struct rxrpc_connection *conn,
 
 	sg_init_table(sg, 1);
 	sg_set_buf(sg, &resp->encrypted, sizeof(resp->encrypted));
-	skcipher_request_set_tfm(req, conn->cipher);
+	skcipher_request_set_sync_tfm(req, conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x);
 	crypto_skcipher_encrypt(req);
@@ -1021,7 +1021,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn,
 				   struct rxkad_response *resp,
 				   const struct rxrpc_crypt *session_key)
 {
-	SKCIPHER_REQUEST_ON_STACK(req, rxkad_ci);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, rxkad_ci);
 	struct scatterlist sg[1];
 	struct rxrpc_crypt iv;
 
@@ -1031,7 +1031,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn,
 	ASSERT(rxkad_ci != NULL);
 
 	mutex_lock(&rxkad_ci_mutex);
-	if (crypto_skcipher_setkey(rxkad_ci, session_key->x,
+	if (crypto_sync_skcipher_setkey(rxkad_ci, session_key->x,
 				   sizeof(*session_key)) < 0)
 		BUG();
 
@@ -1039,7 +1039,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn,
 
 	sg_init_table(sg, 1);
 	sg_set_buf(sg, &resp->encrypted, sizeof(resp->encrypted));
-	skcipher_request_set_tfm(req, rxkad_ci);
+	skcipher_request_set_sync_tfm(req, rxkad_ci);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x);
 	crypto_skcipher_decrypt(req);
@@ -1218,7 +1218,7 @@ static void rxkad_clear(struct rxrpc_connection *conn)
 	_enter("");
 
 	if (conn->cipher)
-		crypto_free_skcipher(conn->cipher);
+		crypto_free_sync_skcipher(conn->cipher);
 }
 
 /*
@@ -1228,7 +1228,7 @@ static int rxkad_init(void)
 {
 	/* pin the cipher we need so that the crypto layer doesn't invoke
 	 * keventd to go get it */
-	rxkad_ci = crypto_alloc_skcipher("pcbc(fcrypt)", 0, CRYPTO_ALG_ASYNC);
+	rxkad_ci = crypto_alloc_sync_skcipher("pcbc(fcrypt)", 0, 0);
 	return PTR_ERR_OR_ZERO(rxkad_ci);
 }
 
@@ -1238,7 +1238,7 @@ static int rxkad_init(void)
 static void rxkad_exit(void)
 {
 	if (rxkad_ci)
-		crypto_free_skcipher(rxkad_ci);
+		crypto_free_sync_skcipher(rxkad_ci);
 }
 
 /*
-- 
2.17.1



* [PATCH crypto-next 11/23] wusb: Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (9 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 10/23] rxrpc: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-20 10:39   ` Greg Kroah-Hartman
  2018-09-19  2:10 ` [PATCH crypto-next 12/23] crypto: ccp - " Kees Cook
                   ` (13 subsequent siblings)
  24 siblings, 1 reply; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Greg Kroah-Hartman, Felipe Balbi, Johan Hovold,
	Gustavo A. R. Silva, linux-usb, Ard Biesheuvel, Eric Biggers,
	linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
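
This conversion also uses crypto_sync_skcipher_ivsize() and
crypto_sync_skcipher_setkey(); like the other sync helpers from patch
01/23, these just forward to the embedded base tfm, roughly:

static inline unsigned int crypto_sync_skcipher_ivsize(
	struct crypto_sync_skcipher *tfm)
{
	return crypto_skcipher_ivsize(&tfm->base);
}

static inline int crypto_sync_skcipher_setkey(struct crypto_sync_skcipher *tfm,
					      const u8 *key, unsigned int keylen)
{
	return crypto_skcipher_setkey(&tfm->base, key, keylen);
}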

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Felipe Balbi <felipe.balbi@linux.intel.com>
Cc: Johan Hovold <johan@kernel.org>
Cc: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Cc: linux-usb@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/usb/wusbcore/crypto.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/usb/wusbcore/crypto.c b/drivers/usb/wusbcore/crypto.c
index aff50eb09ca9..68ddee86a886 100644
--- a/drivers/usb/wusbcore/crypto.c
+++ b/drivers/usb/wusbcore/crypto.c
@@ -189,7 +189,7 @@ struct wusb_mac_scratch {
  * NOTE: blen is not aligned to a block size, we'll pad zeros, that's
  *       what sg[4] is for. Maybe there is a smarter way to do this.
  */
-static int wusb_ccm_mac(struct crypto_skcipher *tfm_cbc,
+static int wusb_ccm_mac(struct crypto_sync_skcipher *tfm_cbc,
 			struct crypto_cipher *tfm_aes,
 			struct wusb_mac_scratch *scratch,
 			void *mic,
@@ -198,7 +198,7 @@ static int wusb_ccm_mac(struct crypto_skcipher *tfm_cbc,
 			size_t blen)
 {
 	int result = 0;
-	SKCIPHER_REQUEST_ON_STACK(req, tfm_cbc);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm_cbc);
 	struct scatterlist sg[4], sg_dst;
 	void *dst_buf;
 	size_t dst_size;
@@ -224,7 +224,7 @@ static int wusb_ccm_mac(struct crypto_skcipher *tfm_cbc,
 	if (!dst_buf)
 		goto error_dst_buf;
 
-	iv = kzalloc(crypto_skcipher_ivsize(tfm_cbc), GFP_KERNEL);
+	iv = kzalloc(crypto_sync_skcipher_ivsize(tfm_cbc), GFP_KERNEL);
 	if (!iv)
 		goto error_iv;
 
@@ -251,7 +251,7 @@ static int wusb_ccm_mac(struct crypto_skcipher *tfm_cbc,
 	sg_set_page(&sg[3], ZERO_PAGE(0), zero_padding, 0);
 	sg_init_one(&sg_dst, dst_buf, dst_size);
 
-	skcipher_request_set_tfm(req, tfm_cbc);
+	skcipher_request_set_sync_tfm(req, tfm_cbc);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, &sg_dst, dst_size, iv);
 	result = crypto_skcipher_encrypt(req);
@@ -298,19 +298,19 @@ ssize_t wusb_prf(void *out, size_t out_size,
 {
 	ssize_t result, bytes = 0, bitr;
 	struct aes_ccm_nonce n = *_n;
-	struct crypto_skcipher *tfm_cbc;
+	struct crypto_sync_skcipher *tfm_cbc;
 	struct crypto_cipher *tfm_aes;
 	struct wusb_mac_scratch *scratch;
 	u64 sfn = 0;
 	__le64 sfn_le;
 
-	tfm_cbc = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
+	tfm_cbc = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
 	if (IS_ERR(tfm_cbc)) {
 		result = PTR_ERR(tfm_cbc);
 		printk(KERN_ERR "E: can't load CBC(AES): %d\n", (int)result);
 		goto error_alloc_cbc;
 	}
-	result = crypto_skcipher_setkey(tfm_cbc, key, 16);
+	result = crypto_sync_skcipher_setkey(tfm_cbc, key, 16);
 	if (result < 0) {
 		printk(KERN_ERR "E: can't set CBC key: %d\n", (int)result);
 		goto error_setkey_cbc;
@@ -351,7 +351,7 @@ ssize_t wusb_prf(void *out, size_t out_size,
 	crypto_free_cipher(tfm_aes);
 error_alloc_aes:
 error_setkey_cbc:
-	crypto_free_skcipher(tfm_cbc);
+	crypto_free_sync_skcipher(tfm_cbc);
 error_alloc_cbc:
 	return result;
 }
-- 
2.17.1



* [PATCH crypto-next 12/23] crypto: ccp - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (10 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 11/23] wusb: " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 13/23] crypto: vmx " Kees Cook
                   ` (12 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Tom Lendacky, Gary Hook, Ard Biesheuvel, Eric Biggers,
	linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
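
Unlike most conversions in this series, the fallback allocation below keeps
CRYPTO_ALG_ASYNC in the mask alongside CRYPTO_ALG_NEED_FALLBACK; since
crypto_alloc_sync_skcipher() ORs CRYPTO_ALG_ASYNC into the mask itself, the
extra bit is harmless but redundant.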

Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Gary Hook <gary.hook@amd.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/ccp/ccp-crypto-aes-xts.c | 13 +++++++------
 drivers/crypto/ccp/ccp-crypto.h         |  2 +-
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-xts.c b/drivers/crypto/ccp/ccp-crypto-aes-xts.c
index 94b5bcf5b628..ca4630b8395f 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-xts.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-xts.c
@@ -102,7 +102,7 @@ static int ccp_aes_xts_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 	ctx->u.aes.key_len = key_len / 2;
 	sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len);
 
-	return crypto_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len);
+	return crypto_sync_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len);
 }
 
 static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
@@ -151,12 +151,13 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req,
 	    (ctx->u.aes.key_len != AES_KEYSIZE_256))
 		fallback = 1;
 	if (fallback) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->u.aes.tfm_skcipher);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq,
+					       ctx->u.aes.tfm_skcipher);
 
 		/* Use the fallback to process the request for any
 		 * unsupported unit sizes or key sizes
 		 */
-		skcipher_request_set_tfm(subreq, ctx->u.aes.tfm_skcipher);
+		skcipher_request_set_sync_tfm(subreq, ctx->u.aes.tfm_skcipher);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -203,12 +204,12 @@ static int ccp_aes_xts_decrypt(struct ablkcipher_request *req)
 static int ccp_aes_xts_cra_init(struct crypto_tfm *tfm)
 {
 	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
-	struct crypto_skcipher *fallback_tfm;
+	struct crypto_sync_skcipher *fallback_tfm;
 
 	ctx->complete = ccp_aes_xts_complete;
 	ctx->u.aes.key_len = 0;
 
-	fallback_tfm = crypto_alloc_skcipher("xts(aes)", 0,
+	fallback_tfm = crypto_alloc_sync_skcipher("xts(aes)", 0,
 					     CRYPTO_ALG_ASYNC |
 					     CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(fallback_tfm)) {
@@ -226,7 +227,7 @@ static void ccp_aes_xts_cra_exit(struct crypto_tfm *tfm)
 {
 	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	crypto_free_skcipher(ctx->u.aes.tfm_skcipher);
+	crypto_free_sync_skcipher(ctx->u.aes.tfm_skcipher);
 }
 
 static int ccp_register_aes_xts_alg(struct list_head *head,
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index b9fd090c46c2..28819e11db96 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -88,7 +88,7 @@ static inline struct ccp_crypto_ahash_alg *
 /***** AES related defines *****/
 struct ccp_aes_ctx {
 	/* Fallback cipher for XTS with unsupported unit sizes */
-	struct crypto_skcipher *tfm_skcipher;
+	struct crypto_sync_skcipher *tfm_skcipher;
 
 	/* Cipher used to generate CMAC K1/K2 keys */
 	struct crypto_cipher *tfm_cipher;
-- 
2.17.1



* [PATCH crypto-next 13/23] crypto: vmx - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (11 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 12/23] crypto: ccp - " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 14/23] crypto: null " Kees Cook
                   ` (11 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Leonidas S. Barbosa, Paulo Flabiano Smorigo,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	linuxppc-dev, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: "Leonidas S. Barbosa" <leosilva@linux.vnet.ibm.com>
Cc: Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/vmx/aes_cbc.c | 22 +++++++++++-----------
 drivers/crypto/vmx/aes_ctr.c | 18 +++++++++---------
 drivers/crypto/vmx/aes_xts.c | 18 +++++++++---------
 3 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
index b71895871be3..c5c5ff82b52e 100644
--- a/drivers/crypto/vmx/aes_cbc.c
+++ b/drivers/crypto/vmx/aes_cbc.c
@@ -32,7 +32,7 @@
 #include "aesp8-ppc.h"
 
 struct p8_aes_cbc_ctx {
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 	struct aes_key enc_key;
 	struct aes_key dec_key;
 };
@@ -40,11 +40,11 @@ struct p8_aes_cbc_ctx {
 static int p8_aes_cbc_init(struct crypto_tfm *tfm)
 {
 	const char *alg = crypto_tfm_alg_name(tfm);
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 	struct p8_aes_cbc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	fallback = crypto_alloc_skcipher(alg, 0,
-			CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+	fallback = crypto_alloc_sync_skcipher(alg, 0,
+					      CRYPTO_ALG_NEED_FALLBACK);
 
 	if (IS_ERR(fallback)) {
 		printk(KERN_ERR
@@ -53,7 +53,7 @@ static int p8_aes_cbc_init(struct crypto_tfm *tfm)
 		return PTR_ERR(fallback);
 	}
 
-	crypto_skcipher_set_flags(
+	crypto_sync_skcipher_set_flags(
 		fallback,
 		crypto_skcipher_get_flags((struct crypto_skcipher *)tfm));
 	ctx->fallback = fallback;
@@ -66,7 +66,7 @@ static void p8_aes_cbc_exit(struct crypto_tfm *tfm)
 	struct p8_aes_cbc_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	if (ctx->fallback) {
-		crypto_free_skcipher(ctx->fallback);
+		crypto_free_sync_skcipher(ctx->fallback);
 		ctx->fallback = NULL;
 	}
 }
@@ -86,7 +86,7 @@ static int p8_aes_cbc_setkey(struct crypto_tfm *tfm, const u8 *key,
 	pagefault_enable();
 	preempt_enable();
 
-	ret += crypto_skcipher_setkey(ctx->fallback, key, keylen);
+	ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
 	return ret;
 }
 
@@ -100,8 +100,8 @@ static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc,
 		crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm));
 
 	if (in_interrupt()) {
-		SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
-		skcipher_request_set_tfm(req, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
+		skcipher_request_set_sync_tfm(req, ctx->fallback);
 		skcipher_request_set_callback(req, desc->flags, NULL, NULL);
 		skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
 		ret = crypto_skcipher_encrypt(req);
@@ -139,8 +139,8 @@ static int p8_aes_cbc_decrypt(struct blkcipher_desc *desc,
 		crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm));
 
 	if (in_interrupt()) {
-		SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
-		skcipher_request_set_tfm(req, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
+		skcipher_request_set_sync_tfm(req, ctx->fallback);
 		skcipher_request_set_callback(req, desc->flags, NULL, NULL);
 		skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
 		ret = crypto_skcipher_decrypt(req);
diff --git a/drivers/crypto/vmx/aes_ctr.c b/drivers/crypto/vmx/aes_ctr.c
index cd777c75291d..8a2fe092cb8e 100644
--- a/drivers/crypto/vmx/aes_ctr.c
+++ b/drivers/crypto/vmx/aes_ctr.c
@@ -32,18 +32,18 @@
 #include "aesp8-ppc.h"
 
 struct p8_aes_ctr_ctx {
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 	struct aes_key enc_key;
 };
 
 static int p8_aes_ctr_init(struct crypto_tfm *tfm)
 {
 	const char *alg = crypto_tfm_alg_name(tfm);
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 	struct p8_aes_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	fallback = crypto_alloc_skcipher(alg, 0,
-			CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+	fallback = crypto_alloc_sync_skcipher(alg, 0,
+					      CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(fallback)) {
 		printk(KERN_ERR
 		       "Failed to allocate transformation for '%s': %ld\n",
@@ -51,7 +51,7 @@ static int p8_aes_ctr_init(struct crypto_tfm *tfm)
 		return PTR_ERR(fallback);
 	}
 
-	crypto_skcipher_set_flags(
+	crypto_sync_skcipher_set_flags(
 		fallback,
 		crypto_skcipher_get_flags((struct crypto_skcipher *)tfm));
 	ctx->fallback = fallback;
@@ -64,7 +64,7 @@ static void p8_aes_ctr_exit(struct crypto_tfm *tfm)
 	struct p8_aes_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	if (ctx->fallback) {
-		crypto_free_skcipher(ctx->fallback);
+		crypto_free_sync_skcipher(ctx->fallback);
 		ctx->fallback = NULL;
 	}
 }
@@ -83,7 +83,7 @@ static int p8_aes_ctr_setkey(struct crypto_tfm *tfm, const u8 *key,
 	pagefault_enable();
 	preempt_enable();
 
-	ret += crypto_skcipher_setkey(ctx->fallback, key, keylen);
+	ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
 	return ret;
 }
 
@@ -119,8 +119,8 @@ static int p8_aes_ctr_crypt(struct blkcipher_desc *desc,
 		crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm));
 
 	if (in_interrupt()) {
-		SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
-		skcipher_request_set_tfm(req, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
+		skcipher_request_set_sync_tfm(req, ctx->fallback);
 		skcipher_request_set_callback(req, desc->flags, NULL, NULL);
 		skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
 		ret = crypto_skcipher_encrypt(req);
diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/crypto/vmx/aes_xts.c
index e9954a7d4694..ecd64e5cc5bb 100644
--- a/drivers/crypto/vmx/aes_xts.c
+++ b/drivers/crypto/vmx/aes_xts.c
@@ -33,7 +33,7 @@
 #include "aesp8-ppc.h"
 
 struct p8_aes_xts_ctx {
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 	struct aes_key enc_key;
 	struct aes_key dec_key;
 	struct aes_key tweak_key;
@@ -42,11 +42,11 @@ struct p8_aes_xts_ctx {
 static int p8_aes_xts_init(struct crypto_tfm *tfm)
 {
 	const char *alg = crypto_tfm_alg_name(tfm);
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 	struct p8_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	fallback = crypto_alloc_skcipher(alg, 0,
-			CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+	fallback = crypto_alloc_sync_skcipher(alg, 0,
+					      CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(fallback)) {
 		printk(KERN_ERR
 			"Failed to allocate transformation for '%s': %ld\n",
@@ -54,7 +54,7 @@ static int p8_aes_xts_init(struct crypto_tfm *tfm)
 		return PTR_ERR(fallback);
 	}
 
-	crypto_skcipher_set_flags(
+	crypto_sync_skcipher_set_flags(
 		fallback,
 		crypto_skcipher_get_flags((struct crypto_skcipher *)tfm));
 	ctx->fallback = fallback;
@@ -67,7 +67,7 @@ static void p8_aes_xts_exit(struct crypto_tfm *tfm)
 	struct p8_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	if (ctx->fallback) {
-		crypto_free_skcipher(ctx->fallback);
+		crypto_free_sync_skcipher(ctx->fallback);
 		ctx->fallback = NULL;
 	}
 }
@@ -92,7 +92,7 @@ static int p8_aes_xts_setkey(struct crypto_tfm *tfm, const u8 *key,
 	pagefault_enable();
 	preempt_enable();
 
-	ret += crypto_skcipher_setkey(ctx->fallback, key, keylen);
+	ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
 	return ret;
 }
 
@@ -109,8 +109,8 @@ static int p8_aes_xts_crypt(struct blkcipher_desc *desc,
 		crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm));
 
 	if (in_interrupt()) {
-		SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
-		skcipher_request_set_tfm(req, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
+		skcipher_request_set_sync_tfm(req, ctx->fallback);
 		skcipher_request_set_callback(req, desc->flags, NULL, NULL);
 		skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
 		ret = enc? crypto_skcipher_encrypt(req) : crypto_skcipher_decrypt(req);
-- 
2.17.1


* [PATCH crypto-next 14/23] crypto: null - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (12 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 13/23] crypto: vmx " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 15/23] crypto: cryptd " Kees Cook
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
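
To make the call-site pattern concrete, here is a hedged usage sketch
(the "cbc(aes)" choice and all function and parameter names are
illustrative assumptions, not code from this series):

#include <linux/err.h>
#include <linux/scatterlist.h>
#include <crypto/skcipher.h>

/* Illustrative only: run one synchronous encryption with the request
 * living on the stack at a fixed size. */
static int example_encrypt(struct scatterlist *src, struct scatterlist *dst,
			   unsigned int nbytes, const u8 *key,
			   unsigned int keylen, u8 *iv)
{
	struct crypto_sync_skcipher *tfm;
	int err;

	tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_sync_skcipher_setkey(tfm, key, keylen);
	if (!err) {
		/* Fixed-size stack request: no VLA, no heap allocation. */
		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

		skcipher_request_set_sync_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		skcipher_request_set_crypt(req, src, dst, nbytes, iv);
		err = crypto_skcipher_encrypt(req);
		skcipher_request_zero(req);
	}

	crypto_free_sync_skcipher(tfm);
	return err;
}

Because the request never leaves the stack, this pattern is only valid
when the tfm is guaranteed synchronous, which the new type enforces at
compile time.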

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/algif_aead.c             | 12 ++++++------
 crypto/authenc.c                |  8 ++++----
 crypto/authencesn.c             |  8 ++++----
 crypto/crypto_null.c            | 11 +++++------
 crypto/echainiv.c               |  4 ++--
 crypto/gcm.c                    |  8 ++++----
 crypto/seqiv.c                  |  4 ++--
 include/crypto/internal/geniv.h |  2 +-
 include/crypto/null.h           |  2 +-
 9 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index c40a8c7ee8ae..eb100a04ce9f 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -42,7 +42,7 @@
 
 struct aead_tfm {
 	struct crypto_aead *aead;
-	struct crypto_skcipher *null_tfm;
+	struct crypto_sync_skcipher *null_tfm;
 };
 
 static inline bool aead_sufficient_data(struct sock *sk)
@@ -75,13 +75,13 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 	return af_alg_sendmsg(sock, msg, size, ivsize);
 }
 
-static int crypto_aead_copy_sgl(struct crypto_skcipher *null_tfm,
+static int crypto_aead_copy_sgl(struct crypto_sync_skcipher *null_tfm,
 				struct scatterlist *src,
 				struct scatterlist *dst, unsigned int len)
 {
-	SKCIPHER_REQUEST_ON_STACK(skreq, null_tfm);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(skreq, null_tfm);
 
-	skcipher_request_set_tfm(skreq, null_tfm);
+	skcipher_request_set_sync_tfm(skreq, null_tfm);
 	skcipher_request_set_callback(skreq, CRYPTO_TFM_REQ_MAY_BACKLOG,
 				      NULL, NULL);
 	skcipher_request_set_crypt(skreq, src, dst, len, NULL);
@@ -99,7 +99,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	struct af_alg_ctx *ctx = ask->private;
 	struct aead_tfm *aeadc = pask->private;
 	struct crypto_aead *tfm = aeadc->aead;
-	struct crypto_skcipher *null_tfm = aeadc->null_tfm;
+	struct crypto_sync_skcipher *null_tfm = aeadc->null_tfm;
 	unsigned int i, as = crypto_aead_authsize(tfm);
 	struct af_alg_async_req *areq;
 	struct af_alg_tsgl *tsgl, *tmp;
@@ -478,7 +478,7 @@ static void *aead_bind(const char *name, u32 type, u32 mask)
 {
 	struct aead_tfm *tfm;
 	struct crypto_aead *aead;
-	struct crypto_skcipher *null_tfm;
+	struct crypto_sync_skcipher *null_tfm;
 
 	tfm = kzalloc(sizeof(*tfm), GFP_KERNEL);
 	if (!tfm)
diff --git a/crypto/authenc.c b/crypto/authenc.c
index 4fa8d40d947b..37f54d1b2f66 100644
--- a/crypto/authenc.c
+++ b/crypto/authenc.c
@@ -33,7 +33,7 @@ struct authenc_instance_ctx {
 struct crypto_authenc_ctx {
 	struct crypto_ahash *auth;
 	struct crypto_skcipher *enc;
-	struct crypto_skcipher *null;
+	struct crypto_sync_skcipher *null;
 };
 
 struct authenc_request_ctx {
@@ -185,9 +185,9 @@ static int crypto_authenc_copy_assoc(struct aead_request *req)
 {
 	struct crypto_aead *authenc = crypto_aead_reqtfm(req);
 	struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
-	SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null);
 
-	skcipher_request_set_tfm(skreq, ctx->null);
+	skcipher_request_set_sync_tfm(skreq, ctx->null);
 	skcipher_request_set_callback(skreq, aead_request_flags(req),
 				      NULL, NULL);
 	skcipher_request_set_crypt(skreq, req->src, req->dst, req->assoclen,
@@ -318,7 +318,7 @@ static int crypto_authenc_init_tfm(struct crypto_aead *tfm)
 	struct crypto_authenc_ctx *ctx = crypto_aead_ctx(tfm);
 	struct crypto_ahash *auth;
 	struct crypto_skcipher *enc;
-	struct crypto_skcipher *null;
+	struct crypto_sync_skcipher *null;
 	int err;
 
 	auth = crypto_spawn_ahash(&ictx->auth);
diff --git a/crypto/authencesn.c b/crypto/authencesn.c
index 50b804747e20..80a25cc04aec 100644
--- a/crypto/authencesn.c
+++ b/crypto/authencesn.c
@@ -36,7 +36,7 @@ struct crypto_authenc_esn_ctx {
 	unsigned int reqoff;
 	struct crypto_ahash *auth;
 	struct crypto_skcipher *enc;
-	struct crypto_skcipher *null;
+	struct crypto_sync_skcipher *null;
 };
 
 struct authenc_esn_request_ctx {
@@ -183,9 +183,9 @@ static int crypto_authenc_esn_copy(struct aead_request *req, unsigned int len)
 {
 	struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
 	struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
-	SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null);
 
-	skcipher_request_set_tfm(skreq, ctx->null);
+	skcipher_request_set_sync_tfm(skreq, ctx->null);
 	skcipher_request_set_callback(skreq, aead_request_flags(req),
 				      NULL, NULL);
 	skcipher_request_set_crypt(skreq, req->src, req->dst, len, NULL);
@@ -341,7 +341,7 @@ static int crypto_authenc_esn_init_tfm(struct crypto_aead *tfm)
 	struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(tfm);
 	struct crypto_ahash *auth;
 	struct crypto_skcipher *enc;
-	struct crypto_skcipher *null;
+	struct crypto_sync_skcipher *null;
 	int err;
 
 	auth = crypto_spawn_ahash(&ictx->auth);
diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c
index 0959b268966c..0bae59922a80 100644
--- a/crypto/crypto_null.c
+++ b/crypto/crypto_null.c
@@ -26,7 +26,7 @@
 #include <linux/string.h>
 
 static DEFINE_MUTEX(crypto_default_null_skcipher_lock);
-static struct crypto_skcipher *crypto_default_null_skcipher;
+static struct crypto_sync_skcipher *crypto_default_null_skcipher;
 static int crypto_default_null_skcipher_refcnt;
 
 static int null_compress(struct crypto_tfm *tfm, const u8 *src,
@@ -152,16 +152,15 @@ MODULE_ALIAS_CRYPTO("compress_null");
 MODULE_ALIAS_CRYPTO("digest_null");
 MODULE_ALIAS_CRYPTO("cipher_null");
 
-struct crypto_skcipher *crypto_get_default_null_skcipher(void)
+struct crypto_sync_skcipher *crypto_get_default_null_skcipher(void)
 {
-	struct crypto_skcipher *tfm;
+	struct crypto_sync_skcipher *tfm;
 
 	mutex_lock(&crypto_default_null_skcipher_lock);
 	tfm = crypto_default_null_skcipher;
 
 	if (!tfm) {
-		tfm = crypto_alloc_skcipher("ecb(cipher_null)",
-					    0, CRYPTO_ALG_ASYNC);
+		tfm = crypto_alloc_sync_skcipher("ecb(cipher_null)", 0, 0);
 		if (IS_ERR(tfm))
 			goto unlock;
 
@@ -181,7 +180,7 @@ void crypto_put_default_null_skcipher(void)
 {
 	mutex_lock(&crypto_default_null_skcipher_lock);
 	if (!--crypto_default_null_skcipher_refcnt) {
-		crypto_free_skcipher(crypto_default_null_skcipher);
+		crypto_free_sync_skcipher(crypto_default_null_skcipher);
 		crypto_default_null_skcipher = NULL;
 	}
 	mutex_unlock(&crypto_default_null_skcipher_lock);
diff --git a/crypto/echainiv.c b/crypto/echainiv.c
index 45819e6015bf..77e607fdbfb7 100644
--- a/crypto/echainiv.c
+++ b/crypto/echainiv.c
@@ -47,9 +47,9 @@ static int echainiv_encrypt(struct aead_request *req)
 	info = req->iv;
 
 	if (req->src != req->dst) {
-		SKCIPHER_REQUEST_ON_STACK(nreq, ctx->sknull);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(nreq, ctx->sknull);
 
-		skcipher_request_set_tfm(nreq, ctx->sknull);
+		skcipher_request_set_sync_tfm(nreq, ctx->sknull);
 		skcipher_request_set_callback(nreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(nreq, req->src, req->dst,
diff --git a/crypto/gcm.c b/crypto/gcm.c
index 0ad879e1f9b2..e438492db2ca 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -50,7 +50,7 @@ struct crypto_rfc4543_instance_ctx {
 
 struct crypto_rfc4543_ctx {
 	struct crypto_aead *child;
-	struct crypto_skcipher *null;
+	struct crypto_sync_skcipher *null;
 	u8 nonce[4];
 };
 
@@ -1067,9 +1067,9 @@ static int crypto_rfc4543_copy_src_to_dst(struct aead_request *req, bool enc)
 	unsigned int authsize = crypto_aead_authsize(aead);
 	unsigned int nbytes = req->assoclen + req->cryptlen -
 			      (enc ? 0 : authsize);
-	SKCIPHER_REQUEST_ON_STACK(nreq, ctx->null);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(nreq, ctx->null);
 
-	skcipher_request_set_tfm(nreq, ctx->null);
+	skcipher_request_set_sync_tfm(nreq, ctx->null);
 	skcipher_request_set_callback(nreq, req->base.flags, NULL, NULL);
 	skcipher_request_set_crypt(nreq, req->src, req->dst, nbytes, NULL);
 
@@ -1093,7 +1093,7 @@ static int crypto_rfc4543_init_tfm(struct crypto_aead *tfm)
 	struct crypto_aead_spawn *spawn = &ictx->aead;
 	struct crypto_rfc4543_ctx *ctx = crypto_aead_ctx(tfm);
 	struct crypto_aead *aead;
-	struct crypto_skcipher *null;
+	struct crypto_sync_skcipher *null;
 	unsigned long align;
 	int err = 0;
 
diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index 39dbf2f7e5f5..64a412be255e 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -73,9 +73,9 @@ static int seqiv_aead_encrypt(struct aead_request *req)
 	info = req->iv;
 
 	if (req->src != req->dst) {
-		SKCIPHER_REQUEST_ON_STACK(nreq, ctx->sknull);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(nreq, ctx->sknull);
 
-		skcipher_request_set_tfm(nreq, ctx->sknull);
+		skcipher_request_set_sync_tfm(nreq, ctx->sknull);
 		skcipher_request_set_callback(nreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(nreq, req->src, req->dst,
diff --git a/include/crypto/internal/geniv.h b/include/crypto/internal/geniv.h
index 2bcfb931bc5b..71be24cd59bd 100644
--- a/include/crypto/internal/geniv.h
+++ b/include/crypto/internal/geniv.h
@@ -20,7 +20,7 @@
 struct aead_geniv_ctx {
 	spinlock_t lock;
 	struct crypto_aead *child;
-	struct crypto_skcipher *sknull;
+	struct crypto_sync_skcipher *sknull;
 	u8 salt[] __attribute__ ((aligned(__alignof__(u32))));
 };
 
diff --git a/include/crypto/null.h b/include/crypto/null.h
index 15aeef6e30ef..0ef577cc00e3 100644
--- a/include/crypto/null.h
+++ b/include/crypto/null.h
@@ -9,7 +9,7 @@
 #define NULL_DIGEST_SIZE	0
 #define NULL_IV_SIZE		0
 
-struct crypto_skcipher *crypto_get_default_null_skcipher(void);
+struct crypto_sync_skcipher *crypto_get_default_null_skcipher(void);
 void crypto_put_default_null_skcipher(void);
 
 #endif
-- 
2.17.1


* [PATCH crypto-next 15/23] crypto: cryptd - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (13 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 14/23] crypto: null " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 16/23] crypto: sahara " Kees Cook
                   ` (9 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
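
The conversion below relies on struct crypto_sync_skcipher being a
wrapper whose first (and only) member is the base struct
crypto_skcipher; that is why casting the freshly allocated child into
the sync type, and later handing back &ctx->child->base, is
layout-safe. A minimal sketch of that relationship, with hypothetical
helper names:

#include <linux/kernel.h>
#include <crypto/skcipher.h>

/* Hypothetical helpers, for illustration only. */
static inline struct crypto_sync_skcipher *
example_to_sync(struct crypto_skcipher *tfm)
{
	/* Layout-safe only because 'base' is the first member; the
	 * caller must guarantee tfm is really synchronous. */
	return container_of(tfm, struct crypto_sync_skcipher, base);
}

static inline struct crypto_skcipher *
example_to_base(struct crypto_sync_skcipher *tfm)
{
	return &tfm->base;
}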

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/cryptd.c | 32 +++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index addca7bae33f..7118fb5efbaa 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -76,7 +76,7 @@ struct cryptd_blkcipher_request_ctx {
 
 struct cryptd_skcipher_ctx {
 	atomic_t refcnt;
-	struct crypto_skcipher *child;
+	struct crypto_sync_skcipher *child;
 };
 
 struct cryptd_skcipher_request_ctx {
@@ -449,14 +449,16 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent,
 				  const u8 *key, unsigned int keylen)
 {
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(parent);
-	struct crypto_skcipher *child = ctx->child;
+	struct crypto_sync_skcipher *child = ctx->child;
 	int err;
 
-	crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(parent) &
+	crypto_sync_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(child,
+				       crypto_skcipher_get_flags(parent) &
 					 CRYPTO_TFM_REQ_MASK);
-	err = crypto_skcipher_setkey(child, key, keylen);
-	crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) &
+	err = crypto_sync_skcipher_setkey(child, key, keylen);
+	crypto_skcipher_set_flags(parent,
+				  crypto_sync_skcipher_get_flags(child) &
 					  CRYPTO_TFM_RES_MASK);
 	return err;
 }
@@ -483,13 +485,13 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base,
 	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_skcipher *child = ctx->child;
-	SKCIPHER_REQUEST_ON_STACK(subreq, child);
+	struct crypto_sync_skcipher *child = ctx->child;
+	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
 
 	if (unlikely(err == -EINPROGRESS))
 		goto out;
 
-	skcipher_request_set_tfm(subreq, child);
+	skcipher_request_set_sync_tfm(subreq, child);
 	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
 				      NULL, NULL);
 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
@@ -511,13 +513,13 @@ static void cryptd_skcipher_decrypt(struct crypto_async_request *base,
 	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_skcipher *child = ctx->child;
-	SKCIPHER_REQUEST_ON_STACK(subreq, child);
+	struct crypto_sync_skcipher *child = ctx->child;
+	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
 
 	if (unlikely(err == -EINPROGRESS))
 		goto out;
 
-	skcipher_request_set_tfm(subreq, child);
+	skcipher_request_set_sync_tfm(subreq, child);
 	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
 				      NULL, NULL);
 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
@@ -568,7 +570,7 @@ static int cryptd_skcipher_init_tfm(struct crypto_skcipher *tfm)
 	if (IS_ERR(cipher))
 		return PTR_ERR(cipher);
 
-	ctx->child = cipher;
+	ctx->child = (struct crypto_sync_skcipher *)cipher;
 	crypto_skcipher_set_reqsize(
 		tfm, sizeof(struct cryptd_skcipher_request_ctx));
 	return 0;
@@ -578,7 +580,7 @@ static void cryptd_skcipher_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	crypto_free_skcipher(ctx->child);
+	crypto_free_sync_skcipher(ctx->child);
 }
 
 static void cryptd_skcipher_free(struct skcipher_instance *inst)
@@ -1243,7 +1245,7 @@ struct crypto_skcipher *cryptd_skcipher_child(struct cryptd_skcipher *tfm)
 {
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(&tfm->base);
 
-	return ctx->child;
+	return &ctx->child->base;
 }
 EXPORT_SYMBOL_GPL(cryptd_skcipher_child);
 
-- 
2.17.1


* [PATCH crypto-next 16/23] crypto: sahara - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (14 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 15/23] crypto: cryptd " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 17/23] crypto: qce " Kees Cook
                   ` (8 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/sahara.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
index e7540a5b8197..bbf166a97ad3 100644
--- a/drivers/crypto/sahara.c
+++ b/drivers/crypto/sahara.c
@@ -149,7 +149,7 @@ struct sahara_ctx {
 	/* AES-specific context */
 	int keylen;
 	u8 key[AES_KEYSIZE_128];
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 };
 
 struct sahara_aes_reqctx {
@@ -621,14 +621,14 @@ static int sahara_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 	/*
 	 * The requested key size is not supported by HW, do a fallback.
 	 */
-	crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
+	crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
 						 CRYPTO_TFM_REQ_MASK);
 
-	ret = crypto_skcipher_setkey(ctx->fallback, key, keylen);
+	ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
 
 	tfm->base.crt_flags &= ~CRYPTO_TFM_RES_MASK;
-	tfm->base.crt_flags |= crypto_skcipher_get_flags(ctx->fallback) &
+	tfm->base.crt_flags |= crypto_sync_skcipher_get_flags(ctx->fallback) &
 			       CRYPTO_TFM_RES_MASK;
 	return ret;
 }
@@ -666,9 +666,9 @@ static int sahara_aes_ecb_encrypt(struct ablkcipher_request *req)
 	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -688,9 +688,9 @@ static int sahara_aes_ecb_decrypt(struct ablkcipher_request *req)
 	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -710,9 +710,9 @@ static int sahara_aes_cbc_encrypt(struct ablkcipher_request *req)
 	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -732,9 +732,9 @@ static int sahara_aes_cbc_decrypt(struct ablkcipher_request *req)
 	int err;
 
 	if (unlikely(ctx->keylen != AES_KEYSIZE_128)) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -752,8 +752,7 @@ static int sahara_aes_cra_init(struct crypto_tfm *tfm)
 	const char *name = crypto_tfm_alg_name(tfm);
 	struct sahara_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	ctx->fallback = crypto_alloc_skcipher(name, 0,
-					      CRYPTO_ALG_ASYNC |
+	ctx->fallback = crypto_alloc_sync_skcipher(name, 0,
 					      CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ctx->fallback)) {
 		pr_err("Error allocating fallback algo %s\n", name);
@@ -769,7 +768,7 @@ static void sahara_aes_cra_exit(struct crypto_tfm *tfm)
 {
 	struct sahara_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	crypto_free_skcipher(ctx->fallback);
+	crypto_free_sync_skcipher(ctx->fallback);
 }
 
 static u32 sahara_sha_init_hdr(struct sahara_dev *dev,
-- 
2.17.1


* [PATCH crypto-next 17/23] crypto: qce - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (15 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 16/23] crypto: sahara " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 18/23] crypto: artpec6 " Kees Cook
                   ` (7 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Himanshu Jha, Ard Biesheuvel, Eric Biggers,
	linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Himanshu Jha <himanshujha199640@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/qce/ablkcipher.c | 13 ++++++-------
 drivers/crypto/qce/cipher.h     |  2 +-
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/qce/ablkcipher.c b/drivers/crypto/qce/ablkcipher.c
index ea4d96bf47e8..585e1cab9ae3 100644
--- a/drivers/crypto/qce/ablkcipher.c
+++ b/drivers/crypto/qce/ablkcipher.c
@@ -189,7 +189,7 @@ static int qce_ablkcipher_setkey(struct crypto_ablkcipher *ablk, const u8 *key,
 	memcpy(ctx->enc_key, key, keylen);
 	return 0;
 fallback:
-	ret = crypto_skcipher_setkey(ctx->fallback, key, keylen);
+	ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
 	if (!ret)
 		ctx->enc_keylen = keylen;
 	return ret;
@@ -212,9 +212,9 @@ static int qce_ablkcipher_crypt(struct ablkcipher_request *req, int encrypt)
 
 	if (IS_AES(rctx->flags) && ctx->enc_keylen != AES_KEYSIZE_128 &&
 	    ctx->enc_keylen != AES_KEYSIZE_256) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      NULL, NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -245,9 +245,8 @@ static int qce_ablkcipher_init(struct crypto_tfm *tfm)
 	memset(ctx, 0, sizeof(*ctx));
 	tfm->crt_ablkcipher.reqsize = sizeof(struct qce_cipher_reqctx);
 
-	ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(tfm), 0,
-					      CRYPTO_ALG_ASYNC |
-					      CRYPTO_ALG_NEED_FALLBACK);
+	ctx->fallback = crypto_alloc_sync_skcipher(crypto_tfm_alg_name(tfm),
+						   0, CRYPTO_ALG_NEED_FALLBACK);
 	return PTR_ERR_OR_ZERO(ctx->fallback);
 }
 
@@ -255,7 +254,7 @@ static void qce_ablkcipher_exit(struct crypto_tfm *tfm)
 {
 	struct qce_cipher_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	crypto_free_skcipher(ctx->fallback);
+	crypto_free_sync_skcipher(ctx->fallback);
 }
 
 struct qce_ablkcipher_def {
diff --git a/drivers/crypto/qce/cipher.h b/drivers/crypto/qce/cipher.h
index 2b0278bb6e92..ee055bfe98a0 100644
--- a/drivers/crypto/qce/cipher.h
+++ b/drivers/crypto/qce/cipher.h
@@ -22,7 +22,7 @@
 struct qce_cipher_ctx {
 	u8 enc_key[QCE_MAX_KEY_SIZE];
 	unsigned int enc_keylen;
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 };
 
 /**
-- 
2.17.1


* [PATCH crypto-next 18/23] crypto: artpec6 - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (16 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 17/23] crypto: qce " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-23 12:13   ` Lars Persson
  2018-09-19  2:10 ` [PATCH crypto-next 19/23] crypto: chelsio " Kees Cook
                   ` (6 subsequent siblings)
  24 siblings, 1 reply; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Jesper Nilsson, Lars Persson, linux-arm-kernel,
	Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Lars Persson <lars.persson@axis.com>
Cc: linux-arm-kernel@axis.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/axis/artpec6_crypto.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/axis/artpec6_crypto.c b/drivers/crypto/axis/artpec6_crypto.c
index 7f07a5085e9b..e5a080e87ea8 100644
--- a/drivers/crypto/axis/artpec6_crypto.c
+++ b/drivers/crypto/axis/artpec6_crypto.c
@@ -330,7 +330,7 @@ struct artpec6_cryptotfm_context {
 	size_t key_length;
 	u32 key_md;
 	int crypto_type;
-	struct crypto_skcipher *fallback;
+	struct crypto_sync_skcipher *fallback;
 };
 
 struct artpec6_crypto_aead_hw_ctx {
@@ -1199,15 +1199,15 @@ artpec6_crypto_ctr_crypt(struct skcipher_request *req, bool encrypt)
 		pr_debug("counter %x will overflow (nblks %u), falling back\n",
 			 counter, counter + nblks);
 
-		ret = crypto_skcipher_setkey(ctx->fallback, ctx->aes_key,
-					     ctx->key_length);
+		ret = crypto_sync_skcipher_setkey(ctx->fallback, ctx->aes_key,
+						  ctx->key_length);
 		if (ret)
 			return ret;
 
 		{
-			SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+			SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-			skcipher_request_set_tfm(subreq, ctx->fallback);
+			skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 			skcipher_request_set_callback(subreq, req->base.flags,
 						      NULL, NULL);
 			skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -1561,10 +1561,9 @@ static int artpec6_crypto_aes_ctr_init(struct crypto_skcipher *tfm)
 {
 	struct artpec6_cryptotfm_context *ctx = crypto_skcipher_ctx(tfm);
 
-	ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(&tfm->base),
-					      0,
-					      CRYPTO_ALG_ASYNC |
-					      CRYPTO_ALG_NEED_FALLBACK);
+	ctx->fallback =
+		crypto_alloc_sync_skcipher(crypto_tfm_alg_name(&tfm->base),
+					   0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ctx->fallback))
 		return PTR_ERR(ctx->fallback);
 
@@ -1605,7 +1604,7 @@ static void artpec6_crypto_aes_ctr_exit(struct crypto_skcipher *tfm)
 {
 	struct artpec6_cryptotfm_context *ctx = crypto_skcipher_ctx(tfm);
 
-	crypto_free_skcipher(ctx->fallback);
+	crypto_free_sync_skcipher(ctx->fallback);
 	artpec6_crypto_aes_exit(tfm);
 }
 
-- 
2.17.1


* [PATCH crypto-next 19/23] crypto: chelsio - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (17 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 18/23] crypto: artpec6 " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 20/23] crypto: mxs-dcp " Kees Cook
                   ` (5 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Harsh Jain, Ard Biesheuvel, Eric Biggers,
	linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/chelsio/chcr_algo.c   | 27 ++++++++++++++-------------
 drivers/crypto/chelsio/chcr_crypto.h |  2 +-
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 5c539af8ed60..dfc3a10bb55b 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -671,7 +671,7 @@ static int chcr_sg_ent_in_wr(struct scatterlist *src,
 	return min(srclen, dstlen);
 }
 
-static int chcr_cipher_fallback(struct crypto_skcipher *cipher,
+static int chcr_cipher_fallback(struct crypto_sync_skcipher *cipher,
 				u32 flags,
 				struct scatterlist *src,
 				struct scatterlist *dst,
@@ -681,9 +681,9 @@ static int chcr_cipher_fallback(struct crypto_skcipher *cipher,
 {
 	int err;
 
-	SKCIPHER_REQUEST_ON_STACK(subreq, cipher);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, cipher);
 
-	skcipher_request_set_tfm(subreq, cipher);
+	skcipher_request_set_sync_tfm(subreq, cipher);
 	skcipher_request_set_callback(subreq, flags, NULL, NULL);
 	skcipher_request_set_crypt(subreq, src, dst,
 				   nbytes, iv);
@@ -854,13 +854,14 @@ static int chcr_cipher_fallback_setkey(struct crypto_ablkcipher *cipher,
 	struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher));
 	int err = 0;
 
-	crypto_skcipher_clear_flags(ablkctx->sw_cipher, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(ablkctx->sw_cipher, cipher->base.crt_flags &
-				  CRYPTO_TFM_REQ_MASK);
-	err = crypto_skcipher_setkey(ablkctx->sw_cipher, key, keylen);
+	crypto_sync_skcipher_clear_flags(ablkctx->sw_cipher,
+				CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(ablkctx->sw_cipher,
+				cipher->base.crt_flags & CRYPTO_TFM_REQ_MASK);
+	err = crypto_sync_skcipher_setkey(ablkctx->sw_cipher, key, keylen);
 	tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
 	tfm->crt_flags |=
-		crypto_skcipher_get_flags(ablkctx->sw_cipher) &
+		crypto_sync_skcipher_get_flags(ablkctx->sw_cipher) &
 		CRYPTO_TFM_RES_MASK;
 	return err;
 }
@@ -1360,8 +1361,8 @@ static int chcr_cra_init(struct crypto_tfm *tfm)
 	struct chcr_context *ctx = crypto_tfm_ctx(tfm);
 	struct ablk_ctx *ablkctx = ABLK_CTX(ctx);
 
-	ablkctx->sw_cipher = crypto_alloc_skcipher(alg->cra_name, 0,
-				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+	ablkctx->sw_cipher = crypto_alloc_sync_skcipher(alg->cra_name, 0,
+				CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ablkctx->sw_cipher)) {
 		pr_err("failed to allocate fallback for %s\n", alg->cra_name);
 		return PTR_ERR(ablkctx->sw_cipher);
@@ -1390,8 +1391,8 @@ static int chcr_rfc3686_init(struct crypto_tfm *tfm)
 	/*RFC3686 initialises IV counter value to 1, rfc3686(ctr(aes))
 	 * cannot be used as fallback in chcr_handle_cipher_response
 	 */
-	ablkctx->sw_cipher = crypto_alloc_skcipher("ctr(aes)", 0,
-				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+	ablkctx->sw_cipher = crypto_alloc_sync_skcipher("ctr(aes)", 0,
+				CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ablkctx->sw_cipher)) {
 		pr_err("failed to allocate fallback for %s\n", alg->cra_name);
 		return PTR_ERR(ablkctx->sw_cipher);
@@ -1406,7 +1407,7 @@ static void chcr_cra_exit(struct crypto_tfm *tfm)
 	struct chcr_context *ctx = crypto_tfm_ctx(tfm);
 	struct ablk_ctx *ablkctx = ABLK_CTX(ctx);
 
-	crypto_free_skcipher(ablkctx->sw_cipher);
+	crypto_free_sync_skcipher(ablkctx->sw_cipher);
 	if (ablkctx->aes_generic)
 		crypto_free_cipher(ablkctx->aes_generic);
 }
diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
index 54835cb109e5..e26b72cfe4b6 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -170,7 +170,7 @@ static inline struct chcr_context *h_ctx(struct crypto_ahash *tfm)
 }
 
 struct ablk_ctx {
-	struct crypto_skcipher *sw_cipher;
+	struct crypto_sync_skcipher *sw_cipher;
 	struct crypto_cipher *aes_generic;
 	__be32 key_ctx_hdr;
 	unsigned int enckey_len;
-- 
2.17.1


* [PATCH crypto-next 20/23] crypto: mxs-dcp - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (18 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 19/23] crypto: chelsio " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 21/23] crypto: omap-aes " Kees Cook
                   ` (4 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/mxs-dcp.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
index a10c418d4e5c..430174be6f92 100644
--- a/drivers/crypto/mxs-dcp.c
+++ b/drivers/crypto/mxs-dcp.c
@@ -84,7 +84,7 @@ struct dcp_async_ctx {
 	unsigned int			hot:1;
 
 	/* Crypto-specific context */
-	struct crypto_skcipher		*fallback;
+	struct crypto_sync_skcipher	*fallback;
 	unsigned int			key_len;
 	uint8_t				key[AES_KEYSIZE_128];
 };
@@ -376,10 +376,10 @@ static int mxs_dcp_block_fallback(struct ablkcipher_request *req, int enc)
 {
 	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
 	struct dcp_async_ctx *ctx = crypto_ablkcipher_ctx(tfm);
-	SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 	int ret;
 
-	skcipher_request_set_tfm(subreq, ctx->fallback);
+	skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 	skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL);
 	skcipher_request_set_crypt(subreq, req->src, req->dst,
 				   req->nbytes, req->info);
@@ -460,16 +460,16 @@ static int mxs_dcp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 	 * but is supported by in-kernel software implementation, we use
 	 * software fallback.
 	 */
-	crypto_skcipher_clear_flags(actx->fallback, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(actx->fallback,
+	crypto_sync_skcipher_clear_flags(actx->fallback, CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(actx->fallback,
 				  tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK);
 
-	ret = crypto_skcipher_setkey(actx->fallback, key, len);
+	ret = crypto_sync_skcipher_setkey(actx->fallback, key, len);
 	if (!ret)
 		return 0;
 
 	tfm->base.crt_flags &= ~CRYPTO_TFM_RES_MASK;
-	tfm->base.crt_flags |= crypto_skcipher_get_flags(actx->fallback) &
+	tfm->base.crt_flags |= crypto_sync_skcipher_get_flags(actx->fallback) &
 			       CRYPTO_TFM_RES_MASK;
 
 	return ret;
@@ -478,11 +478,10 @@ static int mxs_dcp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 static int mxs_dcp_aes_fallback_init(struct crypto_tfm *tfm)
 {
 	const char *name = crypto_tfm_alg_name(tfm);
-	const uint32_t flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK;
 	struct dcp_async_ctx *actx = crypto_tfm_ctx(tfm);
-	struct crypto_skcipher *blk;
+	struct crypto_sync_skcipher *blk;
 
-	blk = crypto_alloc_skcipher(name, 0, flags);
+	blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(blk))
 		return PTR_ERR(blk);
 
@@ -495,7 +494,7 @@ static void mxs_dcp_aes_fallback_exit(struct crypto_tfm *tfm)
 {
 	struct dcp_async_ctx *actx = crypto_tfm_ctx(tfm);
 
-	crypto_free_skcipher(actx->fallback);
+	crypto_free_sync_skcipher(actx->fallback);
 }
 
 /*
-- 
2.17.1


* [PATCH crypto-next 21/23] crypto: omap-aes - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (19 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 20/23] crypto: mxs-dcp " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:10 ` [PATCH crypto-next 22/23] crypto: picoxcell " Kees Cook
                   ` (3 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/omap-aes.c | 17 ++++++++---------
 drivers/crypto/omap-aes.h |  2 +-
 2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index 9019f6b67986..a553ffddb11b 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -522,9 +522,9 @@ static int omap_aes_crypt(struct ablkcipher_request *req, unsigned long mode)
 		  !!(mode & FLAGS_CBC));
 
 	if (req->nbytes < aes_fallback_sz) {
-		SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
+		SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
 
-		skcipher_request_set_tfm(subreq, ctx->fallback);
+		skcipher_request_set_sync_tfm(subreq, ctx->fallback);
 		skcipher_request_set_callback(subreq, req->base.flags, NULL,
 					      NULL);
 		skcipher_request_set_crypt(subreq, req->src, req->dst,
@@ -564,11 +564,11 @@ static int omap_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
 	memcpy(ctx->key, key, keylen);
 	ctx->keylen = keylen;
 
-	crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
-	crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
+	crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK);
+	crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags &
 						 CRYPTO_TFM_REQ_MASK);
 
-	ret = crypto_skcipher_setkey(ctx->fallback, key, keylen);
+	ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
 	if (!ret)
 		return 0;
 
@@ -613,11 +613,10 @@ static int omap_aes_crypt_req(struct crypto_engine *engine,
 static int omap_aes_cra_init(struct crypto_tfm *tfm)
 {
 	const char *name = crypto_tfm_alg_name(tfm);
-	const u32 flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK;
 	struct omap_aes_ctx *ctx = crypto_tfm_ctx(tfm);
-	struct crypto_skcipher *blk;
+	struct crypto_sync_skcipher *blk;
 
-	blk = crypto_alloc_skcipher(name, 0, flags);
+	blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(blk))
 		return PTR_ERR(blk);
 
@@ -667,7 +666,7 @@ static void omap_aes_cra_exit(struct crypto_tfm *tfm)
 	struct omap_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	if (ctx->fallback)
-		crypto_free_skcipher(ctx->fallback);
+		crypto_free_sync_skcipher(ctx->fallback);
 
 	ctx->fallback = NULL;
 }
diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h
index fc3b46a85809..7e02920ef6f8 100644
--- a/drivers/crypto/omap-aes.h
+++ b/drivers/crypto/omap-aes.h
@@ -101,7 +101,7 @@ struct omap_aes_ctx {
 	int		keylen;
 	u32		key[AES_KEYSIZE_256 / sizeof(u32)];
 	u8		nonce[4];
-	struct crypto_skcipher	*fallback;
+	struct crypto_sync_skcipher	*fallback;
 	struct crypto_skcipher	*ctr;
 };
 
-- 
2.17.1


* [PATCH crypto-next 22/23] crypto: picoxcell - Remove VLA usage of skcipher
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (20 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 21/23] crypto: omap-aes " Kees Cook
@ 2018-09-19  2:10 ` Kees Cook
  2018-09-19  2:11 ` [PATCH crypto-next 23/23] crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK() Kees Cook
                   ` (2 subsequent siblings)
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:10 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Jamie Iles, linux-arm-kernel, Ard Biesheuvel,
	Eric Biggers, linux-crypto, Linux Kernel Mailing List

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Cc: Jamie Iles <jamie@jamieiles.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/picoxcell_crypto.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c
index 321d5e2ac833..a28f1d18fe01 100644
--- a/drivers/crypto/picoxcell_crypto.c
+++ b/drivers/crypto/picoxcell_crypto.c
@@ -171,7 +171,7 @@ struct spacc_ablk_ctx {
 	 * The fallback cipher. If the operation can't be done in hardware,
 	 * fallback to a software version.
 	 */
-	struct crypto_skcipher		*sw_cipher;
+	struct crypto_sync_skcipher	*sw_cipher;
 };
 
 /* AEAD cipher context. */
@@ -799,17 +799,17 @@ static int spacc_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key,
 		 * Set the fallback transform to use the same request flags as
 		 * the hardware transform.
 		 */
-		crypto_skcipher_clear_flags(ctx->sw_cipher,
+		crypto_sync_skcipher_clear_flags(ctx->sw_cipher,
 					    CRYPTO_TFM_REQ_MASK);
-		crypto_skcipher_set_flags(ctx->sw_cipher,
+		crypto_sync_skcipher_set_flags(ctx->sw_cipher,
 					  cipher->base.crt_flags &
 					  CRYPTO_TFM_REQ_MASK);
 
-		err = crypto_skcipher_setkey(ctx->sw_cipher, key, len);
+		err = crypto_sync_skcipher_setkey(ctx->sw_cipher, key, len);
 
 		tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
 		tfm->crt_flags |=
-			crypto_skcipher_get_flags(ctx->sw_cipher) &
+			crypto_sync_skcipher_get_flags(ctx->sw_cipher) &
 			CRYPTO_TFM_RES_MASK;
 
 		if (err)
@@ -914,7 +914,7 @@ static int spacc_ablk_do_fallback(struct ablkcipher_request *req,
 	struct crypto_tfm *old_tfm =
 	    crypto_ablkcipher_tfm(crypto_ablkcipher_reqtfm(req));
 	struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(old_tfm);
-	SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher);
+	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher);
 	int err;
 
 	/*
@@ -922,7 +922,7 @@ static int spacc_ablk_do_fallback(struct ablkcipher_request *req,
 	 * the ciphering has completed, put the old transform back into the
 	 * request.
 	 */
-	skcipher_request_set_tfm(subreq, ctx->sw_cipher);
+	skcipher_request_set_sync_tfm(subreq, ctx->sw_cipher);
 	skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL);
 	skcipher_request_set_crypt(subreq, req->src, req->dst,
 				   req->nbytes, req->info);
@@ -1020,9 +1020,8 @@ static int spacc_ablk_cra_init(struct crypto_tfm *tfm)
 	ctx->generic.flags = spacc_alg->type;
 	ctx->generic.engine = engine;
 	if (alg->cra_flags & CRYPTO_ALG_NEED_FALLBACK) {
-		ctx->sw_cipher = crypto_alloc_skcipher(
-			alg->cra_name, 0, CRYPTO_ALG_ASYNC |
-					  CRYPTO_ALG_NEED_FALLBACK);
+		ctx->sw_cipher = crypto_alloc_sync_skcipher(
+			alg->cra_name, 0, CRYPTO_ALG_NEED_FALLBACK);
 		if (IS_ERR(ctx->sw_cipher)) {
 			dev_warn(engine->dev, "failed to allocate fallback for %s\n",
 				 alg->cra_name);
@@ -1041,7 +1040,7 @@ static void spacc_ablk_cra_exit(struct crypto_tfm *tfm)
 {
 	struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	crypto_free_skcipher(ctx->sw_cipher);
+	crypto_free_sync_skcipher(ctx->sw_cipher);
 }
 
 static int spacc_ablk_encrypt(struct ablkcipher_request *req)
-- 
2.17.1


* [PATCH crypto-next 23/23] crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK()
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (21 preceding siblings ...)
  2018-09-19  2:10 ` [PATCH crypto-next 22/23] crypto: picoxcell " Kees Cook
@ 2018-09-19  2:11 ` Kees Cook
  2018-09-25  0:49 ` [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
  2018-09-28  5:08 ` Herbert Xu
  24 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-19  2:11 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

Now that all the users of the VLA-generating SKCIPHER_REQUEST_ON_STACK()
macro have been moved to SYNC_SKCIPHER_REQUEST_ON_STACK(), we can remove
the former.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/skcipher.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index d00ce90dc7da..45ae894fda32 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -156,11 +156,6 @@ struct skcipher_alg {
 			    ] CRYPTO_MINALIGN_ATTR; \
 	struct skcipher_request *name = (void *)__##name##_desc
 
-#define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
-	char __##name##_desc[sizeof(struct skcipher_request) + \
-		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
-	struct skcipher_request *name = (void *)__##name##_desc
-
 /**
  * DOC: Symmetric Key Cipher API
  *
-- 
2.17.1


* Re: [PATCH crypto-next 03/23] lib80211: Remove VLA usage of skcipher
  2018-09-19  2:10 ` [PATCH crypto-next 03/23] lib80211: " Kees Cook
@ 2018-09-19 20:37   ` Johannes Berg
  0 siblings, 0 replies; 42+ messages in thread
From: Johannes Berg @ 2018-09-19 20:37 UTC (permalink / raw)
  To: Kees Cook, Herbert Xu
  Cc: linux-wireless, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

On Tue, 2018-09-18 at 19:10 -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
> with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
> which uses a fixed stack size.
> 
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
> 

I know this lib80211 stuff landed on my plate as the maintainer (and the
others are probably more or less copies thereof), but honestly I don't
even have a device I could test this on, and quite possibly never had
one (after the old non-mac80211 b43 driver was killed).

So basically I'll just trust you to be doing the right thing, since you
probably did the same transformation on other code that is better
tested...

johannes


* Re: [PATCH crypto-next 11/23] wusb: Remove VLA usage of skcipher
  2018-09-19  2:10 ` [PATCH crypto-next 11/23] wusb: " Kees Cook
@ 2018-09-20 10:39   ` Greg Kroah-Hartman
  0 siblings, 0 replies; 42+ messages in thread
From: Greg Kroah-Hartman @ 2018-09-20 10:39 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Felipe Balbi, Johan Hovold, Gustavo A. R. Silva,
	linux-usb, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

On Tue, Sep 18, 2018 at 07:10:48PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
> with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
> which uses a fixed stack size.
> 
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
> 
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Felipe Balbi <felipe.balbi@linux.intel.com>
> Cc: Johan Hovold <johan@kernel.org>
> Cc: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
> Cc: linux-usb@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* Re: [PATCH crypto-next 18/23] crypto: artpec6 - Remove VLA usage of skcipher
  2018-09-19  2:10 ` [PATCH crypto-next 18/23] crypto: artpec6 " Kees Cook
@ 2018-09-23 12:13   ` Lars Persson
  0 siblings, 0 replies; 42+ messages in thread
From: Lars Persson @ 2018-09-23 12:13 UTC (permalink / raw)
  To: Kees Cook, Herbert Xu
  Cc: Jesper Nilsson, Lars Persson, linux-arm-kernel, Ard Biesheuvel,
	Eric Biggers, linux-crypto, Linux Kernel Mailing List



On 9/19/18 4:10 AM, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
> with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
> which uses a fixed stack size.
> 
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
> 
> Cc: Jesper Nilsson <jesper.nilsson@axis.com>
> Cc: Lars Persson <lars.persson@axis.com>
> Cc: linux-arm-kernel@axis.com
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>   drivers/crypto/axis/artpec6_crypto.c | 19 +++++++++----------
>   1 file changed, 9 insertions(+), 10 deletions(-)
> 

Acked-by: Lars Persson <lars.persson@axis.com>


* Re: [PATCH crypto-next 06/23] x86/fpu: Remove VLA usage of skcipher
  2018-09-19  2:10 ` [PATCH crypto-next 06/23] x86/fpu: " Kees Cook
@ 2018-09-24 11:45   ` Ard Biesheuvel
  2018-09-24 17:35     ` Kees Cook
  0 siblings, 1 reply; 42+ messages in thread
From: Ard Biesheuvel @ 2018-09-24 11:45 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, the arch/x86 maintainers, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
>
> In the quest to remove all stack VLA usage from the kernel[1], this
> replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
> with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
> which uses a fixed stack size.
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Cc: x86@kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>

Doing some archeology on this driver, I found that the FPU wrapper was
introduced to support combining the generic CTR, LRW, XTS and PCBC
chaining modes with the AES-NI core transform. In the meantime, CTR,
LRW and XTS support has been implemented natively, which leaves
pcbc-aes-aesni as the only remaining user of the fpu template.

Since there are no users of pcbc(aes) in the kernel, could we perhaps
just remove this driver and all the special handling we have for it in
aesni-intel_glue.c?

If not, or in case we prefer to defer that to the next release:

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

> ---
>  arch/x86/crypto/fpu.c | 30 ++++++++++++++++--------------
>  1 file changed, 16 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/crypto/fpu.c b/arch/x86/crypto/fpu.c
> index 406680476c52..be9b3766f241 100644
> --- a/arch/x86/crypto/fpu.c
> +++ b/arch/x86/crypto/fpu.c
> @@ -20,21 +20,23 @@
>  #include <asm/fpu/api.h>
>
>  struct crypto_fpu_ctx {
> -       struct crypto_skcipher *child;
> +       struct crypto_sync_skcipher *child;
>  };
>
>  static int crypto_fpu_setkey(struct crypto_skcipher *parent, const u8 *key,
>                              unsigned int keylen)
>  {
>         struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(parent);
> -       struct crypto_skcipher *child = ctx->child;
> +       struct crypto_sync_skcipher *child = ctx->child;
>         int err;
>
> -       crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
> -       crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(parent) &
> +       crypto_sync_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
> +       crypto_sync_skcipher_set_flags(child,
> +                                      crypto_skcipher_get_flags(parent) &
>                                          CRYPTO_TFM_REQ_MASK);
> -       err = crypto_skcipher_setkey(child, key, keylen);
> -       crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) &
> +       err = crypto_sync_skcipher_setkey(child, key, keylen);
> +       crypto_skcipher_set_flags(parent,
> +                                 crypto_sync_skcipher_get_flags(child) &
>                                           CRYPTO_TFM_RES_MASK);
>         return err;
>  }
> @@ -43,11 +45,11 @@ static int crypto_fpu_encrypt(struct skcipher_request *req)
>  {
>         struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>         struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm);
> -       struct crypto_skcipher *child = ctx->child;
> -       SKCIPHER_REQUEST_ON_STACK(subreq, child);
> +       struct crypto_sync_skcipher *child = ctx->child;
> +       SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
>         int err;
>
> -       skcipher_request_set_tfm(subreq, child);
> +       skcipher_request_set_sync_tfm(subreq, child);
>         skcipher_request_set_callback(subreq, 0, NULL, NULL);
>         skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
>                                    req->iv);
> @@ -64,11 +66,11 @@ static int crypto_fpu_decrypt(struct skcipher_request *req)
>  {
>         struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>         struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm);
> -       struct crypto_skcipher *child = ctx->child;
> -       SKCIPHER_REQUEST_ON_STACK(subreq, child);
> +       struct crypto_sync_skcipher *child = ctx->child;
> +       SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
>         int err;
>
> -       skcipher_request_set_tfm(subreq, child);
> +       skcipher_request_set_sync_tfm(subreq, child);
>         skcipher_request_set_callback(subreq, 0, NULL, NULL);
>         skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
>                                    req->iv);
> @@ -93,7 +95,7 @@ static int crypto_fpu_init_tfm(struct crypto_skcipher *tfm)
>         if (IS_ERR(cipher))
>                 return PTR_ERR(cipher);
>
> -       ctx->child = cipher;
> +       ctx->child = (struct crypto_sync_skcipher *)cipher;
>
>         return 0;
>  }
> @@ -102,7 +104,7 @@ static void crypto_fpu_exit_tfm(struct crypto_skcipher *tfm)
>  {
>         struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> -       crypto_free_skcipher(ctx->child);
> +       crypto_free_sync_skcipher(ctx->child);
>  }
>
>  static void crypto_fpu_free(struct skcipher_instance *inst)
> --
> 2.17.1
>


* Re: [PATCH crypto-next 01/23] crypto: skcipher - Introduce crypto_sync_skcipher
  2018-09-19  2:10 ` [PATCH crypto-next 01/23] crypto: skcipher - Introduce crypto_sync_skcipher Kees Cook
@ 2018-09-24 11:48   ` Ard Biesheuvel
  0 siblings, 0 replies; 42+ messages in thread
From: Ard Biesheuvel @ 2018-09-24 11:48 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
>
> In preparation for removal of VLAs due to skcipher requests on the stack
> via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure
> for the "sync skcipher" tfm, which is for handling the on-stack cases of
> skcipher, which are always non-ASYNC and have a known limited request
> size.
>
> The crypto API additions:
>
>         struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
>         crypto_alloc_sync_skcipher()
>         crypto_free_sync_skcipher()
>         crypto_sync_skcipher_setkey()
>         crypto_sync_skcipher_get_flags()
>         crypto_sync_skcipher_set_flags()
>         crypto_sync_skcipher_clear_flags()
>         crypto_sync_skcipher_blocksize()
>         crypto_sync_skcipher_ivsize()
>         crypto_sync_skcipher_reqtfm()
>         skcipher_request_set_sync_tfm()
>         SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)
>
> Signed-off-by: Kees Cook <keescook@chromium.org>

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

> ---
>  crypto/skcipher.c         | 24 +++++++++++++
>  include/crypto/skcipher.h | 75 +++++++++++++++++++++++++++++++++++++++
>  2 files changed, 99 insertions(+)
>
> diff --git a/crypto/skcipher.c b/crypto/skcipher.c
> index 0bd8c6caa498..4caab81d2d02 100644
> --- a/crypto/skcipher.c
> +++ b/crypto/skcipher.c
> @@ -949,6 +949,30 @@ struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name,
>  }
>  EXPORT_SYMBOL_GPL(crypto_alloc_skcipher);
>
> +struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(
> +                               const char *alg_name, u32 type, u32 mask)
> +{
> +       struct crypto_skcipher *tfm;
> +
> +       /* Only sync algorithms allowed. */
> +       mask |= CRYPTO_ALG_ASYNC;
> +
> +       tfm = crypto_alloc_tfm(alg_name, &crypto_skcipher_type2, type, mask);
> +
> +       /*
> +        * Make sure we do not allocate something that might get used with
> +        * an on-stack request: check the request size.
> +        */
> +       if (!IS_ERR(tfm) && WARN_ON(crypto_skcipher_reqsize(tfm) >
> +                                   MAX_SYNC_SKCIPHER_REQSIZE)) {
> +               crypto_free_skcipher(tfm);
> +               return ERR_PTR(-EINVAL);
> +       }
> +
> +       return (struct crypto_sync_skcipher *)tfm;
> +}
> +EXPORT_SYMBOL_GPL(crypto_alloc_sync_skcipher);
> +
>  int crypto_has_skcipher2(const char *alg_name, u32 type, u32 mask)
>  {
>         return crypto_type_has_alg(alg_name, &crypto_skcipher_type2,
> diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
> index 2f327f090c3e..d00ce90dc7da 100644
> --- a/include/crypto/skcipher.h
> +++ b/include/crypto/skcipher.h
> @@ -65,6 +65,10 @@ struct crypto_skcipher {
>         struct crypto_tfm base;
>  };
>
> +struct crypto_sync_skcipher {
> +       struct crypto_skcipher base;
> +};
> +
>  /**
>   * struct skcipher_alg - symmetric key cipher definition
>   * @min_keysize: Minimum key size supported by the transformation. This is the
> @@ -139,6 +143,19 @@ struct skcipher_alg {
>         struct crypto_alg base;
>  };
>
> +#define MAX_SYNC_SKCIPHER_REQSIZE      384
> +/*
> + * This performs a type-check against the "tfm" argument to make sure
> + * all users have the correct skcipher tfm for doing on-stack requests.
> + */
> +#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, tfm) \
> +       char __##name##_desc[sizeof(struct skcipher_request) + \
> +                            MAX_SYNC_SKCIPHER_REQSIZE + \
> +                            (!(sizeof((struct crypto_sync_skcipher *)1 == \
> +                                      (typeof(tfm))1))) \
> +                           ] CRYPTO_MINALIGN_ATTR; \
> +       struct skcipher_request *name = (void *)__##name##_desc
> +
>  #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
>         char __##name##_desc[sizeof(struct skcipher_request) + \
>                 crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
> @@ -197,6 +214,9 @@ static inline struct crypto_skcipher *__crypto_skcipher_cast(
>  struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name,
>                                               u32 type, u32 mask);
>
> +struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(const char *alg_name,
> +                                             u32 type, u32 mask);
> +
>  static inline struct crypto_tfm *crypto_skcipher_tfm(
>         struct crypto_skcipher *tfm)
>  {
> @@ -212,6 +232,11 @@ static inline void crypto_free_skcipher(struct crypto_skcipher *tfm)
>         crypto_destroy_tfm(tfm, crypto_skcipher_tfm(tfm));
>  }
>
> +static inline void crypto_free_sync_skcipher(struct crypto_sync_skcipher *tfm)
> +{
> +       crypto_free_skcipher(&tfm->base);
> +}
> +
>  /**
>   * crypto_has_skcipher() - Search for the availability of an skcipher.
>   * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
> @@ -280,6 +305,12 @@ static inline unsigned int crypto_skcipher_ivsize(struct crypto_skcipher *tfm)
>         return tfm->ivsize;
>  }
>
> +static inline unsigned int crypto_sync_skcipher_ivsize(
> +       struct crypto_sync_skcipher *tfm)
> +{
> +       return crypto_skcipher_ivsize(&tfm->base);
> +}
> +
>  static inline unsigned int crypto_skcipher_alg_chunksize(
>         struct skcipher_alg *alg)
>  {
> @@ -356,6 +387,12 @@ static inline unsigned int crypto_skcipher_blocksize(
>         return crypto_tfm_alg_blocksize(crypto_skcipher_tfm(tfm));
>  }
>
> +static inline unsigned int crypto_sync_skcipher_blocksize(
> +       struct crypto_sync_skcipher *tfm)
> +{
> +       return crypto_skcipher_blocksize(&tfm->base);
> +}
> +
>  static inline unsigned int crypto_skcipher_alignmask(
>         struct crypto_skcipher *tfm)
>  {
> @@ -379,6 +416,24 @@ static inline void crypto_skcipher_clear_flags(struct crypto_skcipher *tfm,
>         crypto_tfm_clear_flags(crypto_skcipher_tfm(tfm), flags);
>  }
>
> +static inline u32 crypto_sync_skcipher_get_flags(
> +       struct crypto_sync_skcipher *tfm)
> +{
> +       return crypto_skcipher_get_flags(&tfm->base);
> +}
> +
> +static inline void crypto_sync_skcipher_set_flags(
> +       struct crypto_sync_skcipher *tfm, u32 flags)
> +{
> +       crypto_skcipher_set_flags(&tfm->base, flags);
> +}
> +
> +static inline void crypto_sync_skcipher_clear_flags(
> +       struct crypto_sync_skcipher *tfm, u32 flags)
> +{
> +       crypto_skcipher_clear_flags(&tfm->base, flags);
> +}
> +
>  /**
>   * crypto_skcipher_setkey() - set key for cipher
>   * @tfm: cipher handle
> @@ -401,6 +456,12 @@ static inline int crypto_skcipher_setkey(struct crypto_skcipher *tfm,
>         return tfm->setkey(tfm, key, keylen);
>  }
>
> +static inline int crypto_sync_skcipher_setkey(struct crypto_sync_skcipher *tfm,
> +                                        const u8 *key, unsigned int keylen)
> +{
> +       return crypto_skcipher_setkey(&tfm->base, key, keylen);
> +}
> +
>  static inline unsigned int crypto_skcipher_default_keysize(
>         struct crypto_skcipher *tfm)
>  {
> @@ -422,6 +483,14 @@ static inline struct crypto_skcipher *crypto_skcipher_reqtfm(
>         return __crypto_skcipher_cast(req->base.tfm);
>  }
>
> +static inline struct crypto_sync_skcipher *crypto_sync_skcipher_reqtfm(
> +       struct skcipher_request *req)
> +{
> +       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> +
> +       return container_of(tfm, struct crypto_sync_skcipher, base);
> +}
> +
>  /**
>   * crypto_skcipher_encrypt() - encrypt plaintext
>   * @req: reference to the skcipher_request handle that holds all information
> @@ -500,6 +569,12 @@ static inline void skcipher_request_set_tfm(struct skcipher_request *req,
>         req->base.tfm = crypto_skcipher_tfm(tfm);
>  }
>
> +static inline void skcipher_request_set_sync_tfm(struct skcipher_request *req,
> +                                           struct crypto_sync_skcipher *tfm)
> +{
> +       skcipher_request_set_tfm(req, &tfm->base);
> +}
> +
>  static inline struct skcipher_request *skcipher_request_cast(
>         struct crypto_async_request *req)
>  {
> --
> 2.17.1
>

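A minimal conversion sketch for the API introduced in this patch. The
sync-skcipher calls are the ones added above; the surrounding function,
the "cbc(aes)" algorithm choice, and names like my_encrypt are
illustrative assumptions, not code from the series:

	#include <crypto/skcipher.h>
	#include <linux/scatterlist.h>
	#include <linux/err.h>

	static int my_encrypt(const u8 *key, unsigned int keylen, u8 *iv,
			      struct scatterlist *src, struct scatterlist *dst,
			      unsigned int len)
	{
		struct crypto_sync_skcipher *tfm;
		int err;

		/* ASYNC is masked off internally, so this cannot hand back
		 * an async implementation with an unbounded reqsize. */
		tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_sync_skcipher_setkey(tfm, key, keylen);
		if (err)
			goto out;

		{
			/* Fixed-size on-stack request; passing a plain
			 * struct crypto_skcipher * here no longer compiles. */
			SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

			skcipher_request_set_sync_tfm(req, tfm);
			skcipher_request_set_callback(req, 0, NULL, NULL);
			skcipher_request_set_crypt(req, src, dst, len, iv);
			err = crypto_skcipher_encrypt(req);
			skcipher_request_zero(req);
		}
	out:
		crypto_free_sync_skcipher(tfm);
		return err;
	}

Because SYNC_SKCIPHER_REQUEST_ON_STACK() type-checks its tfm argument,
passing a plain struct crypto_skcipher * (which might be ASYNC, with an
unbounded reqsize) fails at build time instead of silently overflowing
the fixed 384-byte request buffer.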

* Re: [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher
  2018-09-19  2:10 ` [PATCH crypto-next 07/23] block: cryptoloop: " Kees Cook
@ 2018-09-24 11:52   ` Ard Biesheuvel
  2018-09-24 17:53     ` Kees Cook
  0 siblings, 1 reply; 42+ messages in thread
From: Ard Biesheuvel @ 2018-09-24 11:52 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Jens Axboe, linux-block, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
>
> In the quest to remove all stack VLA usage from the kernel[1], this
> replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
> with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
> which uses a fixed stack size.
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: linux-block@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  drivers/block/cryptoloop.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
> index 7033a4beda66..254ee7d54e91 100644
> --- a/drivers/block/cryptoloop.c
> +++ b/drivers/block/cryptoloop.c
> @@ -45,7 +45,7 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
>         char cms[LO_NAME_SIZE];                 /* cipher-mode string */
>         char *mode;
>         char *cmsp = cms;                       /* c-m string pointer */
> -       struct crypto_skcipher *tfm;
> +       struct crypto_sync_skcipher *tfm;
>
>         /* encryption breaks for non sector aligned offsets */
>
> @@ -80,13 +80,13 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
>         *cmsp++ = ')';
>         *cmsp = 0;
>
> -       tfm = crypto_alloc_skcipher(cms, 0, CRYPTO_ALG_ASYNC);
> +       tfm = crypto_alloc_sync_skcipher(cms, 0, 0);
>         if (IS_ERR(tfm))
>                 return PTR_ERR(tfm);
>
> -       err = crypto_skcipher_setkey(tfm, info->lo_encrypt_key,
> -                                    info->lo_encrypt_key_size);
> -
> +       err = crypto_sync_skcipher_setkey(tfm, info->lo_encrypt_key,
> +                                         info->lo_encrypt_key_size);
> +
>         if (err != 0)
>                 goto out_free_tfm;
>
> @@ -94,7 +94,7 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info)
>         return 0;
>
>   out_free_tfm:
> -       crypto_free_skcipher(tfm);
> +       crypto_free_sync_skcipher(tfm);
>
>   out:
>         return err;
> @@ -109,8 +109,8 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>                     struct page *loop_page, unsigned loop_off,
>                     int size, sector_t IV)
>  {
> -       struct crypto_skcipher *tfm = lo->key_data;
> -       SKCIPHER_REQUEST_ON_STACK(req, tfm);
> +       struct crypto_sync_skcipher *tfm = lo->key_data;
> +       SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
>         struct scatterlist sg_out;
>         struct scatterlist sg_in;
>
> @@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>         unsigned in_offs, out_offs;
>         int err;
>
> -       skcipher_request_set_tfm(req, tfm);
> +       skcipher_request_set_sync_tfm(req, tfm);
>         skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
>                                       NULL, NULL);
>

Does this work?

> @@ -175,9 +175,9 @@ cryptoloop_ioctl(struct loop_device *lo, int cmd, unsigned long arg)
>  static int
>  cryptoloop_release(struct loop_device *lo)
>  {
> -       struct crypto_skcipher *tfm = lo->key_data;
> +       struct crypto_sync_skcipher *tfm = lo->key_data;
>         if (tfm != NULL) {
> -               crypto_free_skcipher(tfm);
> +               crypto_free_sync_skcipher(tfm);
>                 lo->key_data = NULL;
>                 return 0;
>         }
> --
> 2.17.1
>


* Re: [PATCH crypto-next 06/23] x86/fpu: Remove VLA usage of skcipher
  2018-09-24 11:45   ` Ard Biesheuvel
@ 2018-09-24 17:35     ` Kees Cook
  0 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-24 17:35 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Herbert Xu, the arch/x86 maintainers, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On Mon, Sep 24, 2018 at 4:45 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
>>
>> In the quest to remove all stack VLA usage from the kernel[1], this
>> replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
>> with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
>> which uses a fixed stack size.
>>
>> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>>
>> Cc: x86@kernel.org
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>
> Doing some archeology on this driver, it turns out that the FPU
> wrapper was introduced to support combining the generic CTR, LRW, XTS
> and PCBC chaining modes with the AES-NI core transform. In the mean
> time, CTR, LRW and XTS support have been implemented natively, which
> leaves pcbc-aes-aesni as the only remaining user of the fpu template.
>
> Since there are no users of pcbc(aes) in the kernel, could we perhaps
> just remove this driver and all the special handling we have for it in
> aesni-intel_glue.c?

Both options get rid of the VLA, so I'm happy either way. ;)

> If not, or in case we prefer to defer that to the next release:
>
> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Thanks!

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher
  2018-09-24 11:52   ` Ard Biesheuvel
@ 2018-09-24 17:53     ` Kees Cook
  2018-09-25  9:25       ` Ard Biesheuvel
  0 siblings, 1 reply; 42+ messages in thread
From: Kees Cook @ 2018-09-24 17:53 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Herbert Xu, Jens Axboe, linux-block, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On Mon, Sep 24, 2018 at 4:52 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
>> @@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>>         unsigned in_offs, out_offs;
>>         int err;
>>
>> -       skcipher_request_set_tfm(req, tfm);
>> +       skcipher_request_set_sync_tfm(req, tfm);
>>         skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
>>                                       NULL, NULL);
>>
>
> Does this work?

Everything is a direct wrapper for existing types and functions, so I
wouldn't expect any functional change. I haven't been able to test
this particular interface, though. cryptoloop is very deprecated,
isn't it?

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (22 preceding siblings ...)
  2018-09-19  2:11 ` [PATCH crypto-next 23/23] crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK() Kees Cook
@ 2018-09-25  0:49 ` Kees Cook
  2018-09-25  4:49   ` Herbert Xu
  2018-09-28  5:08 ` Herbert Xu
  24 siblings, 1 reply; 42+ messages in thread
From: Kees Cook @ 2018-09-25  0:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Eric Biggers, linux-crypto,
	Linux Kernel Mailing List

On Tue, Sep 18, 2018 at 7:10 PM, Kees Cook <keescook@chromium.org> wrote:
> This is the full follow-up to earlier discussions[1] that suggested
> adding a new struct crypto_sync_skcipher to handle the VLA removal from
> SKCIPHER_REQUEST_ON_STACK.
>
> This series is effectively a no-op change: everything is a wrapper
> around struct crypto_skcipher, but provides compile-time enforcement
> for not putting an ASYNC skcipher on the stack, which allows us to
> declare the on-stack requests with a fixed stack size.
>
> [1] https://lkml.kernel.org/r/CAGXu5j+bpLK=EQ9LHkO8V=sdaQwt==6fbGhgn2Vi1E9_WxSGRQ@mail.gmail.com
>
> -Kees
>
> Kees Cook (23):
>   crypto: skcipher - Introduce crypto_sync_skcipher
>   gss_krb5: Remove VLA usage of skcipher
>   lib80211: Remove VLA usage of skcipher
>   mac802154: Remove VLA usage of skcipher
>   s390/crypto: Remove VLA usage of skcipher
>   x86/fpu: Remove VLA usage of skcipher
>   block: cryptoloop: Remove VLA usage of skcipher
>   libceph: Remove VLA usage of skcipher
>   ppp: mppe: Remove VLA usage of skcipher
>   rxrpc: Remove VLA usage of skcipher
>   wusb: Remove VLA usage of skcipher
>   crypto: ccp - Remove VLA usage of skcipher
>   crypto: vmx - Remove VLA usage of skcipher
>   crypto: null - Remove VLA usage of skcipher
>   crypto: cryptd - Remove VLA usage of skcipher
>   crypto: sahara - Remove VLA usage of skcipher
>   crypto: qce - Remove VLA usage of skcipher
>   crypto: artpec6 - Remove VLA usage of skcipher
>   crypto: chelsio - Remove VLA usage of skcipher
>   crypto: mxs-dcp - Remove VLA usage of skcipher
>   crypto: omap-aes - Remove VLA usage of skcipher
>   crypto: picoxcell - Remove VLA usage of skcipher
>   crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK()

How do these look to you, Herbert? I'd really like to make sure these
make it for the next merge window -- they're the last VLAs left in the
kernel now. :)

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage
  2018-09-25  0:49 ` [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
@ 2018-09-25  4:49   ` Herbert Xu
  2018-09-25 15:39     ` Kees Cook
  0 siblings, 1 reply; 42+ messages in thread
From: Herbert Xu @ 2018-09-25  4:49 UTC (permalink / raw)
  To: Kees Cook
  Cc: Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List

On Mon, Sep 24, 2018 at 05:49:37PM -0700, Kees Cook wrote:
>
> > Kees Cook (23):
> >   crypto: skcipher - Introduce crypto_sync_skcipher
> >   gss_krb5: Remove VLA usage of skcipher
> >   lib80211: Remove VLA usage of skcipher
> >   mac802154: Remove VLA usage of skcipher
> >   s390/crypto: Remove VLA usage of skcipher
> >   x86/fpu: Remove VLA usage of skcipher
> >   block: cryptoloop: Remove VLA usage of skcipher
> >   libceph: Remove VLA usage of skcipher
> >   ppp: mppe: Remove VLA usage of skcipher
> >   rxrpc: Remove VLA usage of skcipher
> >   wusb: Remove VLA usage of skcipher
> >   crypto: ccp - Remove VLA usage of skcipher
> >   crypto: vmx - Remove VLA usage of skcipher
> >   crypto: null - Remove VLA usage of skcipher
> >   crypto: cryptd - Remove VLA usage of skcipher
> >   crypto: sahara - Remove VLA usage of skcipher
> >   crypto: qce - Remove VLA usage of skcipher
> >   crypto: artpec6 - Remove VLA usage of skcipher
> >   crypto: chelsio - Remove VLA usage of skcipher
> >   crypto: mxs-dcp - Remove VLA usage of skcipher
> >   crypto: omap-aes - Remove VLA usage of skcipher
> >   crypto: picoxcell - Remove VLA usage of skcipher
> >   crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK()
> 
> How do these look to you, Herbert? I'd really like to make sure these
> make it for the next merge window -- they're the last VLAs left in the
> kernel now. :)

I have no problems with the crypto parts.  Do we have acks for
all of the others?

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher
  2018-09-24 17:53     ` Kees Cook
@ 2018-09-25  9:25       ` Ard Biesheuvel
  2018-09-25 16:03         ` Jens Axboe
  0 siblings, 1 reply; 42+ messages in thread
From: Ard Biesheuvel @ 2018-09-25  9:25 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Jens Axboe, linux-block, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On Mon, 24 Sep 2018 at 19:53, Kees Cook <keescook@chromium.org> wrote:
>
> On Mon, Sep 24, 2018 at 4:52 AM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
> > On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
> >> @@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
> >>         unsigned in_offs, out_offs;
> >>         int err;
> >>
> >> -       skcipher_request_set_tfm(req, tfm);
> >> +       skcipher_request_set_sync_tfm(req, tfm);
> >>         skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
> >>                                       NULL, NULL);
> >>
> >
> > Does this work?
>
> Everything is a direct wrapper for existing types and functions, so I
> wouldn't expect any functional change. I haven't been able to test
> this particular interface, though. cryptoloop is very deprecated,
> isn't it?
>

Ah yes, I managed to confuse myself there. This looks all fine to me.

In any case, this is another example where we may decide to fix the
code rather than retain the request allocation on the stack (but that
is Jens's call ultimately, I suppose)

diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
index 7033a4beda66..5ed2167219ba 100644
--- a/drivers/block/cryptoloop.c
+++ b/drivers/block/cryptoloop.c
@@ -110,7 +110,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
                    int size, sector_t IV)
 {
        struct crypto_skcipher *tfm = lo->key_data;
-       SKCIPHER_REQUEST_ON_STACK(req, tfm);
+       struct skcipher_request *req;
        struct scatterlist sg_out;
        struct scatterlist sg_in;

@@ -119,7 +119,10 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
        unsigned in_offs, out_offs;
        int err;

-       skcipher_request_set_tfm(req, tfm);
+       req = skcipher_request_alloc(tfm, GFP_NOIO);
+       if (!req)
+               return -ENOMEM;
+
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                      NULL, NULL);


or if we stick with the current change to sync:

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>


* Re: [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage
  2018-09-25  4:49   ` Herbert Xu
@ 2018-09-25 15:39     ` Kees Cook
  0 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-25 15:39 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List

On Mon, Sep 24, 2018 at 9:49 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Mon, Sep 24, 2018 at 05:49:37PM -0700, Kees Cook wrote:
>>
>> > Kees Cook (23):
>> >   crypto: skcipher - Introduce crypto_sync_skcipher
>> >   gss_krb5: Remove VLA usage of skcipher
>> >   lib80211: Remove VLA usage of skcipher
>> >   mac802154: Remove VLA usage of skcipher
>> >   s390/crypto: Remove VLA usage of skcipher
>> >   x86/fpu: Remove VLA usage of skcipher
>> >   block: cryptoloop: Remove VLA usage of skcipher
>> >   libceph: Remove VLA usage of skcipher
>> >   ppp: mppe: Remove VLA usage of skcipher
>> >   rxrpc: Remove VLA usage of skcipher
>> >   wusb: Remove VLA usage of skcipher
>> >   crypto: ccp - Remove VLA usage of skcipher
>> >   crypto: vmx - Remove VLA usage of skcipher
>> >   crypto: null - Remove VLA usage of skcipher
>> >   crypto: cryptd - Remove VLA usage of skcipher
>> >   crypto: sahara - Remove VLA usage of skcipher
>> >   crypto: qce - Remove VLA usage of skcipher
>> >   crypto: artpec6 - Remove VLA usage of skcipher
>> >   crypto: chelsio - Remove VLA usage of skcipher
>> >   crypto: mxs-dcp - Remove VLA usage of skcipher
>> >   crypto: omap-aes - Remove VLA usage of skcipher
>> >   crypto: picoxcell - Remove VLA usage of skcipher
>> >   crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK()
>>
>> How do these look to you, Herbert? I'd really like to make sure these
>> make it for the next merge window -- they're the last VLAs left in the
>> kernel now. :)
>
> I have no problems with the crypto parts.  Do we have acks for
> all of the others?

Some have trickled in (wusb, lib80211), along with some Reviewed-bys
from Ard. I was hoping that since it was a wrapper-only change there
wouldn't be a need to block on waiting for Acks.

Thanks for looking at it; I'm excited to finally be done with VLA removals. :)

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher
  2018-09-25  9:25       ` Ard Biesheuvel
@ 2018-09-25 16:03         ` Jens Axboe
  2018-09-25 16:16           ` Ard Biesheuvel
  0 siblings, 1 reply; 42+ messages in thread
From: Jens Axboe @ 2018-09-25 16:03 UTC (permalink / raw)
  To: Ard Biesheuvel, Kees Cook
  Cc: Herbert Xu, linux-block, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On 9/25/18 3:25 AM, Ard Biesheuvel wrote:
> On Mon, 24 Sep 2018 at 19:53, Kees Cook <keescook@chromium.org> wrote:
>>
>> On Mon, Sep 24, 2018 at 4:52 AM, Ard Biesheuvel
>> <ard.biesheuvel@linaro.org> wrote:
>>> On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
>>>> @@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>>>>         unsigned in_offs, out_offs;
>>>>         int err;
>>>>
>>>> -       skcipher_request_set_tfm(req, tfm);
>>>> +       skcipher_request_set_sync_tfm(req, tfm);
>>>>         skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
>>>>                                       NULL, NULL);
>>>>
>>>
>>> Does this work?
>>
>> Everything is a direct wrapper for existing types and functions, so I
>> wouldn't expect any functional change. I haven't been able to test
>> this particular interface, though. cryptoloop is very deprecated,
>> isn't it?
>>
> 
> Ah yes, I managed to confuse myself there. This looks all fine to me.
> 
> In any case, this is another example where we may decide to fix the
> code rather than retain the request allocation on the stack (but that
> is Jens's call ultimately, I suppose)
> 
> diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
> index 7033a4beda66..5ed2167219ba 100644
> --- a/drivers/block/cryptoloop.c
> +++ b/drivers/block/cryptoloop.c
> @@ -110,7 +110,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>                     int size, sector_t IV)
>  {
>         struct crypto_skcipher *tfm = lo->key_data;
> -       SKCIPHER_REQUEST_ON_STACK(req, tfm);
> +       struct skcipher_request *req;
>         struct scatterlist sg_out;
>         struct scatterlist sg_in;
> 
> @@ -119,7 +119,10 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>         unsigned in_offs, out_offs;
>         int err;
> 
> -       skcipher_request_set_tfm(req, tfm);
> +       req = skcipher_request_alloc(tfm, GFP_NOIO);
> +       if (!req)
> +               return -ENOMEM;

Is this going to be reliable? ->transfer() is called when we're doing IO,
and you'd normally need a mempool backed allocation to make this safe
and guarantee forward progress.

-- 
Jens Axboe



* Re: [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher
  2018-09-25 16:03         ` Jens Axboe
@ 2018-09-25 16:16           ` Ard Biesheuvel
  2018-09-25 16:32             ` Jens Axboe
  0 siblings, 1 reply; 42+ messages in thread
From: Ard Biesheuvel @ 2018-09-25 16:16 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Kees Cook, Herbert Xu, linux-block, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On Tue, 25 Sep 2018 at 18:03, Jens Axboe <axboe@kernel.dk> wrote:
>
> On 9/25/18 3:25 AM, Ard Biesheuvel wrote:
> > On Mon, 24 Sep 2018 at 19:53, Kees Cook <keescook@chromium.org> wrote:
> >>
> >> On Mon, Sep 24, 2018 at 4:52 AM, Ard Biesheuvel
> >> <ard.biesheuvel@linaro.org> wrote:
> >>> On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
> >>>> @@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
> >>>>         unsigned in_offs, out_offs;
> >>>>         int err;
> >>>>
> >>>> -       skcipher_request_set_tfm(req, tfm);
> >>>> +       skcipher_request_set_sync_tfm(req, tfm);
> >>>>         skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
> >>>>                                       NULL, NULL);
> >>>>
> >>>
> >>> Does this work?
> >>
> >> Everything is a direct wrapper for existing types and functions, so I
> >> wouldn't expect any functional change. I haven't been able to test
> >> this particular interface, though. cryptoloop is very deprecated,
> >> isn't it?
> >>
> >
> > Ah yes, I managed to confuse myself there. This looks all fine to me.
> >
> > In any case, this is another example where we may decide to fix the
> > code rather than retain the request allocation on the stack (but that
> > is Jens's call ultimately, I suppose)
> >
> > diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
> > index 7033a4beda66..5ed2167219ba 100644
> > --- a/drivers/block/cryptoloop.c
> > +++ b/drivers/block/cryptoloop.c
> > @@ -110,7 +110,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
> >                     int size, sector_t IV)
> >  {
> >         struct crypto_skcipher *tfm = lo->key_data;
> > -       SKCIPHER_REQUEST_ON_STACK(req, tfm);
> > +       struct skcipher_request *req;
> >         struct scatterlist sg_out;
> >         struct scatterlist sg_in;
> >
> > @@ -119,7 +119,10 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
> >         unsigned in_offs, out_offs;
> >         int err;
> >
> > -       skcipher_request_set_tfm(req, tfm);
> > +       req = skcipher_request_alloc(tfm, GFP_NOIO);
> > +       if (!req)
> > +               return -ENOMEM;
>
> Is this going to be reliable? ->transfer() is called when we're doing IO,
> and you'd normally need a mempool backed allocation to make this safe
> and guarantee forward progress.
>

As far as I can tell, this function is only called from
lo_read_transfer/lo_write_transfer, both of which do an unconditional
alloc_page(GFP_NOIO), which is why I assumed that kmalloc(GFP_NOIO)
would be permissible in the same context. Are you saying this may not
be the case?


* Re: [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher
  2018-09-25 16:16           ` Ard Biesheuvel
@ 2018-09-25 16:32             ` Jens Axboe
  2018-09-26  8:19               ` Ard Biesheuvel
  0 siblings, 1 reply; 42+ messages in thread
From: Jens Axboe @ 2018-09-25 16:32 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Kees Cook, Herbert Xu, linux-block, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On 9/25/18 10:16 AM, Ard Biesheuvel wrote:
> On Tue, 25 Sep 2018 at 18:03, Jens Axboe <axboe@kernel.dk> wrote:
>>
>> On 9/25/18 3:25 AM, Ard Biesheuvel wrote:
>>> On Mon, 24 Sep 2018 at 19:53, Kees Cook <keescook@chromium.org> wrote:
>>>>
>>>> On Mon, Sep 24, 2018 at 4:52 AM, Ard Biesheuvel
>>>> <ard.biesheuvel@linaro.org> wrote:
>>>>> On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
>>>>>> @@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>>>>>>         unsigned in_offs, out_offs;
>>>>>>         int err;
>>>>>>
>>>>>> -       skcipher_request_set_tfm(req, tfm);
>>>>>> +       skcipher_request_set_sync_tfm(req, tfm);
>>>>>>         skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
>>>>>>                                       NULL, NULL);
>>>>>>
>>>>>
>>>>> Does this work?
>>>>
>>>> Everything is a direct wrapper for existing types and functions, so I
>>>> wouldn't expect any functional change. I haven't been able to test
>>>> this particular interface, though. cryptoloop is very deprecated,
>>>> isn't it?
>>>>
>>>
>>> Ah yes, I managed to confuse myself there. This looks all fine to me.
>>>
>>> In any case, this is another example where we may decide to fix the
>>> code rather than retain the request allocation on the stack (but that
>>> is Jens's call ultimately, I suppose)
>>>
>>> diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
>>> index 7033a4beda66..5ed2167219ba 100644
>>> --- a/drivers/block/cryptoloop.c
>>> +++ b/drivers/block/cryptoloop.c
>>> @@ -110,7 +110,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>>>                     int size, sector_t IV)
>>>  {
>>>         struct crypto_skcipher *tfm = lo->key_data;
>>> -       SKCIPHER_REQUEST_ON_STACK(req, tfm);
>>> +       struct skcipher_request *req;
>>>         struct scatterlist sg_out;
>>>         struct scatterlist sg_in;
>>>
>>> @@ -119,7 +119,10 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
>>>         unsigned in_offs, out_offs;
>>>         int err;
>>>
>>> -       skcipher_request_set_tfm(req, tfm);
>>> +       req = skcipher_request_alloc(tfm, GFP_NOIO);
>>> +       if (!req)
>>> +               return -ENOMEM;
>>
>> Is this going to be reliable? ->transfer() is called when we're doing IO,
>> and you'd normally need a mempool backed allocation to make this safe
>> and guarantee forward progress.
>>
> 
> As far as I can tell, this function is only called from
> lo_read_transfer/lo_write_transfer, both of which do an unconditional
> alloc_page(GFP_NOIO), which is why I assumed that kmalloc(GFP_NOIO)
> would be permissible in the same context. Are you saying this may not
> be the case?

It doesn't appear to be safe for either your case or the page it's
allocating. If the allocator fails this allocation, then you'll get
an EIO on that request. The more likely case is the allocator taking
forever to satisfy the request, in which case you'll have very
large latencies for IO when you are close to being out of memory.
The preferred setup for allocating memory for IO is having a mempool
of at least one item. If you end up blocking for memory, you'll at
most get to wait for the existing IO that's using that memory to
complete (per waiter, of course).

-- 
Jens Axboe


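For what it's worth, a sketch of the mempool-backed arrangement Jens
describes, assuming the request size is known when the device is set up;
the pool variable and function names are illustrative, not from any
posted patch:

	#include <linux/mempool.h>
	#include <crypto/skcipher.h>

	static mempool_t *req_pool;

	/* One preallocated element guarantees forward progress: a
	 * blocked allocation waits for an in-flight request to be
	 * freed instead of failing the I/O with -ENOMEM. */
	static int my_pool_init(struct crypto_skcipher *tfm)
	{
		size_t sz = sizeof(struct skcipher_request) +
			    crypto_skcipher_reqsize(tfm);

		req_pool = mempool_create_kmalloc_pool(1, sz);
		return req_pool ? 0 : -ENOMEM;
	}

	static struct skcipher_request *my_req_get(struct crypto_skcipher *tfm)
	{
		/* With a reclaim-capable mask like GFP_NOIO this may
		 * sleep, but it does not return NULL: at worst it waits
		 * for the single pool element to come back. */
		struct skcipher_request *req = mempool_alloc(req_pool, GFP_NOIO);

		skcipher_request_set_tfm(req, tfm);
		return req;
	}

	static void my_req_put(struct skcipher_request *req)
	{
		mempool_free(req, req_pool);
	}

This is the forward-progress guarantee the on-stack request also
provided, which is why the sync-tfm conversion is the less invasive fix
for this path.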

* Re: [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher
  2018-09-25 16:32             ` Jens Axboe
@ 2018-09-26  8:19               ` Ard Biesheuvel
  0 siblings, 0 replies; 42+ messages in thread
From: Ard Biesheuvel @ 2018-09-26  8:19 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Kees Cook, Herbert Xu, linux-block, Eric Biggers,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Linux Kernel Mailing List

On Tue, 25 Sep 2018 at 18:32, Jens Axboe <axboe@kernel.dk> wrote:
>
> On 9/25/18 10:16 AM, Ard Biesheuvel wrote:
> > On Tue, 25 Sep 2018 at 18:03, Jens Axboe <axboe@kernel.dk> wrote:
> >>
> >> On 9/25/18 3:25 AM, Ard Biesheuvel wrote:
> >>> On Mon, 24 Sep 2018 at 19:53, Kees Cook <keescook@chromium.org> wrote:
> >>>>
> >>>> On Mon, Sep 24, 2018 at 4:52 AM, Ard Biesheuvel
> >>>> <ard.biesheuvel@linaro.org> wrote:
> >>>>> On Wed, 19 Sep 2018 at 04:11, Kees Cook <keescook@chromium.org> wrote:
> >>>>>> @@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
> >>>>>>         unsigned in_offs, out_offs;
> >>>>>>         int err;
> >>>>>>
> >>>>>> -       skcipher_request_set_tfm(req, tfm);
> >>>>>> +       skcipher_request_set_sync_tfm(req, tfm);
> >>>>>>         skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
> >>>>>>                                       NULL, NULL);
> >>>>>>
> >>>>>
> >>>>> Does this work?
> >>>>
> >>>> Everything is a direct wrapper for existing types and functions, so I
> >>>> wouldn't expect any functional change. I haven't been able to test
> >>>> this particular interface, though. cryptoloop is very deprecated,
> >>>> isn't it?
> >>>>
> >>>
> >>> Ah yes, I managed to confuse myself there. This looks all fine to me.
> >>>
> >>> In any case, this is another example where we may decide to fix the
> >>> code rather than retain the request allocation on the stack (but that
> >>> is Jens's call ultimately, I suppose)
> >>>
> >>> diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
> >>> index 7033a4beda66..5ed2167219ba 100644
> >>> --- a/drivers/block/cryptoloop.c
> >>> +++ b/drivers/block/cryptoloop.c
> >>> @@ -110,7 +110,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
> >>>                     int size, sector_t IV)
> >>>  {
> >>>         struct crypto_skcipher *tfm = lo->key_data;
> >>> -       SKCIPHER_REQUEST_ON_STACK(req, tfm);
> >>> +       struct skcipher_request *req;
> >>>         struct scatterlist sg_out;
> >>>         struct scatterlist sg_in;
> >>>
> >>> @@ -119,7 +119,10 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
> >>>         unsigned in_offs, out_offs;
> >>>         int err;
> >>>
> >>> -       skcipher_request_set_tfm(req, tfm);
> >>> +       req = skcipher_request_alloc(tfm, GFP_NOIO);
> >>> +       if (!req)
> >>> +               return -ENOMEM;
> >>
> >> Is this going to be reliable? ->transfer() is called when we're doing IO,
> >> and you'd normally need a mempool backed allocation to make this safe
> >> and guarantee forward progress.
> >>
> >
> > As far as I can tell, this function is only called from
> > lo_read_transfer/lo_write_transfer, both of which do an unconditional
> > alloc_page(GFP_NOIO), which is why I assumed that kmalloc(GFP_NOIO)
> > would be permissible in the same context. Are you saying this may not
> > be the case?
>
> It doesn't appear to be safe for either your case or the page it's
> allocating. If the allocator fails this allocation, then you'll get
> an EIO on that request. The more likely case is the allocator taking
> forever to satisfy the request, in which case you'll have very
> large latencies for IO when you are close to being out of memory.
> The preferred setup for allocating memory for IO is having a mempool
> of at least one item. If you end up blocking for memory, you'll at
> most get to wait for the existing IO that's using that memory to
> complete (per waiter, of course).
>

Ah, great. So the code is already broken to begin with.

In that case, may we have your ack for Kees's original patch, which is
effectively a no-op except for the fact that the size of the stack
buffer is no longer decided at runtime?


* Re: [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage
  2018-09-19  2:10 [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
                   ` (23 preceding siblings ...)
  2018-09-25  0:49 ` [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage Kees Cook
@ 2018-09-28  5:08 ` Herbert Xu
  2018-09-28 16:13   ` Kees Cook
  24 siblings, 1 reply; 42+ messages in thread
From: Herbert Xu @ 2018-09-28  5:08 UTC (permalink / raw)
  To: Kees Cook
  Cc: Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List

On Tue, Sep 18, 2018 at 07:10:37PM -0700, Kees Cook wrote:
> This is the full follow-up to earlier discussions[1] that suggested
> adding a new struct crypto_sync_skcipher to handle the VLA removal from
> SKCIPHER_REQUEST_ON_STACK.
> 
> This series is effectively a no-op change: everything is a wrapper
> around struct crypto_skcipher, but provides compile-time enforcement
> for not putting an ASYNC skcipher on the stack, which allows us to
> declare the on-stack requests with a fixed stack size.
> 
> [1] https://lkml.kernel.org/r/CAGXu5j+bpLK=EQ9LHkO8V=sdaQwt==6fbGhgn2Vi1E9_WxSGRQ@mail.gmail.com
> 
> -Kees
> 
> Kees Cook (23):
>   crypto: skcipher - Introduce crypto_sync_skcipher
>   gss_krb5: Remove VLA usage of skcipher
>   lib80211: Remove VLA usage of skcipher
>   mac802154: Remove VLA usage of skcipher
>   s390/crypto: Remove VLA usage of skcipher
>   x86/fpu: Remove VLA usage of skcipher
>   block: cryptoloop: Remove VLA usage of skcipher
>   libceph: Remove VLA usage of skcipher
>   ppp: mppe: Remove VLA usage of skcipher
>   rxrpc: Remove VLA usage of skcipher
>   wusb: Remove VLA usage of skcipher
>   crypto: ccp - Remove VLA usage of skcipher
>   crypto: vmx - Remove VLA usage of skcipher
>   crypto: null - Remove VLA usage of skcipher
>   crypto: cryptd - Remove VLA usage of skcipher
>   crypto: sahara - Remove VLA usage of skcipher
>   crypto: qce - Remove VLA usage of skcipher
>   crypto: artpec6 - Remove VLA usage of skcipher
>   crypto: chelsio - Remove VLA usage of skcipher
>   crypto: mxs-dcp - Remove VLA usage of skcipher
>   crypto: omap-aes - Remove VLA usage of skcipher
>   crypto: picoxcell - Remove VLA usage of skcipher
>   crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK()
> 
>  arch/s390/crypto/aes_s390.c                   | 48 +++++-----
>  arch/x86/crypto/fpu.c                         | 30 ++++---
>  crypto/algif_aead.c                           | 12 +--
>  crypto/authenc.c                              |  8 +-
>  crypto/authencesn.c                           |  8 +-
>  crypto/cryptd.c                               | 32 +++----
>  crypto/crypto_null.c                          | 11 ++-
>  crypto/echainiv.c                             |  4 +-
>  crypto/gcm.c                                  |  8 +-
>  crypto/seqiv.c                                |  4 +-
>  crypto/skcipher.c                             | 24 +++++
>  drivers/block/cryptoloop.c                    | 22 ++---
>  drivers/crypto/axis/artpec6_crypto.c          | 19 ++--
>  drivers/crypto/ccp/ccp-crypto-aes-xts.c       | 13 +--
>  drivers/crypto/ccp/ccp-crypto.h               |  2 +-
>  drivers/crypto/chelsio/chcr_algo.c            | 27 +++---
>  drivers/crypto/chelsio/chcr_crypto.h          |  2 +-
>  drivers/crypto/mxs-dcp.c                      | 21 +++--
>  drivers/crypto/omap-aes.c                     | 17 ++--
>  drivers/crypto/omap-aes.h                     |  2 +-
>  drivers/crypto/picoxcell_crypto.c             | 21 +++--
>  drivers/crypto/qce/ablkcipher.c               | 13 ++-
>  drivers/crypto/qce/cipher.h                   |  2 +-
>  drivers/crypto/sahara.c                       | 31 ++++---
>  drivers/crypto/vmx/aes_cbc.c                  | 22 ++---
>  drivers/crypto/vmx/aes_ctr.c                  | 18 ++--
>  drivers/crypto/vmx/aes_xts.c                  | 18 ++--
>  drivers/net/ppp/ppp_mppe.c                    | 27 +++---
>  drivers/staging/rtl8192e/rtllib_crypt_tkip.c  | 34 ++++----
>  drivers/staging/rtl8192e/rtllib_crypt_wep.c   | 28 +++---
>  .../rtl8192u/ieee80211/ieee80211_crypt_tkip.c | 34 ++++----
>  .../rtl8192u/ieee80211/ieee80211_crypt_wep.c  | 26 +++---
>  drivers/usb/wusbcore/crypto.c                 | 16 ++--
>  include/crypto/internal/geniv.h               |  2 +-
>  include/crypto/null.h                         |  2 +-
>  include/crypto/skcipher.h                     | 74 +++++++++++++++-
>  include/linux/sunrpc/gss_krb5.h               | 30 +++----
>  net/ceph/crypto.c                             | 12 +--
>  net/ceph/crypto.h                             |  2 +-
>  net/mac802154/llsec.c                         | 16 ++--
>  net/mac802154/llsec.h                         |  2 +-
>  net/rxrpc/ar-internal.h                       |  2 +-
>  net/rxrpc/rxkad.c                             | 44 +++++-----
>  net/sunrpc/auth_gss/gss_krb5_crypto.c         | 87 ++++++++++---------
>  net/sunrpc/auth_gss/gss_krb5_keys.c           |  9 +-
>  net/sunrpc/auth_gss/gss_krb5_mech.c           | 53 ++++++-----
>  net/sunrpc/auth_gss/gss_krb5_seqnum.c         | 18 ++--
>  net/sunrpc/auth_gss/gss_krb5_wrap.c           | 20 ++---
>  net/wireless/lib80211_crypt_tkip.c            | 34 ++++----
>  net/wireless/lib80211_crypt_wep.c             | 28 +++---
>  50 files changed, 563 insertions(+), 476 deletions(-)

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH crypto-next 00/23] crypto: skcipher - Remove VLA usage
  2018-09-28  5:08 ` Herbert Xu
@ 2018-09-28 16:13   ` Kees Cook
  0 siblings, 0 replies; 42+ messages in thread
From: Kees Cook @ 2018-09-28 16:13 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List

On Thu, Sep 27, 2018 at 10:08 PM, Herbert Xu
<herbert@gondor.apana.org.au> wrote:
> All applied.  Thanks.

Awesome! Thanks :)

-Kees

-- 
Kees Cook
Pixel Security

