linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v6 00/18] crypto: Remove VLA usage
@ 2018-07-24 16:49 Kees Cook
  2018-07-24 16:49 ` [PATCH v6 01/18] crypto: xcbc: " Kees Cook
                   ` (17 more replies)
  0 siblings, 18 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

For newly added CCs on this series: your Ack/Review would be very
welcome! Especially for the ahash-to-shash conversion patches.

v6:
- make xcbc blocksize unconditional
- add ahash-to-shash conversion patch series to entirely remove
  AHASH_REQUEST_ON_STACK from the kernel

v5:
- limit AHASH_REQUEST_ON_STACK size only to non-async hash wrapping.
- sanity-check ahash reqsize only when doing shash wrapping.
- remove frame_warn changes in favor of shash conversions and other fixes.
- send ahash to shash conversion patches and other fixes separately.

v4:
- add back *_REQUEST_ON_STACK patches.
- programmatically find stack sizes for *_REQUEST_ON_STACK patches.
- whitelist some code that trips FRAME_WARN on 32-bit builds.
- fix alignment patches.

v3:
- drop *_REQUEST_ON_STACK patches. The rest of this series is pretty
  straight-forward, and I'd like to get them tested in -next while
  we continue to chip away at the *_REQUEST_ON_STACK VLA removal patches
  separately. "Part 2" will continue with those.

v2:
- use 512 instead of PAGE_SIZE / 8 to avoid bloat on large-page archs.
- swtich xcbc to "16" max universally.
- fix typo in bounds check for dm buffer size.
- examine actual reqsizes for skcipher and ahash instead of guessing.
- improve names and comments for alg maxes

This is nearly the last of the VLA removals[1], but it's one of the
largest because crypto gets used in lots of places. After looking
through code, usage, reading the threads Gustavo started, and comparing
the use-cases to the other VLA removals that have landed in the kernel,
I think this series is likely the best way forward to shut the door on
VLAs forever.

For background, the crypto stack usage is for callers to do an immediate
bit of work that doesn't allocate new memory. This means that other VLA
removal techniques (like just using kmalloc) aren't workable, and the
next common technique is needed: examination of maximum stack usage and
the addition of sanity checks. This series does that, and in several
cases, these maximums were already implicit in the code.

This series is intended to land via the crypto tree for 4.19, though it
touches dm, networking, and a few other things as well, since there are
dependent patches (new crypto #defines being used, etc).

Thanks!

-Kees

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Ard Biesheuvel (1):
  crypto: ccm: Remove VLA usage

Kees Cook (17):
  crypto: xcbc: Remove VLA usage
  crypto: cbc: Remove VLA usage
  crypto: hash: Remove VLA usage
  dm: Remove VLA usage from hashes
  crypto alg: Introduce generic max blocksize and alignmask
  crypto: qat: Remove VLA usage
  crypto: shash: Remove VLA usage in unaligned hashing
  crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  ppp: mppe: Remove VLA usage
  x86/power/64: Remove VLA usage
  dm crypt: Convert essiv from ahash to shash
  drbd: Convert from ahash to shash
  wireless/lib80211: Convert from ahash to shash
  staging: rtl8192u: ieee80211: Convert from ahash to shash
  staging: rtl8192e: ieee80211: Convert from ahash to shash
  rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer
  crypto: Remove AHASH_REQUEST_ON_STACK

 arch/x86/power/hibernate_64.c                 | 36 +++++++-----
 crypto/ahash.c                                |  4 +-
 crypto/algapi.c                               |  7 ++-
 crypto/algif_hash.c                           |  2 +-
 crypto/ccm.c                                  |  9 ++-
 crypto/shash.c                                | 33 ++++++-----
 crypto/xcbc.c                                 |  8 ++-
 drivers/block/drbd/drbd_int.h                 | 13 +++--
 drivers/block/drbd/drbd_main.c                | 14 ++---
 drivers/block/drbd/drbd_nl.c                  | 39 ++++---------
 drivers/block/drbd/drbd_receiver.c            | 35 +++++------
 drivers/block/drbd/drbd_worker.c              | 56 ++++++++----------
 drivers/crypto/qat/qat_common/qat_algs.c      |  8 ++-
 drivers/md/dm-crypt.c                         | 31 +++++-----
 drivers/md/dm-integrity.c                     | 23 ++++++--
 drivers/md/dm-verity-fec.c                    |  5 +-
 drivers/net/ppp/ppp_mppe.c                    | 56 +++++++++---------
 drivers/staging/rtl8192e/rtllib_crypt_tkip.c  | 56 +++++++++---------
 .../rtl8192u/ieee80211/ieee80211_crypt_tkip.c | 57 +++++++++---------
 include/crypto/algapi.h                       |  4 +-
 include/crypto/cbc.h                          |  4 +-
 include/crypto/hash.h                         | 11 ++--
 include/crypto/internal/skcipher.h            |  1 +
 include/crypto/skcipher.h                     |  4 +-
 include/linux/compiler-gcc.h                  |  1 -
 net/rxrpc/rxkad.c                             | 25 ++++----
 net/wireless/lib80211_crypt_tkip.c            | 58 +++++++++----------
 27 files changed, 313 insertions(+), 287 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v6 01/18] crypto: xcbc: Remove VLA usage
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 02/18] crypto: cbc: " Kees Cook
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this uses
the maximum blocksize and adds a sanity check. For xcbc, the blocksize
must always be 16, so use that, since it's already being enforced during
instantiation.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/xcbc.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/crypto/xcbc.c b/crypto/xcbc.c
index 25c75af50d3f..c055f57fab11 100644
--- a/crypto/xcbc.c
+++ b/crypto/xcbc.c
@@ -57,15 +57,17 @@ struct xcbc_desc_ctx {
 	u8 ctx[];
 };
 
+#define XCBC_BLOCKSIZE	16
+
 static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
 				     const u8 *inkey, unsigned int keylen)
 {
 	unsigned long alignmask = crypto_shash_alignmask(parent);
 	struct xcbc_tfm_ctx *ctx = crypto_shash_ctx(parent);
-	int bs = crypto_shash_blocksize(parent);
 	u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
 	int err = 0;
-	u8 key1[bs];
+	u8 key1[XCBC_BLOCKSIZE];
+	int bs = sizeof(key1);
 
 	if ((err = crypto_cipher_setkey(ctx->child, inkey, keylen)))
 		return err;
@@ -212,7 +214,7 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
 		return PTR_ERR(alg);
 
 	switch(alg->cra_blocksize) {
-	case 16:
+	case XCBC_BLOCKSIZE:
 		break;
 	default:
 		goto out_put_alg;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 02/18] crypto: cbc: Remove VLA usage
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
  2018-07-24 16:49 ` [PATCH v6 01/18] crypto: xcbc: " Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 03/18] crypto: hash: " Kees Cook
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
uses the upper bounds on blocksize. Since this is always a cipher
blocksize, use the existing cipher max blocksize.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/cbc.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/crypto/cbc.h b/include/crypto/cbc.h
index f5b8bfc22e6d..47db0aac2ab9 100644
--- a/include/crypto/cbc.h
+++ b/include/crypto/cbc.h
@@ -113,7 +113,9 @@ static inline int crypto_cbc_decrypt_inplace(
 	unsigned int bsize = crypto_skcipher_blocksize(tfm);
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
-	u8 last_iv[bsize];
+	u8 last_iv[MAX_CIPHER_BLOCKSIZE];
+
+	BUG_ON(bsize > sizeof(last_iv));
 
 	/* Start of the last block. */
 	src += nbytes - (nbytes & (bsize - 1)) - bsize;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 03/18] crypto: hash: Remove VLA usage
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
  2018-07-24 16:49 ` [PATCH v6 01/18] crypto: xcbc: " Kees Cook
  2018-07-24 16:49 ` [PATCH v6 02/18] crypto: cbc: " Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 04/18] dm: Remove VLA usage from hashes Kees Cook
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro), along with a few other cases. Similar limits are turned into
macros as well.

A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
largest digest size and that sizeof(struct sha3_state) (360) is the
largest descriptor size. The corresponding maximums are reduced.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/ahash.c        | 4 ++--
 crypto/algif_hash.c   | 2 +-
 crypto/shash.c        | 6 +++---
 include/crypto/hash.h | 6 +++++-
 4 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index a64c143165b1..78aaf2158c43 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -550,8 +550,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
 {
 	struct crypto_alg *base = &alg->halg.base;
 
-	if (alg->halg.digestsize > PAGE_SIZE / 8 ||
-	    alg->halg.statesize > PAGE_SIZE / 8 ||
+	if (alg->halg.digestsize > HASH_MAX_DIGESTSIZE ||
+	    alg->halg.statesize > HASH_MAX_STATESIZE ||
 	    alg->halg.statesize == 0)
 		return -EINVAL;
 
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index bfcf595fd8f9..d0cde541beb6 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -239,7 +239,7 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
 	struct alg_sock *ask = alg_sk(sk);
 	struct hash_ctx *ctx = ask->private;
 	struct ahash_request *req = &ctx->req;
-	char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ? : 1];
+	char state[HASH_MAX_STATESIZE];
 	struct sock *sk2;
 	struct alg_sock *ask2;
 	struct hash_ctx *ctx2;
diff --git a/crypto/shash.c b/crypto/shash.c
index 5d732c6bb4b2..86d76b5c626c 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -458,9 +458,9 @@ static int shash_prepare_alg(struct shash_alg *alg)
 {
 	struct crypto_alg *base = &alg->base;
 
-	if (alg->digestsize > PAGE_SIZE / 8 ||
-	    alg->descsize > PAGE_SIZE / 8 ||
-	    alg->statesize > PAGE_SIZE / 8)
+	if (alg->digestsize > HASH_MAX_DIGESTSIZE ||
+	    alg->descsize > HASH_MAX_DESCSIZE ||
+	    alg->statesize > HASH_MAX_STATESIZE)
 		return -EINVAL;
 
 	base->cra_type = &crypto_shash_type;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 76e432cab75d..21587011ab0f 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -151,9 +151,13 @@ struct shash_desc {
 	void *__ctx[] CRYPTO_MINALIGN_ATTR;
 };
 
+#define HASH_MAX_DIGESTSIZE	 64
+#define HASH_MAX_DESCSIZE	360
+#define HASH_MAX_STATESIZE	512
+
 #define SHASH_DESC_ON_STACK(shash, ctx)				  \
 	char __##shash##_desc[sizeof(struct shash_desc) +	  \
-		crypto_shash_descsize(ctx)] CRYPTO_MINALIGN_ATTR; \
+		HASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR; \
 	struct shash_desc *shash = (struct shash_desc *)__##shash##_desc
 
 /**
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 04/18] dm: Remove VLA usage from hashes
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (2 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 03/18] crypto: hash: " Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 05/18] crypto alg: Introduce generic max blocksize and alignmask Kees Cook
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this uses
the new HASH_MAX_DIGESTSIZE from the crypto layer to allocate the upper
bounds on stack usage.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/md/dm-integrity.c  | 23 +++++++++++++++++------
 drivers/md/dm-verity-fec.c |  5 ++++-
 2 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 86438b2f10dd..884edd7cf1d0 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -521,7 +521,12 @@ static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result
 		}
 		memset(result + size, 0, JOURNAL_MAC_SIZE - size);
 	} else {
-		__u8 digest[size];
+		__u8 digest[HASH_MAX_DIGESTSIZE];
+
+		if (WARN_ON(size > sizeof(digest))) {
+			dm_integrity_io_error(ic, "digest_size", -EINVAL);
+			goto err;
+		}
 		r = crypto_shash_final(desc, digest);
 		if (unlikely(r)) {
 			dm_integrity_io_error(ic, "crypto_shash_final", r);
@@ -1244,7 +1249,7 @@ static void integrity_metadata(struct work_struct *w)
 		struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));
 		char *checksums;
 		unsigned extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0;
-		char checksums_onstack[ic->tag_size + extra_space];
+		char checksums_onstack[HASH_MAX_DIGESTSIZE];
 		unsigned sectors_to_process = dio->range.n_sectors;
 		sector_t sector = dio->range.logical_sector;
 
@@ -1253,8 +1258,14 @@ static void integrity_metadata(struct work_struct *w)
 
 		checksums = kmalloc((PAGE_SIZE >> SECTOR_SHIFT >> ic->sb->log2_sectors_per_block) * ic->tag_size + extra_space,
 				    GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
-		if (!checksums)
+		if (!checksums) {
 			checksums = checksums_onstack;
+			if (WARN_ON(extra_space &&
+				    digest_size > sizeof(checksums_onstack))) {
+				r = -EINVAL;
+				goto error;
+			}
+		}
 
 		__bio_for_each_segment(bv, bio, iter, dio->orig_bi_iter) {
 			unsigned pos;
@@ -1466,7 +1477,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
 				} while (++s < ic->sectors_per_block);
 #ifdef INTERNAL_VERIFY
 				if (ic->internal_hash) {
-					char checksums_onstack[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+					char checksums_onstack[max(HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
 
 					integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
 					if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
@@ -1516,7 +1527,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
 				if (ic->internal_hash) {
 					unsigned digest_size = crypto_shash_digestsize(ic->internal_hash);
 					if (unlikely(digest_size > ic->tag_size)) {
-						char checksums_onstack[digest_size];
+						char checksums_onstack[HASH_MAX_DIGESTSIZE];
 						integrity_sector_checksum(ic, logical_sector, (char *)js, checksums_onstack);
 						memcpy(journal_entry_tag(ic, je), checksums_onstack, ic->tag_size);
 					} else
@@ -1937,7 +1948,7 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start,
 				    unlikely(from_replay) &&
 #endif
 				    ic->internal_hash) {
-					char test_tag[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+					char test_tag[max_t(size_t, HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
 
 					integrity_sector_checksum(ic, sec + ((l - j) << ic->sb->log2_sectors_per_block),
 								  (char *)access_journal_data(ic, i, l), test_tag);
diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index 684af08d0747..0ce04e5b4afb 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -212,12 +212,15 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io,
 	struct dm_verity_fec_io *fio = fec_io(io);
 	u64 block, ileaved;
 	u8 *bbuf, *rs_block;
-	u8 want_digest[v->digest_size];
+	u8 want_digest[HASH_MAX_DIGESTSIZE];
 	unsigned n, k;
 
 	if (neras)
 		*neras = 0;
 
+	if (WARN_ON(v->digest_size > sizeof(want_digest)))
+		return -EINVAL;
+
 	/*
 	 * read each of the rsn data blocks that are part of the RS block, and
 	 * interleave contents to available bufs
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 05/18] crypto alg: Introduce generic max blocksize and alignmask
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (3 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 04/18] dm: Remove VLA usage from hashes Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 06/18] crypto: qat: Remove VLA usage Kees Cook
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
exposes a new general upper bound on crypto blocksize and alignmask
(higher than for the existing cipher limits) for VLA removal,
and introduces new checks.

At present, the highest cra_alignmask in the kernel is 63. The highest
cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the
new blocksize limit, I went with 160 (20 8-byte words).

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/algapi.c         | 7 ++++++-
 include/crypto/algapi.h | 4 +++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index c0755cf4f53f..496fc51bf215 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -57,9 +57,14 @@ static int crypto_check_alg(struct crypto_alg *alg)
 	if (alg->cra_alignmask & (alg->cra_alignmask + 1))
 		return -EINVAL;
 
-	if (alg->cra_blocksize > PAGE_SIZE / 8)
+	/* General maximums for all algs. */
+	if (alg->cra_alignmask > MAX_ALGAPI_ALIGNMASK)
 		return -EINVAL;
 
+	if (alg->cra_blocksize > MAX_ALGAPI_BLOCKSIZE)
+		return -EINVAL;
+
+	/* Lower maximums for specific alg types. */
 	if (!alg->cra_type && (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
 			       CRYPTO_ALG_TYPE_CIPHER) {
 		if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index bd5e8ccf1687..21371ac8f355 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -20,8 +20,10 @@
 /*
  * Maximum values for blocksize and alignmask, used to allocate
  * static buffers that are big enough for any combination of
- * ciphers and architectures.
+ * algs and architectures. Ciphers have a lower maximum size.
  */
+#define MAX_ALGAPI_BLOCKSIZE		160
+#define MAX_ALGAPI_ALIGNMASK		63
 #define MAX_CIPHER_BLOCKSIZE		16
 #define MAX_CIPHER_ALIGNMASK		15
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 06/18] crypto: qat: Remove VLA usage
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (4 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 05/18] crypto alg: Introduce generic max blocksize and alignmask Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 07/18] crypto: shash: Remove VLA usage in unaligned hashing Kees Cook
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this uses
the new upper bound for the stack buffer. Also adds a sanity check.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/qat/qat_common/qat_algs.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 1138e41d6805..a28edf7b792f 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -153,8 +153,8 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	struct sha512_state sha512;
 	int block_size = crypto_shash_blocksize(ctx->hash_tfm);
 	int digest_size = crypto_shash_digestsize(ctx->hash_tfm);
-	char ipad[block_size];
-	char opad[block_size];
+	char ipad[MAX_ALGAPI_BLOCKSIZE];
+	char opad[MAX_ALGAPI_BLOCKSIZE];
 	__be32 *hash_state_out;
 	__be64 *hash512_state_out;
 	int i, offset;
@@ -164,6 +164,10 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	shash->tfm = ctx->hash_tfm;
 	shash->flags = 0x0;
 
+	if (WARN_ON(block_size > sizeof(ipad) ||
+		    sizeof(ipad) != sizeof(opad)))
+		return -EINVAL;
+
 	if (auth_keylen > block_size) {
 		int ret = crypto_shash_digest(shash, auth_key,
 					      auth_keylen, ipad);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 07/18] crypto: shash: Remove VLA usage in unaligned hashing
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (5 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 06/18] crypto: qat: Remove VLA usage Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 08/18] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/shash.c               | 27 ++++++++++++++++-----------
 include/linux/compiler-gcc.h |  1 -
 2 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index 86d76b5c626c..d21f04d70dce 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_shash_setkey);
 
-static inline unsigned int shash_align_buffer_size(unsigned len,
-						   unsigned long mask)
-{
-	typedef u8 __aligned_largest u8_aligned;
-	return len + (mask & ~(__alignof__(u8_aligned) - 1));
-}
-
 static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
 				  unsigned int len)
 {
@@ -88,11 +81,17 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	unsigned int unaligned_len = alignmask + 1 -
 				     ((unsigned long)data & alignmask);
-	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
-		__aligned_largest;
+	/*
+	 * We cannot count on __aligned() working for large values:
+	 * https://patchwork.kernel.org/patch/9507697/
+	 */
+	u8 ubuf[MAX_ALGAPI_ALIGNMASK * 2];
 	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
 	int err;
 
+	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
+		return -EINVAL;
+
 	if (unaligned_len > len)
 		unaligned_len = len;
 
@@ -124,11 +123,17 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned int ds = crypto_shash_digestsize(tfm);
-	u8 ubuf[shash_align_buffer_size(ds, alignmask)]
-		__aligned_largest;
+	/*
+	 * We cannot count on __aligned() working for large values:
+	 * https://patchwork.kernel.org/patch/9507697/
+	 */
+	u8 ubuf[MAX_ALGAPI_ALIGNMASK + HASH_MAX_DIGESTSIZE];
 	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
 	int err;
 
+	if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
+		return -EINVAL;
+
 	err = shash->final(desc, buf);
 	if (err)
 		goto out;
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index f1a7492a5cc8..1f1cdef36a82 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -125,7 +125,6 @@
  */
 #define __pure			__attribute__((pure))
 #define __aligned(x)		__attribute__((aligned(x)))
-#define __aligned_largest	__attribute__((aligned))
 #define __printf(a, b)		__attribute__((format(printf, a, b)))
 #define __scanf(a, b)		__attribute__((format(scanf, a, b)))
 #define __attribute_const__	__attribute__((__const__))
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 08/18] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (6 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 07/18] crypto: shash: Remove VLA usage in unaligned hashing Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 09/18] ppp: mppe: Remove VLA usage Kees Cook
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
caps the skcipher request size similar to other limits and adds a sanity
check at registration. Looking at instrumented tcrypt output, the largest
is for lrw:

	crypt: testing lrw(aes)
	crypto_skcipher_set_reqsize: 8
	crypto_skcipher_set_reqsize: 88
	crypto_skcipher_set_reqsize: 472

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/internal/skcipher.h | 1 +
 include/crypto/skcipher.h          | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index e42f7063f245..5035482cbe68 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -130,6 +130,7 @@ static inline struct crypto_skcipher *crypto_spawn_skcipher(
 static inline void crypto_skcipher_set_reqsize(
 	struct crypto_skcipher *skcipher, unsigned int reqsize)
 {
+	BUG_ON(reqsize > SKCIPHER_MAX_REQSIZE);
 	skcipher->reqsize = reqsize;
 }
 
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 2f327f090c3e..c48e194438cf 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -139,9 +139,11 @@ struct skcipher_alg {
 	struct crypto_alg base;
 };
 
+#define SKCIPHER_MAX_REQSIZE	472
+
 #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
 	char __##name##_desc[sizeof(struct skcipher_request) + \
-		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
+		SKCIPHER_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
 	struct skcipher_request *name = (void *)__##name##_desc
 
 /**
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 09/18] ppp: mppe: Remove VLA usage
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (7 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 08/18] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 10/18] x86/power/64: " Kees Cook
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
removes the discouraged use of AHASH_REQUEST_ON_STACK (and associated
VLA) by switching to shash directly and keeping the associated descriptor
allocated with the regular state on the heap.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 drivers/net/ppp/ppp_mppe.c | 56 ++++++++++++++++++++------------------
 1 file changed, 30 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ppp/ppp_mppe.c b/drivers/net/ppp/ppp_mppe.c
index 6c7fd98cb00a..a205750b431b 100644
--- a/drivers/net/ppp/ppp_mppe.c
+++ b/drivers/net/ppp/ppp_mppe.c
@@ -96,7 +96,7 @@ static inline void sha_pad_init(struct sha_pad *shapad)
  */
 struct ppp_mppe_state {
 	struct crypto_skcipher *arc4;
-	struct crypto_ahash *sha1;
+	struct shash_desc *sha1;
 	unsigned char *sha1_digest;
 	unsigned char master_key[MPPE_MAX_KEY_LEN];
 	unsigned char session_key[MPPE_MAX_KEY_LEN];
@@ -136,25 +136,16 @@ struct ppp_mppe_state {
  */
 static void get_new_key_from_sha(struct ppp_mppe_state * state)
 {
-	AHASH_REQUEST_ON_STACK(req, state->sha1);
-	struct scatterlist sg[4];
-	unsigned int nbytes;
-
-	sg_init_table(sg, 4);
-
-	nbytes = setup_sg(&sg[0], state->master_key, state->keylen);
-	nbytes += setup_sg(&sg[1], sha_pad->sha_pad1,
-			   sizeof(sha_pad->sha_pad1));
-	nbytes += setup_sg(&sg[2], state->session_key, state->keylen);
-	nbytes += setup_sg(&sg[3], sha_pad->sha_pad2,
-			   sizeof(sha_pad->sha_pad2));
-
-	ahash_request_set_tfm(req, state->sha1);
-	ahash_request_set_callback(req, 0, NULL, NULL);
-	ahash_request_set_crypt(req, sg, state->sha1_digest, nbytes);
-
-	crypto_ahash_digest(req);
-	ahash_request_zero(req);
+	crypto_shash_init(state->sha1);
+	crypto_shash_update(state->sha1, state->master_key,
+			    state->keylen);
+	crypto_shash_update(state->sha1, sha_pad->sha_pad1,
+			    sizeof(sha_pad->sha_pad1));
+	crypto_shash_update(state->sha1, state->session_key,
+			    state->keylen);
+	crypto_shash_update(state->sha1, sha_pad->sha_pad2,
+			    sizeof(sha_pad->sha_pad2));
+	crypto_shash_final(state->sha1, state->sha1_digest);
 }
 
 /*
@@ -200,6 +191,7 @@ static void mppe_rekey(struct ppp_mppe_state * state, int initial_key)
 static void *mppe_alloc(unsigned char *options, int optlen)
 {
 	struct ppp_mppe_state *state;
+	struct crypto_shash *shash;
 	unsigned int digestsize;
 
 	if (optlen != CILEN_MPPE + sizeof(state->master_key) ||
@@ -217,13 +209,21 @@ static void *mppe_alloc(unsigned char *options, int optlen)
 		goto out_free;
 	}
 
-	state->sha1 = crypto_alloc_ahash("sha1", 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(state->sha1)) {
-		state->sha1 = NULL;
+	shash = crypto_alloc_shash("sha1", 0, 0);
+	if (IS_ERR(shash))
+		goto out_free;
+
+	state->sha1 = kmalloc(sizeof(*state->sha1) +
+				     crypto_shash_descsize(shash),
+			      GFP_KERNEL);
+	if (!state->sha1) {
+		crypto_free_shash(shash);
 		goto out_free;
 	}
+	state->sha1->tfm = shash;
+	state->sha1->flags = 0;
 
-	digestsize = crypto_ahash_digestsize(state->sha1);
+	digestsize = crypto_shash_digestsize(shash);
 	if (digestsize < MPPE_MAX_KEY_LEN)
 		goto out_free;
 
@@ -246,7 +246,10 @@ static void *mppe_alloc(unsigned char *options, int optlen)
 
 out_free:
 	kfree(state->sha1_digest);
-	crypto_free_ahash(state->sha1);
+	if (state->sha1) {
+		crypto_free_shash(state->sha1->tfm);
+		kzfree(state->sha1);
+	}
 	crypto_free_skcipher(state->arc4);
 	kfree(state);
 out:
@@ -261,7 +264,8 @@ static void mppe_free(void *arg)
 	struct ppp_mppe_state *state = (struct ppp_mppe_state *) arg;
 	if (state) {
 		kfree(state->sha1_digest);
-		crypto_free_ahash(state->sha1);
+		crypto_free_shash(state->sha1->tfm);
+		kzfree(state->sha1);
 		crypto_free_skcipher(state->arc4);
 		kfree(state);
 	}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 10/18] x86/power/64: Remove VLA usage
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (8 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 09/18] ppp: mppe: Remove VLA usage Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-25 11:32   ` Rafael J. Wysocki
  2018-07-24 16:49 ` [PATCH v6 11/18] dm crypt: Convert essiv from ahash to shash Kees Cook
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
removes the discouraged use of AHASH_REQUEST_ON_STACK by switching to
shash directly and allocating the descriptor in heap memory (which should
be fine: the tfm has already been allocated there too).

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
---
 arch/x86/power/hibernate_64.c | 36 ++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/arch/x86/power/hibernate_64.c b/arch/x86/power/hibernate_64.c
index 67ccf64c8bd8..f8e3b668d20b 100644
--- a/arch/x86/power/hibernate_64.c
+++ b/arch/x86/power/hibernate_64.c
@@ -233,29 +233,35 @@ struct restore_data_record {
  */
 static int get_e820_md5(struct e820_table *table, void *buf)
 {
-	struct scatterlist sg;
-	struct crypto_ahash *tfm;
+	struct crypto_shash *tfm;
+	struct shash_desc *desc;
 	int size;
 	int ret = 0;
 
-	tfm = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
+	tfm = crypto_alloc_shash("md5", 0, 0);
 	if (IS_ERR(tfm))
 		return -ENOMEM;
 
-	{
-		AHASH_REQUEST_ON_STACK(req, tfm);
-		size = offsetof(struct e820_table, entries) + sizeof(struct e820_entry) * table->nr_entries;
-		ahash_request_set_tfm(req, tfm);
-		sg_init_one(&sg, (u8 *)table, size);
-		ahash_request_set_callback(req, 0, NULL, NULL);
-		ahash_request_set_crypt(req, &sg, buf, size);
-
-		if (crypto_ahash_digest(req))
-			ret = -EINVAL;
-		ahash_request_zero(req);
+	desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
+		       GFP_KERNEL);
+	if (!desc) {
+		ret = -ENOMEM;
+		goto free_tfm;
 	}
-	crypto_free_ahash(tfm);
 
+	desc->tfm = tfm;
+	desc->flags = 0;
+
+	size = offsetof(struct e820_table, entries) +
+		sizeof(struct e820_entry) * table->nr_entries;
+
+	if (crypto_shash_digest(desc, (u8 *)table, size, buf))
+		ret = -EINVAL;
+
+	kzfree(desc);
+
+free_tfm:
+	crypto_free_shash(tfm);
 	return ret;
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 11/18] dm crypt: Convert essiv from ahash to shash
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (9 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 10/18] x86/power/64: " Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 12/18] drbd: Convert " Kees Cook
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In preparing to remove all stack VLA usage from the kernel[1], this
removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of the
smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash to
direct shash. By removing a layer of indirection this both improves
performance and reduces stack usage. The stack allocation will be made
a fixed size in a later patch to the crypto subsystem.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
---
 drivers/md/dm-crypt.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index b61b069c33af..c4c922990090 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -99,7 +99,7 @@ struct crypt_iv_operations {
 };
 
 struct iv_essiv_private {
-	struct crypto_ahash *hash_tfm;
+	struct crypto_shash *hash_tfm;
 	u8 *salt;
 };
 
@@ -327,25 +327,22 @@ static int crypt_iv_plain64be_gen(struct crypt_config *cc, u8 *iv,
 static int crypt_iv_essiv_init(struct crypt_config *cc)
 {
 	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-	AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
-	struct scatterlist sg;
+	SHASH_DESC_ON_STACK(desc, essiv->hash_tfm);
 	struct crypto_cipher *essiv_tfm;
 	int err;
 
-	sg_init_one(&sg, cc->key, cc->key_size);
-	ahash_request_set_tfm(req, essiv->hash_tfm);
-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
-	ahash_request_set_crypt(req, &sg, essiv->salt, cc->key_size);
+	desc->tfm = essiv->hash_tfm;
+	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
 
-	err = crypto_ahash_digest(req);
-	ahash_request_zero(req);
+	err = crypto_shash_digest(desc, cc->key, cc->key_size, essiv->salt);
+	shash_desc_zero(desc);
 	if (err)
 		return err;
 
 	essiv_tfm = cc->iv_private;
 
 	err = crypto_cipher_setkey(essiv_tfm, essiv->salt,
-			    crypto_ahash_digestsize(essiv->hash_tfm));
+			    crypto_shash_digestsize(essiv->hash_tfm));
 	if (err)
 		return err;
 
@@ -356,7 +353,7 @@ static int crypt_iv_essiv_init(struct crypt_config *cc)
 static int crypt_iv_essiv_wipe(struct crypt_config *cc)
 {
 	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-	unsigned salt_size = crypto_ahash_digestsize(essiv->hash_tfm);
+	unsigned salt_size = crypto_shash_digestsize(essiv->hash_tfm);
 	struct crypto_cipher *essiv_tfm;
 	int r, err = 0;
 
@@ -408,7 +405,7 @@ static void crypt_iv_essiv_dtr(struct crypt_config *cc)
 	struct crypto_cipher *essiv_tfm;
 	struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
 
-	crypto_free_ahash(essiv->hash_tfm);
+	crypto_free_shash(essiv->hash_tfm);
 	essiv->hash_tfm = NULL;
 
 	kzfree(essiv->salt);
@@ -426,7 +423,7 @@ static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
 			      const char *opts)
 {
 	struct crypto_cipher *essiv_tfm = NULL;
-	struct crypto_ahash *hash_tfm = NULL;
+	struct crypto_shash *hash_tfm = NULL;
 	u8 *salt = NULL;
 	int err;
 
@@ -436,14 +433,14 @@ static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
 	}
 
 	/* Allocate hash algorithm */
-	hash_tfm = crypto_alloc_ahash(opts, 0, CRYPTO_ALG_ASYNC);
+	hash_tfm = crypto_alloc_shash(opts, 0, 0);
 	if (IS_ERR(hash_tfm)) {
 		ti->error = "Error initializing ESSIV hash";
 		err = PTR_ERR(hash_tfm);
 		goto bad;
 	}
 
-	salt = kzalloc(crypto_ahash_digestsize(hash_tfm), GFP_KERNEL);
+	salt = kzalloc(crypto_shash_digestsize(hash_tfm), GFP_KERNEL);
 	if (!salt) {
 		ti->error = "Error kmallocing salt storage in ESSIV";
 		err = -ENOMEM;
@@ -454,7 +451,7 @@ static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
 	cc->iv_gen_private.essiv.hash_tfm = hash_tfm;
 
 	essiv_tfm = alloc_essiv_cipher(cc, ti, salt,
-				       crypto_ahash_digestsize(hash_tfm));
+				       crypto_shash_digestsize(hash_tfm));
 	if (IS_ERR(essiv_tfm)) {
 		crypt_iv_essiv_dtr(cc);
 		return PTR_ERR(essiv_tfm);
@@ -465,7 +462,7 @@ static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
 
 bad:
 	if (hash_tfm && !IS_ERR(hash_tfm))
-		crypto_free_ahash(hash_tfm);
+		crypto_free_shash(hash_tfm);
 	kfree(salt);
 	return err;
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 12/18] drbd: Convert from ahash to shash
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (10 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 11/18] dm crypt: Convert essiv from ahash to shash Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-08-02 23:43   ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 13/18] wireless/lib80211: " Kees Cook
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In preparing to remove all stack VLA usage from the kernel[1], this
removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of
the smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash
to direct shash. By removing a layer of indirection this both improves
performance and reduces stack usage. The stack allocation will be made
a fixed size in a later patch to the crypto subsystem.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/block/drbd/drbd_int.h      | 13 +++----
 drivers/block/drbd/drbd_main.c     | 14 ++++----
 drivers/block/drbd/drbd_nl.c       | 39 +++++++--------------
 drivers/block/drbd/drbd_receiver.c | 35 ++++++++++---------
 drivers/block/drbd/drbd_worker.c   | 56 +++++++++++++-----------------
 5 files changed, 69 insertions(+), 88 deletions(-)

diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index bc4ed2ed40a2..97d8e290c2be 100644
--- a/drivers/block/drbd/drbd_int.h
+++ b/drivers/block/drbd/drbd_int.h
@@ -726,10 +726,10 @@ struct drbd_connection {
 	struct list_head transfer_log;	/* all requests not yet fully processed */
 
 	struct crypto_shash *cram_hmac_tfm;
-	struct crypto_ahash *integrity_tfm;  /* checksums we compute, updates protected by connection->data->mutex */
-	struct crypto_ahash *peer_integrity_tfm;  /* checksums we verify, only accessed from receiver thread  */
-	struct crypto_ahash *csums_tfm;
-	struct crypto_ahash *verify_tfm;
+	struct crypto_shash *integrity_tfm;  /* checksums we compute, updates protected by connection->data->mutex */
+	struct crypto_shash *peer_integrity_tfm;  /* checksums we verify, only accessed from receiver thread  */
+	struct crypto_shash *csums_tfm;
+	struct crypto_shash *verify_tfm;
 	void *int_dig_in;
 	void *int_dig_vv;
 
@@ -1533,8 +1533,9 @@ static inline void ov_out_of_sync_print(struct drbd_device *device)
 }
 
 
-extern void drbd_csum_bio(struct crypto_ahash *, struct bio *, void *);
-extern void drbd_csum_ee(struct crypto_ahash *, struct drbd_peer_request *, void *);
+extern void drbd_csum_bio(struct crypto_shash *, struct bio *, void *);
+extern void drbd_csum_ee(struct crypto_shash *, struct drbd_peer_request *,
+			 void *);
 /* worker callbacks */
 extern int w_e_end_data_req(struct drbd_work *, int);
 extern int w_e_end_rsdata_req(struct drbd_work *, int);
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index a80809bd3057..ccb54791d39c 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -1377,7 +1377,7 @@ void drbd_send_ack_dp(struct drbd_peer_device *peer_device, enum drbd_packet cmd
 		      struct p_data *dp, int data_size)
 {
 	if (peer_device->connection->peer_integrity_tfm)
-		data_size -= crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
+		data_size -= crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
 	_drbd_send_ack(peer_device, cmd, dp->sector, cpu_to_be32(data_size),
 		       dp->block_id);
 }
@@ -1690,7 +1690,7 @@ int drbd_send_dblock(struct drbd_peer_device *peer_device, struct drbd_request *
 	sock = &peer_device->connection->data;
 	p = drbd_prepare_command(peer_device, sock);
 	digest_size = peer_device->connection->integrity_tfm ?
-		      crypto_ahash_digestsize(peer_device->connection->integrity_tfm) : 0;
+		      crypto_shash_digestsize(peer_device->connection->integrity_tfm) : 0;
 
 	if (!p)
 		return -EIO;
@@ -1796,7 +1796,7 @@ int drbd_send_block(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
 	p = drbd_prepare_command(peer_device, sock);
 
 	digest_size = peer_device->connection->integrity_tfm ?
-		      crypto_ahash_digestsize(peer_device->connection->integrity_tfm) : 0;
+		      crypto_shash_digestsize(peer_device->connection->integrity_tfm) : 0;
 
 	if (!p)
 		return -EIO;
@@ -2561,11 +2561,11 @@ void conn_free_crypto(struct drbd_connection *connection)
 {
 	drbd_free_sock(connection);
 
-	crypto_free_ahash(connection->csums_tfm);
-	crypto_free_ahash(connection->verify_tfm);
+	crypto_free_shash(connection->csums_tfm);
+	crypto_free_shash(connection->verify_tfm);
 	crypto_free_shash(connection->cram_hmac_tfm);
-	crypto_free_ahash(connection->integrity_tfm);
-	crypto_free_ahash(connection->peer_integrity_tfm);
+	crypto_free_shash(connection->integrity_tfm);
+	crypto_free_shash(connection->peer_integrity_tfm);
 	kfree(connection->int_dig_in);
 	kfree(connection->int_dig_vv);
 
diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index b4f02768ba47..d15703b1ffe8 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -2303,10 +2303,10 @@ check_net_options(struct drbd_connection *connection, struct net_conf *new_net_c
 }
 
 struct crypto {
-	struct crypto_ahash *verify_tfm;
-	struct crypto_ahash *csums_tfm;
+	struct crypto_shash *verify_tfm;
+	struct crypto_shash *csums_tfm;
 	struct crypto_shash *cram_hmac_tfm;
-	struct crypto_ahash *integrity_tfm;
+	struct crypto_shash *integrity_tfm;
 };
 
 static int
@@ -2324,36 +2324,21 @@ alloc_shash(struct crypto_shash **tfm, char *tfm_name, int err_alg)
 	return NO_ERROR;
 }
 
-static int
-alloc_ahash(struct crypto_ahash **tfm, char *tfm_name, int err_alg)
-{
-	if (!tfm_name[0])
-		return NO_ERROR;
-
-	*tfm = crypto_alloc_ahash(tfm_name, 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(*tfm)) {
-		*tfm = NULL;
-		return err_alg;
-	}
-
-	return NO_ERROR;
-}
-
 static enum drbd_ret_code
 alloc_crypto(struct crypto *crypto, struct net_conf *new_net_conf)
 {
 	char hmac_name[CRYPTO_MAX_ALG_NAME];
 	enum drbd_ret_code rv;
 
-	rv = alloc_ahash(&crypto->csums_tfm, new_net_conf->csums_alg,
+	rv = alloc_shash(&crypto->csums_tfm, new_net_conf->csums_alg,
 			 ERR_CSUMS_ALG);
 	if (rv != NO_ERROR)
 		return rv;
-	rv = alloc_ahash(&crypto->verify_tfm, new_net_conf->verify_alg,
+	rv = alloc_shash(&crypto->verify_tfm, new_net_conf->verify_alg,
 			 ERR_VERIFY_ALG);
 	if (rv != NO_ERROR)
 		return rv;
-	rv = alloc_ahash(&crypto->integrity_tfm, new_net_conf->integrity_alg,
+	rv = alloc_shash(&crypto->integrity_tfm, new_net_conf->integrity_alg,
 			 ERR_INTEGRITY_ALG);
 	if (rv != NO_ERROR)
 		return rv;
@@ -2371,9 +2356,9 @@ alloc_crypto(struct crypto *crypto, struct net_conf *new_net_conf)
 static void free_crypto(struct crypto *crypto)
 {
 	crypto_free_shash(crypto->cram_hmac_tfm);
-	crypto_free_ahash(crypto->integrity_tfm);
-	crypto_free_ahash(crypto->csums_tfm);
-	crypto_free_ahash(crypto->verify_tfm);
+	crypto_free_shash(crypto->integrity_tfm);
+	crypto_free_shash(crypto->csums_tfm);
+	crypto_free_shash(crypto->verify_tfm);
 }
 
 int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
@@ -2450,17 +2435,17 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
 	rcu_assign_pointer(connection->net_conf, new_net_conf);
 
 	if (!rsr) {
-		crypto_free_ahash(connection->csums_tfm);
+		crypto_free_shash(connection->csums_tfm);
 		connection->csums_tfm = crypto.csums_tfm;
 		crypto.csums_tfm = NULL;
 	}
 	if (!ovr) {
-		crypto_free_ahash(connection->verify_tfm);
+		crypto_free_shash(connection->verify_tfm);
 		connection->verify_tfm = crypto.verify_tfm;
 		crypto.verify_tfm = NULL;
 	}
 
-	crypto_free_ahash(connection->integrity_tfm);
+	crypto_free_shash(connection->integrity_tfm);
 	connection->integrity_tfm = crypto.integrity_tfm;
 	if (connection->cstate >= C_WF_REPORT_PARAMS && connection->agreed_pro_version >= 100)
 		/* Do this without trying to take connection->data.mutex again.  */
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index be9450f5ad1c..76243e9ef277 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -1732,7 +1732,7 @@ static int receive_Barrier(struct drbd_connection *connection, struct packet_inf
 }
 
 /* quick wrapper in case payload size != request_size (write same) */
-static void drbd_csum_ee_size(struct crypto_ahash *h,
+static void drbd_csum_ee_size(struct crypto_shash *h,
 			      struct drbd_peer_request *r, void *d,
 			      unsigned int payload_size)
 {
@@ -1769,7 +1769,7 @@ read_in_block(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
 
 	digest_size = 0;
 	if (!trim && peer_device->connection->peer_integrity_tfm) {
-		digest_size = crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
+		digest_size = crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
 		/*
 		 * FIXME: Receive the incoming digest into the receive buffer
 		 *	  here, together with its struct p_data?
@@ -1905,7 +1905,7 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req
 
 	digest_size = 0;
 	if (peer_device->connection->peer_integrity_tfm) {
-		digest_size = crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
+		digest_size = crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
 		err = drbd_recv_all_warn(peer_device->connection, dig_in, digest_size);
 		if (err)
 			return err;
@@ -3540,7 +3540,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 	int p_proto, p_discard_my_data, p_two_primaries, cf;
 	struct net_conf *nc, *old_net_conf, *new_net_conf = NULL;
 	char integrity_alg[SHARED_SECRET_MAX] = "";
-	struct crypto_ahash *peer_integrity_tfm = NULL;
+	struct crypto_shash *peer_integrity_tfm = NULL;
 	void *int_dig_in = NULL, *int_dig_vv = NULL;
 
 	p_proto		= be32_to_cpu(p->protocol);
@@ -3621,7 +3621,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 		 * change.
 		 */
 
-		peer_integrity_tfm = crypto_alloc_ahash(integrity_alg, 0, CRYPTO_ALG_ASYNC);
+		peer_integrity_tfm = crypto_alloc_shash(integrity_alg, 0, CRYPTO_ALG_ASYNC);
 		if (IS_ERR(peer_integrity_tfm)) {
 			peer_integrity_tfm = NULL;
 			drbd_err(connection, "peer data-integrity-alg %s not supported\n",
@@ -3629,7 +3629,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 			goto disconnect;
 		}
 
-		hash_size = crypto_ahash_digestsize(peer_integrity_tfm);
+		hash_size = crypto_shash_digestsize(peer_integrity_tfm);
 		int_dig_in = kmalloc(hash_size, GFP_KERNEL);
 		int_dig_vv = kmalloc(hash_size, GFP_KERNEL);
 		if (!(int_dig_in && int_dig_vv)) {
@@ -3659,7 +3659,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 	mutex_unlock(&connection->resource->conf_update);
 	mutex_unlock(&connection->data.mutex);
 
-	crypto_free_ahash(connection->peer_integrity_tfm);
+	crypto_free_shash(connection->peer_integrity_tfm);
 	kfree(connection->int_dig_in);
 	kfree(connection->int_dig_vv);
 	connection->peer_integrity_tfm = peer_integrity_tfm;
@@ -3677,7 +3677,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 disconnect_rcu_unlock:
 	rcu_read_unlock();
 disconnect:
-	crypto_free_ahash(peer_integrity_tfm);
+	crypto_free_shash(peer_integrity_tfm);
 	kfree(int_dig_in);
 	kfree(int_dig_vv);
 	conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
@@ -3689,15 +3689,16 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
  * return: NULL (alg name was "")
  *         ERR_PTR(error) if something goes wrong
  *         or the crypto hash ptr, if it worked out ok. */
-static struct crypto_ahash *drbd_crypto_alloc_digest_safe(const struct drbd_device *device,
+static struct crypto_shash *drbd_crypto_alloc_digest_safe(
+		const struct drbd_device *device,
 		const char *alg, const char *name)
 {
-	struct crypto_ahash *tfm;
+	struct crypto_shash *tfm;
 
 	if (!alg[0])
 		return NULL;
 
-	tfm = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC);
+	tfm = crypto_alloc_shash(alg, 0, 0);
 	if (IS_ERR(tfm)) {
 		drbd_err(device, "Can not allocate \"%s\" as %s (reason: %ld)\n",
 			alg, name, PTR_ERR(tfm));
@@ -3750,8 +3751,8 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 	struct drbd_device *device;
 	struct p_rs_param_95 *p;
 	unsigned int header_size, data_size, exp_max_sz;
-	struct crypto_ahash *verify_tfm = NULL;
-	struct crypto_ahash *csums_tfm = NULL;
+	struct crypto_shash *verify_tfm = NULL;
+	struct crypto_shash *csums_tfm = NULL;
 	struct net_conf *old_net_conf, *new_net_conf = NULL;
 	struct disk_conf *old_disk_conf = NULL, *new_disk_conf = NULL;
 	const int apv = connection->agreed_pro_version;
@@ -3898,14 +3899,14 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 			if (verify_tfm) {
 				strcpy(new_net_conf->verify_alg, p->verify_alg);
 				new_net_conf->verify_alg_len = strlen(p->verify_alg) + 1;
-				crypto_free_ahash(peer_device->connection->verify_tfm);
+				crypto_free_shash(peer_device->connection->verify_tfm);
 				peer_device->connection->verify_tfm = verify_tfm;
 				drbd_info(device, "using verify-alg: \"%s\"\n", p->verify_alg);
 			}
 			if (csums_tfm) {
 				strcpy(new_net_conf->csums_alg, p->csums_alg);
 				new_net_conf->csums_alg_len = strlen(p->csums_alg) + 1;
-				crypto_free_ahash(peer_device->connection->csums_tfm);
+				crypto_free_shash(peer_device->connection->csums_tfm);
 				peer_device->connection->csums_tfm = csums_tfm;
 				drbd_info(device, "using csums-alg: \"%s\"\n", p->csums_alg);
 			}
@@ -3949,9 +3950,9 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 	mutex_unlock(&connection->resource->conf_update);
 	/* just for completeness: actually not needed,
 	 * as this is not reached if csums_tfm was ok. */
-	crypto_free_ahash(csums_tfm);
+	crypto_free_shash(csums_tfm);
 	/* but free the verify_tfm again, if csums_tfm did not work out */
-	crypto_free_ahash(verify_tfm);
+	crypto_free_shash(verify_tfm);
 	conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
 	return -EIO;
 }
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index 1476cb3439f4..62dd3700dd84 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -295,60 +295,54 @@ void drbd_request_endio(struct bio *bio)
 		complete_master_bio(device, &m);
 }
 
-void drbd_csum_ee(struct crypto_ahash *tfm, struct drbd_peer_request *peer_req, void *digest)
+void drbd_csum_ee(struct crypto_shash *tfm, struct drbd_peer_request *peer_req, void *digest)
 {
-	AHASH_REQUEST_ON_STACK(req, tfm);
-	struct scatterlist sg;
+	SHASH_DESC_ON_STACK(desc, tfm);
 	struct page *page = peer_req->pages;
 	struct page *tmp;
 	unsigned len;
 
-	ahash_request_set_tfm(req, tfm);
-	ahash_request_set_callback(req, 0, NULL, NULL);
+	desc->tfm = tfm;
+	desc->flags = 0;
 
-	sg_init_table(&sg, 1);
-	crypto_ahash_init(req);
+	crypto_shash_init(desc);
 
 	while ((tmp = page_chain_next(page))) {
 		/* all but the last page will be fully used */
-		sg_set_page(&sg, page, PAGE_SIZE, 0);
-		ahash_request_set_crypt(req, &sg, NULL, sg.length);
-		crypto_ahash_update(req);
+		crypto_shash_update(desc, page_to_virt(page), PAGE_SIZE);
 		page = tmp;
 	}
 	/* and now the last, possibly only partially used page */
 	len = peer_req->i.size & (PAGE_SIZE - 1);
-	sg_set_page(&sg, page, len ?: PAGE_SIZE, 0);
-	ahash_request_set_crypt(req, &sg, digest, sg.length);
-	crypto_ahash_finup(req);
-	ahash_request_zero(req);
+	crypto_shash_update(desc, page_to_virt(page), len ?: PAGE_SIZE);
+
+	crypto_shash_final(desc, digest);
+	shash_desc_zero(desc);
 }
 
-void drbd_csum_bio(struct crypto_ahash *tfm, struct bio *bio, void *digest)
+void drbd_csum_bio(struct crypto_shash *tfm, struct bio *bio, void *digest)
 {
-	AHASH_REQUEST_ON_STACK(req, tfm);
-	struct scatterlist sg;
+	SHASH_DESC_ON_STACK(desc, tfm);
 	struct bio_vec bvec;
 	struct bvec_iter iter;
 
-	ahash_request_set_tfm(req, tfm);
-	ahash_request_set_callback(req, 0, NULL, NULL);
+	desc->tfm = tfm;
+	desc->flags = 0;
 
-	sg_init_table(&sg, 1);
-	crypto_ahash_init(req);
+	crypto_shash_init(desc);
 
 	bio_for_each_segment(bvec, bio, iter) {
-		sg_set_page(&sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
-		ahash_request_set_crypt(req, &sg, NULL, sg.length);
-		crypto_ahash_update(req);
+		crypto_shash_update(desc, (u8 *)page_to_virt(bvec.bv_page) +
+					  bvec.bv_offset,
+				    bvec.bv_len);
+
 		/* REQ_OP_WRITE_SAME has only one segment,
 		 * checksum the payload only once. */
 		if (bio_op(bio) == REQ_OP_WRITE_SAME)
 			break;
 	}
-	ahash_request_set_crypt(req, NULL, digest, 0);
-	crypto_ahash_final(req);
-	ahash_request_zero(req);
+	crypto_shash_final(desc, digest);
+	shash_desc_zero(desc);
 }
 
 /* MAYBE merge common code with w_e_end_ov_req */
@@ -367,7 +361,7 @@ static int w_e_send_csum(struct drbd_work *w, int cancel)
 	if (unlikely((peer_req->flags & EE_WAS_ERROR) != 0))
 		goto out;
 
-	digest_size = crypto_ahash_digestsize(peer_device->connection->csums_tfm);
+	digest_size = crypto_shash_digestsize(peer_device->connection->csums_tfm);
 	digest = kmalloc(digest_size, GFP_NOIO);
 	if (digest) {
 		sector_t sector = peer_req->i.sector;
@@ -1205,7 +1199,7 @@ int w_e_end_csum_rs_req(struct drbd_work *w, int cancel)
 		 * a real fix would be much more involved,
 		 * introducing more locking mechanisms */
 		if (peer_device->connection->csums_tfm) {
-			digest_size = crypto_ahash_digestsize(peer_device->connection->csums_tfm);
+			digest_size = crypto_shash_digestsize(peer_device->connection->csums_tfm);
 			D_ASSERT(device, digest_size == di->digest_size);
 			digest = kmalloc(digest_size, GFP_NOIO);
 		}
@@ -1255,7 +1249,7 @@ int w_e_end_ov_req(struct drbd_work *w, int cancel)
 	if (unlikely(cancel))
 		goto out;
 
-	digest_size = crypto_ahash_digestsize(peer_device->connection->verify_tfm);
+	digest_size = crypto_shash_digestsize(peer_device->connection->verify_tfm);
 	digest = kmalloc(digest_size, GFP_NOIO);
 	if (!digest) {
 		err = 1;	/* terminate the connection in case the allocation failed */
@@ -1327,7 +1321,7 @@ int w_e_end_ov_reply(struct drbd_work *w, int cancel)
 	di = peer_req->digest;
 
 	if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
-		digest_size = crypto_ahash_digestsize(peer_device->connection->verify_tfm);
+		digest_size = crypto_shash_digestsize(peer_device->connection->verify_tfm);
 		digest = kmalloc(digest_size, GFP_NOIO);
 		if (digest) {
 			drbd_csum_ee(peer_device->connection->verify_tfm, peer_req, digest);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 13/18] wireless/lib80211: Convert from ahash to shash
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (11 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 12/18] drbd: Convert " Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-25  7:24   ` Johannes Berg
  2018-07-24 16:49 ` [PATCH v6 14/18] staging: rtl8192u: ieee80211: " Kees Cook
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

In preparing to remove all stack VLA usage from the kernel[1], this
removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of
the smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash
to direct shash. By removing a layer of indirection this both improves
performance and reduces stack usage. The stack allocation will be made
a fixed size in a later patch to the crypto subsystem.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 net/wireless/lib80211_crypt_tkip.c | 58 +++++++++++++++---------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/net/wireless/lib80211_crypt_tkip.c b/net/wireless/lib80211_crypt_tkip.c
index ba0a1f398ce5..21040aba3a81 100644
--- a/net/wireless/lib80211_crypt_tkip.c
+++ b/net/wireless/lib80211_crypt_tkip.c
@@ -65,9 +65,9 @@ struct lib80211_tkip_data {
 	int key_idx;
 
 	struct crypto_skcipher *rx_tfm_arc4;
-	struct crypto_ahash *rx_tfm_michael;
+	struct crypto_shash *rx_tfm_michael;
 	struct crypto_skcipher *tx_tfm_arc4;
-	struct crypto_ahash *tx_tfm_michael;
+	struct crypto_shash *tx_tfm_michael;
 
 	/* scratch buffers for virt_to_page() (crypto API) */
 	u8 rx_hdr[16], tx_hdr[16];
@@ -106,8 +106,7 @@ static void *lib80211_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->tx_tfm_michael = crypto_alloc_ahash("michael_mic", 0,
-						  CRYPTO_ALG_ASYNC);
+	priv->tx_tfm_michael = crypto_alloc_shash("michael_mic", 0, 0);
 	if (IS_ERR(priv->tx_tfm_michael)) {
 		priv->tx_tfm_michael = NULL;
 		goto fail;
@@ -120,8 +119,7 @@ static void *lib80211_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->rx_tfm_michael = crypto_alloc_ahash("michael_mic", 0,
-						  CRYPTO_ALG_ASYNC);
+	priv->rx_tfm_michael = crypto_alloc_shash("michael_mic", 0, 0);
 	if (IS_ERR(priv->rx_tfm_michael)) {
 		priv->rx_tfm_michael = NULL;
 		goto fail;
@@ -131,9 +129,9 @@ static void *lib80211_tkip_init(int key_idx)
 
       fail:
 	if (priv) {
-		crypto_free_ahash(priv->tx_tfm_michael);
+		crypto_free_shash(priv->tx_tfm_michael);
 		crypto_free_skcipher(priv->tx_tfm_arc4);
-		crypto_free_ahash(priv->rx_tfm_michael);
+		crypto_free_shash(priv->rx_tfm_michael);
 		crypto_free_skcipher(priv->rx_tfm_arc4);
 		kfree(priv);
 	}
@@ -145,9 +143,9 @@ static void lib80211_tkip_deinit(void *priv)
 {
 	struct lib80211_tkip_data *_priv = priv;
 	if (_priv) {
-		crypto_free_ahash(_priv->tx_tfm_michael);
+		crypto_free_shash(_priv->tx_tfm_michael);
 		crypto_free_skcipher(_priv->tx_tfm_arc4);
-		crypto_free_ahash(_priv->rx_tfm_michael);
+		crypto_free_shash(_priv->rx_tfm_michael);
 		crypto_free_skcipher(_priv->rx_tfm_arc4);
 	}
 	kfree(priv);
@@ -510,29 +508,31 @@ static int lib80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	return keyidx;
 }
 
-static int michael_mic(struct crypto_ahash *tfm_michael, u8 * key, u8 * hdr,
-		       u8 * data, size_t data_len, u8 * mic)
+static int michael_mic(struct crypto_shash *tfm_michael, u8 *key, u8 *hdr,
+		       u8 *data, size_t data_len, u8 *mic)
 {
-	AHASH_REQUEST_ON_STACK(req, tfm_michael);
-	struct scatterlist sg[2];
+	SHASH_DESC_ON_STACK(desc, tfm_michael);
 	int err;
 
-	if (tfm_michael == NULL) {
-		pr_warn("%s(): tfm_michael == NULL\n", __func__);
-		return -1;
-	}
-	sg_init_table(sg, 2);
-	sg_set_buf(&sg[0], hdr, 16);
-	sg_set_buf(&sg[1], data, data_len);
+	desc->tfm = tfm_michael;
+	desc->flags = 0;
 
-	if (crypto_ahash_setkey(tfm_michael, key, 8))
+	if (crypto_shash_setkey(tfm_michael, key, 8))
 		return -1;
 
-	ahash_request_set_tfm(req, tfm_michael);
-	ahash_request_set_callback(req, 0, NULL, NULL);
-	ahash_request_set_crypt(req, sg, mic, data_len + 16);
-	err = crypto_ahash_digest(req);
-	ahash_request_zero(req);
+	err = crypto_shash_init(desc);
+	if (err)
+		goto out;
+	err = crypto_shash_update(desc, hdr, 16);
+	if (err)
+		goto out;
+	err = crypto_shash_update(desc, data, data_len);
+	if (err)
+		goto out;
+	err = crypto_shash_final(desc, mic);
+
+out:
+	shash_desc_zero(desc);
 	return err;
 }
 
@@ -654,9 +654,9 @@ static int lib80211_tkip_set_key(void *key, int len, u8 * seq, void *priv)
 {
 	struct lib80211_tkip_data *tkey = priv;
 	int keyidx;
-	struct crypto_ahash *tfm = tkey->tx_tfm_michael;
+	struct crypto_shash *tfm = tkey->tx_tfm_michael;
 	struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4;
-	struct crypto_ahash *tfm3 = tkey->rx_tfm_michael;
+	struct crypto_shash *tfm3 = tkey->rx_tfm_michael;
 	struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4;
 
 	keyidx = tkey->key_idx;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 14/18] staging: rtl8192u: ieee80211: Convert from ahash to shash
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (12 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 13/18] wireless/lib80211: " Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 15/18] staging: rtl8192e: " Kees Cook
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

This is an identical change to the wireless/lib80211 of the same name.
In preparing to remove all stack VLA usage from the kernel[1], this
removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of
the smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash
to direct shash. By removing a layer of indirection this both improves
performance and reduces stack usage. The stack allocation will be made
a fixed size in a later patch to the crypto subsystem.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 .../rtl8192u/ieee80211/ieee80211_crypt_tkip.c | 57 +++++++++----------
 1 file changed, 28 insertions(+), 29 deletions(-)

diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c
index a7efaae4e25a..1088fa0aee0e 100644
--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c
+++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c
@@ -54,9 +54,9 @@ struct ieee80211_tkip_data {
 	int key_idx;
 
 	struct crypto_skcipher *rx_tfm_arc4;
-	struct crypto_ahash *rx_tfm_michael;
+	struct crypto_shash *rx_tfm_michael;
 	struct crypto_skcipher *tx_tfm_arc4;
-	struct crypto_ahash *tx_tfm_michael;
+	struct crypto_shash *tx_tfm_michael;
 
 	/* scratch buffers for virt_to_page() (crypto API) */
 	u8 rx_hdr[16], tx_hdr[16];
@@ -80,8 +80,7 @@ static void *ieee80211_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->tx_tfm_michael = crypto_alloc_ahash("michael_mic", 0,
-			CRYPTO_ALG_ASYNC);
+	priv->tx_tfm_michael = crypto_alloc_shash("michael_mic", 0, 0);
 	if (IS_ERR(priv->tx_tfm_michael)) {
 		printk(KERN_DEBUG "ieee80211_crypt_tkip: could not allocate "
 				"crypto API michael_mic\n");
@@ -98,8 +97,7 @@ static void *ieee80211_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->rx_tfm_michael = crypto_alloc_ahash("michael_mic", 0,
-			CRYPTO_ALG_ASYNC);
+	priv->rx_tfm_michael = crypto_alloc_shash("michael_mic", 0, 0);
 	if (IS_ERR(priv->rx_tfm_michael)) {
 		printk(KERN_DEBUG "ieee80211_crypt_tkip: could not allocate "
 				"crypto API michael_mic\n");
@@ -111,9 +109,9 @@ static void *ieee80211_tkip_init(int key_idx)
 
 fail:
 	if (priv) {
-		crypto_free_ahash(priv->tx_tfm_michael);
+		crypto_free_shash(priv->tx_tfm_michael);
 		crypto_free_skcipher(priv->tx_tfm_arc4);
-		crypto_free_ahash(priv->rx_tfm_michael);
+		crypto_free_shash(priv->rx_tfm_michael);
 		crypto_free_skcipher(priv->rx_tfm_arc4);
 		kfree(priv);
 	}
@@ -127,9 +125,9 @@ static void ieee80211_tkip_deinit(void *priv)
 	struct ieee80211_tkip_data *_priv = priv;
 
 	if (_priv) {
-		crypto_free_ahash(_priv->tx_tfm_michael);
+		crypto_free_shash(_priv->tx_tfm_michael);
 		crypto_free_skcipher(_priv->tx_tfm_arc4);
-		crypto_free_ahash(_priv->rx_tfm_michael);
+		crypto_free_shash(_priv->rx_tfm_michael);
 		crypto_free_skcipher(_priv->rx_tfm_arc4);
 	}
 	kfree(priv);
@@ -500,30 +498,31 @@ static int ieee80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 	return keyidx;
 }
 
-static int michael_mic(struct crypto_ahash *tfm_michael, u8 *key, u8 *hdr,
+static int michael_mic(struct crypto_shash *tfm_michael, u8 *key, u8 *hdr,
 		       u8 *data, size_t data_len, u8 *mic)
 {
-	AHASH_REQUEST_ON_STACK(req, tfm_michael);
-	struct scatterlist sg[2];
+	SHASH_DESC_ON_STACK(desc, tfm_michael);
 	int err;
 
-	if (tfm_michael == NULL) {
-		printk(KERN_WARNING "michael_mic: tfm_michael == NULL\n");
-		return -1;
-	}
-
-	sg_init_table(sg, 2);
-	sg_set_buf(&sg[0], hdr, 16);
-	sg_set_buf(&sg[1], data, data_len);
+	desc->tfm = tfm_michael;
+	desc->flags = 0;
 
-	if (crypto_ahash_setkey(tfm_michael, key, 8))
+	if (crypto_shash_setkey(tfm_michael, key, 8))
 		return -1;
 
-	ahash_request_set_tfm(req, tfm_michael);
-	ahash_request_set_callback(req, 0, NULL, NULL);
-	ahash_request_set_crypt(req, sg, mic, data_len + 16);
-	err = crypto_ahash_digest(req);
-	ahash_request_zero(req);
+	err = crypto_shash_init(desc);
+	if (err)
+		goto out;
+	err = crypto_shash_update(desc, hdr, 16);
+	if (err)
+		goto out;
+	err = crypto_shash_update(desc, data, data_len);
+	if (err)
+		goto out;
+	err = crypto_shash_final(desc, mic);
+
+out:
+	shash_desc_zero(desc);
 	return err;
 }
 
@@ -663,9 +662,9 @@ static int ieee80211_tkip_set_key(void *key, int len, u8 *seq, void *priv)
 {
 	struct ieee80211_tkip_data *tkey = priv;
 	int keyidx;
-	struct crypto_ahash *tfm = tkey->tx_tfm_michael;
+	struct crypto_shash *tfm = tkey->tx_tfm_michael;
 	struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4;
-	struct crypto_ahash *tfm3 = tkey->rx_tfm_michael;
+	struct crypto_shash *tfm3 = tkey->rx_tfm_michael;
 	struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4;
 
 	keyidx = tkey->key_idx;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 15/18] staging: rtl8192e: ieee80211: Convert from ahash to shash
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (13 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 14/18] staging: rtl8192u: ieee80211: " Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:49 ` [PATCH v6 16/18] rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer Kees Cook
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

This is an identical change to the wireless/lib80211 of the same name.
In preparing to remove all stack VLA usage from the kernel[1], this
removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of
the smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash
to direct shash. By removing a layer of indirection this both improves
performance and reduces stack usage. The stack allocation will be made
a fixed size in a later patch to the crypto subsystem.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/staging/rtl8192e/rtllib_crypt_tkip.c | 56 ++++++++++----------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/staging/rtl8192e/rtllib_crypt_tkip.c b/drivers/staging/rtl8192e/rtllib_crypt_tkip.c
index ae103b0b7a2a..9f18be14dda6 100644
--- a/drivers/staging/rtl8192e/rtllib_crypt_tkip.c
+++ b/drivers/staging/rtl8192e/rtllib_crypt_tkip.c
@@ -50,9 +50,9 @@ struct rtllib_tkip_data {
 
 	int key_idx;
 	struct crypto_skcipher *rx_tfm_arc4;
-	struct crypto_ahash *rx_tfm_michael;
+	struct crypto_shash *rx_tfm_michael;
 	struct crypto_skcipher *tx_tfm_arc4;
-	struct crypto_ahash *tx_tfm_michael;
+	struct crypto_shash *tx_tfm_michael;
 	/* scratch buffers for virt_to_page() (crypto API) */
 	u8 rx_hdr[16];
 	u8 tx_hdr[16];
@@ -74,8 +74,7 @@ static void *rtllib_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->tx_tfm_michael = crypto_alloc_ahash("michael_mic", 0,
-						  CRYPTO_ALG_ASYNC);
+	priv->tx_tfm_michael = crypto_alloc_shash("michael_mic", 0, 0);
 	if (IS_ERR(priv->tx_tfm_michael)) {
 		pr_debug("Could not allocate crypto API michael_mic\n");
 		priv->tx_tfm_michael = NULL;
@@ -90,8 +89,7 @@ static void *rtllib_tkip_init(int key_idx)
 		goto fail;
 	}
 
-	priv->rx_tfm_michael = crypto_alloc_ahash("michael_mic", 0,
-						  CRYPTO_ALG_ASYNC);
+	priv->rx_tfm_michael = crypto_alloc_shash("michael_mic", 0, 0);
 	if (IS_ERR(priv->rx_tfm_michael)) {
 		pr_debug("Could not allocate crypto API michael_mic\n");
 		priv->rx_tfm_michael = NULL;
@@ -101,9 +99,9 @@ static void *rtllib_tkip_init(int key_idx)
 
 fail:
 	if (priv) {
-		crypto_free_ahash(priv->tx_tfm_michael);
+		crypto_free_shash(priv->tx_tfm_michael);
 		crypto_free_skcipher(priv->tx_tfm_arc4);
-		crypto_free_ahash(priv->rx_tfm_michael);
+		crypto_free_shash(priv->rx_tfm_michael);
 		crypto_free_skcipher(priv->rx_tfm_arc4);
 		kfree(priv);
 	}
@@ -117,9 +115,9 @@ static void rtllib_tkip_deinit(void *priv)
 	struct rtllib_tkip_data *_priv = priv;
 
 	if (_priv) {
-		crypto_free_ahash(_priv->tx_tfm_michael);
+		crypto_free_shash(_priv->tx_tfm_michael);
 		crypto_free_skcipher(_priv->tx_tfm_arc4);
-		crypto_free_ahash(_priv->rx_tfm_michael);
+		crypto_free_shash(_priv->rx_tfm_michael);
 		crypto_free_skcipher(_priv->rx_tfm_arc4);
 	}
 	kfree(priv);
@@ -504,29 +502,31 @@ static int rtllib_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
 }
 
 
-static int michael_mic(struct crypto_ahash *tfm_michael, u8 *key, u8 *hdr,
+static int michael_mic(struct crypto_shash *tfm_michael, u8 *key, u8 *hdr,
 		       u8 *data, size_t data_len, u8 *mic)
 {
-	AHASH_REQUEST_ON_STACK(req, tfm_michael);
-	struct scatterlist sg[2];
+	SHASH_DESC_ON_STACK(desc, tfm_michael);
 	int err;
 
-	if (tfm_michael == NULL) {
-		pr_warn("michael_mic: tfm_michael == NULL\n");
-		return -1;
-	}
-	sg_init_table(sg, 2);
-	sg_set_buf(&sg[0], hdr, 16);
-	sg_set_buf(&sg[1], data, data_len);
+	desc->tfm = tfm_michael;
+	desc->flags = 0;
 
-	if (crypto_ahash_setkey(tfm_michael, key, 8))
+	if (crypto_shash_setkey(tfm_michael, key, 8))
 		return -1;
 
-	ahash_request_set_tfm(req, tfm_michael);
-	ahash_request_set_callback(req, 0, NULL, NULL);
-	ahash_request_set_crypt(req, sg, mic, data_len + 16);
-	err = crypto_ahash_digest(req);
-	ahash_request_zero(req);
+	err = crypto_shash_init(desc);
+	if (err)
+		goto out;
+	err = crypto_shash_update(desc, hdr, 16);
+	if (err)
+		goto out;
+	err = crypto_shash_update(desc, data, data_len);
+	if (err)
+		goto out;
+	err = crypto_shash_final(desc, mic);
+
+out:
+	shash_desc_zero(desc);
 	return err;
 }
 
@@ -663,9 +663,9 @@ static int rtllib_tkip_set_key(void *key, int len, u8 *seq, void *priv)
 {
 	struct rtllib_tkip_data *tkey = priv;
 	int keyidx;
-	struct crypto_ahash *tfm = tkey->tx_tfm_michael;
+	struct crypto_shash *tfm = tkey->tx_tfm_michael;
 	struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4;
-	struct crypto_ahash *tfm3 = tkey->rx_tfm_michael;
+	struct crypto_shash *tfm3 = tkey->rx_tfm_michael;
 	struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4;
 
 	keyidx = tkey->key_idx;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 16/18] rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (14 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 15/18] staging: rtl8192e: " Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-08-02 23:46   ` Kees Cook
  2018-08-03  9:14   ` David Howells
  2018-07-24 16:49 ` [PATCH v6 17/18] crypto: ccm: Remove VLA usage Kees Cook
  2018-07-24 16:49 ` [PATCH v6 18/18] crypto: Remove AHASH_REQUEST_ON_STACK Kees Cook
  17 siblings, 2 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

The use of SKCIPHER_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
(when less than 2048) once the VLA is no longer hidden from the check:

net/rxrpc/rxkad.c:398:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
net/rxrpc/rxkad.c:242:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]

This passes the initial SKCIPHER_REQUEST_ON_STACK allocation to the leaf
functions for reuse. Two requests allocated on the stack is not needed
when only one is used at a time.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 net/rxrpc/rxkad.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
index 278ac0807a60..6393391fac86 100644
--- a/net/rxrpc/rxkad.c
+++ b/net/rxrpc/rxkad.c
@@ -146,10 +146,10 @@ static int rxkad_prime_packet_security(struct rxrpc_connection *conn)
 static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
 				    struct sk_buff *skb,
 				    u32 data_size,
-				    void *sechdr)
+				    void *sechdr,
+				    struct skcipher_request *req)
 {
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-	SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
 	struct rxkad_level1_hdr hdr;
 	struct rxrpc_crypt iv;
 	struct scatterlist sg;
@@ -183,12 +183,12 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
 static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
 				       struct sk_buff *skb,
 				       u32 data_size,
-				       void *sechdr)
+				       void *sechdr,
+				       struct skcipher_request *req)
 {
 	const struct rxrpc_key_token *token;
 	struct rxkad_level2_hdr rxkhdr;
 	struct rxrpc_skb_priv *sp;
-	SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
 	struct rxrpc_crypt iv;
 	struct scatterlist sg[16];
 	struct sk_buff *trailer;
@@ -296,11 +296,12 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
 		ret = 0;
 		break;
 	case RXRPC_SECURITY_AUTH:
-		ret = rxkad_secure_packet_auth(call, skb, data_size, sechdr);
+		ret = rxkad_secure_packet_auth(call, skb, data_size, sechdr,
+					       req);
 		break;
 	case RXRPC_SECURITY_ENCRYPT:
 		ret = rxkad_secure_packet_encrypt(call, skb, data_size,
-						  sechdr);
+						  sechdr, req);
 		break;
 	default:
 		ret = -EPERM;
@@ -316,10 +317,10 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
  */
 static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
 				 unsigned int offset, unsigned int len,
-				 rxrpc_seq_t seq)
+				 rxrpc_seq_t seq,
+				 struct skcipher_request *req)
 {
 	struct rxkad_level1_hdr sechdr;
-	SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
 	struct rxrpc_crypt iv;
 	struct scatterlist sg[16];
 	struct sk_buff *trailer;
@@ -402,11 +403,11 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
  */
 static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
 				 unsigned int offset, unsigned int len,
-				 rxrpc_seq_t seq)
+				 rxrpc_seq_t seq,
+				 struct skcipher_request *req)
 {
 	const struct rxrpc_key_token *token;
 	struct rxkad_level2_hdr sechdr;
-	SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
 	struct rxrpc_crypt iv;
 	struct scatterlist _sg[4], *sg;
 	struct sk_buff *trailer;
@@ -549,9 +550,9 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
 	case RXRPC_SECURITY_PLAIN:
 		return 0;
 	case RXRPC_SECURITY_AUTH:
-		return rxkad_verify_packet_1(call, skb, offset, len, seq);
+		return rxkad_verify_packet_1(call, skb, offset, len, seq, req);
 	case RXRPC_SECURITY_ENCRYPT:
-		return rxkad_verify_packet_2(call, skb, offset, len, seq);
+		return rxkad_verify_packet_2(call, skb, offset, len, seq, req);
 	default:
 		return -ENOANO;
 	}
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 17/18] crypto: ccm: Remove VLA usage
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (15 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 16/18] rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 16:57   ` Ard Biesheuvel
  2018-07-24 16:49 ` [PATCH v6 18/18] crypto: Remove AHASH_REQUEST_ON_STACK Kees Cook
  17 siblings, 1 reply; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Ard Biesheuvel, Arnd Bergmann, Eric Biggers,
	Gustavo A. R. Silva, Alasdair Kergon, Rabin Vincent, Tim Chen,
	Rafael J. Wysocki, Pavel Machek, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, x86, Philipp Reisner, Lars Ellenberg, Jens Axboe,
	Giovanni Cabiddu, Mike Snitzer, Paul Mackerras,
	Greg Kroah-Hartman, David Howells, Johannes Berg,
	Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton, Geert Uytterhoeven,
	Josh Poimboeuf, David Woodhouse, Will Deacon, dm-devel, linux-pm,
	linux-crypto, drbd-dev, linux-block, qat-linux, linux-ppp,
	netdev, devel, linux-afs, linux-wireless, linux-kernel

From: Ard Biesheuvel <ard.biesheuvel@linaro.org>

In the quest to remove all stack VLA usage from the kernel[1], this
drops AHASH_REQUEST_ON_STACK by preallocated the ahash request area
with the skcipher area (which are not used at the same time).

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/ccm.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/crypto/ccm.c b/crypto/ccm.c
index 0a083342ec8c..b242fd0d3262 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -50,7 +50,10 @@ struct crypto_ccm_req_priv_ctx {
 	u32 flags;
 	struct scatterlist src[3];
 	struct scatterlist dst[3];
-	struct skcipher_request skreq;
+	union {
+		struct ahash_request ahreq;
+		struct skcipher_request skreq;
+	};
 };
 
 struct cbcmac_tfm_ctx {
@@ -181,7 +184,7 @@ static int crypto_ccm_auth(struct aead_request *req, struct scatterlist *plain,
 	struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
-	AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
+	struct ahash_request *ahreq = &pctx->ahreq;
 	unsigned int assoclen = req->assoclen;
 	struct scatterlist sg[3];
 	u8 *odata = pctx->odata;
@@ -427,7 +430,7 @@ static int crypto_ccm_init_tfm(struct crypto_aead *tfm)
 	crypto_aead_set_reqsize(
 		tfm,
 		align + sizeof(struct crypto_ccm_req_priv_ctx) +
-		crypto_skcipher_reqsize(ctr));
+		max(crypto_ahash_reqsize(mac), crypto_skcipher_reqsize(ctr)));
 
 	return 0;
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v6 18/18] crypto: Remove AHASH_REQUEST_ON_STACK
  2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
                   ` (16 preceding siblings ...)
  2018-07-24 16:49 ` [PATCH v6 17/18] crypto: ccm: Remove VLA usage Kees Cook
@ 2018-07-24 16:49 ` Kees Cook
  2018-07-24 17:31   ` Joe Perches
  17 siblings, 1 reply; 32+ messages in thread
From: Kees Cook @ 2018-07-24 16:49 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

All users of AHASH_REQUEST_ON_STACK have been removed from the kernel, so
drop it entirely so no VLAs get reintroduced by future users.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/hash.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 21587011ab0f..fca3e28c77a4 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -64,11 +64,6 @@ struct ahash_request {
 	void *__ctx[] CRYPTO_MINALIGN_ATTR;
 };
 
-#define AHASH_REQUEST_ON_STACK(name, ahash) \
-	char __##name##_desc[sizeof(struct ahash_request) + \
-		crypto_ahash_reqsize(ahash)] CRYPTO_MINALIGN_ATTR; \
-	struct ahash_request *name = (void *)__##name##_desc
-
 /**
  * struct ahash_alg - asynchronous message digest definition
  * @init: **[mandatory]** Initialize the transformation context. Intended only to initialize the
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 17/18] crypto: ccm: Remove VLA usage
  2018-07-24 16:49 ` [PATCH v6 17/18] crypto: ccm: Remove VLA usage Kees Cook
@ 2018-07-24 16:57   ` Ard Biesheuvel
  2018-07-24 17:51     ` Kees Cook
  0 siblings, 1 reply; 32+ messages in thread
From: Ard Biesheuvel @ 2018-07-24 16:57 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, Philipp Reisner, Lars Ellenberg,
	Jens Axboe, Giovanni Cabiddu, Mike Snitzer, Paul Mackerras,
	Greg Kroah-Hartman, David Howells, Johannes Berg,
	Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton, Geert Uytterhoeven,
	Josh Poimboeuf, David Woodhouse, Will Deacon,
	device-mapper development, linux-pm,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, drbd-dev,
	linux-block, qat-linux, linux-ppp, <netdev@vger.kernel.org>,
	open list:ANDROID DRIVERS, linux-afs,
	<linux-wireless@vger.kernel.org>,
	Linux Kernel Mailing List

On 24 July 2018 at 18:49, Kees Cook <keescook@chromium.org> wrote:
> From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>
> In the quest to remove all stack VLA usage from the kernel[1], this
> drops AHASH_REQUEST_ON_STACK by preallocated the ahash request area
> with the skcipher area (which are not used at the same time).
>

-EGRAMMAR

> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  crypto/ccm.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/crypto/ccm.c b/crypto/ccm.c
> index 0a083342ec8c..b242fd0d3262 100644
> --- a/crypto/ccm.c
> +++ b/crypto/ccm.c
> @@ -50,7 +50,10 @@ struct crypto_ccm_req_priv_ctx {
>         u32 flags;
>         struct scatterlist src[3];
>         struct scatterlist dst[3];
> -       struct skcipher_request skreq;
> +       union {
> +               struct ahash_request ahreq;
> +               struct skcipher_request skreq;
> +       };
>  };
>
>  struct cbcmac_tfm_ctx {
> @@ -181,7 +184,7 @@ static int crypto_ccm_auth(struct aead_request *req, struct scatterlist *plain,
>         struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
>         struct crypto_aead *aead = crypto_aead_reqtfm(req);
>         struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
> -       AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
> +       struct ahash_request *ahreq = &pctx->ahreq;
>         unsigned int assoclen = req->assoclen;
>         struct scatterlist sg[3];
>         u8 *odata = pctx->odata;
> @@ -427,7 +430,7 @@ static int crypto_ccm_init_tfm(struct crypto_aead *tfm)
>         crypto_aead_set_reqsize(
>                 tfm,
>                 align + sizeof(struct crypto_ccm_req_priv_ctx) +
> -               crypto_skcipher_reqsize(ctr));
> +               max(crypto_ahash_reqsize(mac), crypto_skcipher_reqsize(ctr)));
>
>         return 0;
>
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 18/18] crypto: Remove AHASH_REQUEST_ON_STACK
  2018-07-24 16:49 ` [PATCH v6 18/18] crypto: Remove AHASH_REQUEST_ON_STACK Kees Cook
@ 2018-07-24 17:31   ` Joe Perches
  2018-07-24 17:53     ` Kees Cook
  0 siblings, 1 reply; 32+ messages in thread
From: Joe Perches @ 2018-07-24 17:31 UTC (permalink / raw)
  To: Kees Cook, Herbert Xu
  Cc: Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Johannes Berg, Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton,
	Geert Uytterhoeven, Josh Poimboeuf, David Woodhouse, Will Deacon,
	dm-devel, linux-pm, linux-crypto, drbd-dev, linux-block,
	qat-linux, linux-ppp, netdev, devel, linux-afs, linux-wireless,
	linux-kernel

On Tue, 2018-07-24 at 09:49 -0700, Kees Cook wrote:
> All users of AHASH_REQUEST_ON_STACK have been removed from the kernel, so
> drop it entirely so no VLAs get reintroduced by future users.

checkpatch has a test for that.
It could now be removed as well.
---
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 34e4683de7a3..a3517334d661 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -796,7 +796,7 @@ our $declaration_macros = qr{(?x:
 	(?:$Storage\s+)?(?:[A-Z_][A-Z0-9]*_){0,2}(?:DEFINE|DECLARE)(?:_[A-Z0-9]+){1,6}\s*\(|
 	(?:$Storage\s+)?[HLP]?LIST_HEAD\s*\(|
 	(?:$Storage\s+)?${Type}\s+uninitialized_var\s*\(|
-	(?:SKCIPHER_REQUEST|SHASH_DESC|AHASH_REQUEST)_ON_STACK\s*\(
+	(?:SKCIPHER_REQUEST|SHASH_DESC)_ON_STACK\s*\(
 )};



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 17/18] crypto: ccm: Remove VLA usage
  2018-07-24 16:57   ` Ard Biesheuvel
@ 2018-07-24 17:51     ` Kees Cook
  0 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 17:51 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Herbert Xu, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, Philipp Reisner, Lars Ellenberg,
	Jens Axboe, Giovanni Cabiddu, Mike Snitzer, Paul Mackerras,
	Greg Kroah-Hartman, David Howells, Johannes Berg,
	Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton, Geert Uytterhoeven,
	Josh Poimboeuf, David Woodhouse, Will Deacon,
	device-mapper development, linux-pm,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, drbd-dev,
	linux-block, qat-linux, linux-ppp, <netdev@vger.kernel.org>,
	open list:ANDROID DRIVERS, linux-afs,
	<linux-wireless@vger.kernel.org>,
	Linux Kernel Mailing List

On Tue, Jul 24, 2018 at 9:57 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On 24 July 2018 at 18:49, Kees Cook <keescook@chromium.org> wrote:
>> From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>>
>> In the quest to remove all stack VLA usage from the kernel[1], this
>> drops AHASH_REQUEST_ON_STACK by preallocated the ahash request area
>> with the skcipher area (which are not used at the same time).
>>
>
> -EGRAMMAR

Whoops. Will fix my poor sentence merging. :)

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 18/18] crypto: Remove AHASH_REQUEST_ON_STACK
  2018-07-24 17:31   ` Joe Perches
@ 2018-07-24 17:53     ` Kees Cook
  0 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-07-24 17:53 UTC (permalink / raw)
  To: Joe Perches
  Cc: Herbert Xu, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	X86 ML, Philipp Reisner, Lars Ellenberg, Jens Axboe,
	Giovanni Cabiddu, Mike Snitzer, Paul Mackerras,
	Greg Kroah-Hartman, David Howells, Johannes Berg,
	Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton, Geert Uytterhoeven,
	Josh Poimboeuf, David Woodhouse, Will Deacon,
	device-mapper development, Linux PM list, linux-crypto, drbd-dev,
	linux-block, qat-linux, linux-ppp, Network Development,
	open list:ANDROID DRIVERS, linux-afs, linux-wireless, LKML

On Tue, Jul 24, 2018 at 10:31 AM, Joe Perches <joe@perches.com> wrote:
> On Tue, 2018-07-24 at 09:49 -0700, Kees Cook wrote:
>> All users of AHASH_REQUEST_ON_STACK have been removed from the kernel, so
>> drop it entirely so no VLAs get reintroduced by future users.
>
> checkpatch has a test for that.
> It could now be removed as well.
> ---
> diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
> index 34e4683de7a3..a3517334d661 100755
> --- a/scripts/checkpatch.pl
> +++ b/scripts/checkpatch.pl
> @@ -796,7 +796,7 @@ our $declaration_macros = qr{(?x:
>         (?:$Storage\s+)?(?:[A-Z_][A-Z0-9]*_){0,2}(?:DEFINE|DECLARE)(?:_[A-Z0-9]+){1,6}\s*\(|
>         (?:$Storage\s+)?[HLP]?LIST_HEAD\s*\(|
>         (?:$Storage\s+)?${Type}\s+uninitialized_var\s*\(|
> -       (?:SKCIPHER_REQUEST|SHASH_DESC|AHASH_REQUEST)_ON_STACK\s*\(
> +       (?:SKCIPHER_REQUEST|SHASH_DESC)_ON_STACK\s*\(
>  )};

Ah! Cool. I've added this now.

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 13/18] wireless/lib80211: Convert from ahash to shash
  2018-07-24 16:49 ` [PATCH v6 13/18] wireless/lib80211: " Kees Cook
@ 2018-07-25  7:24   ` Johannes Berg
  0 siblings, 0 replies; 32+ messages in thread
From: Johannes Berg @ 2018-07-25  7:24 UTC (permalink / raw)
  To: Kees Cook, Herbert Xu
  Cc: Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Philipp Reisner, Lars Ellenberg, Jens Axboe, Giovanni Cabiddu,
	Mike Snitzer, Paul Mackerras, Greg Kroah-Hartman, David Howells,
	Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton, Geert Uytterhoeven,
	Josh Poimboeuf, David Woodhouse, Will Deacon, dm-devel, linux-pm,
	linux-crypto, drbd-dev, linux-block, qat-linux, linux-ppp,
	netdev, devel, linux-afs, linux-wireless, linux-kernel

On Tue, 2018-07-24 at 09:49 -0700, Kees Cook wrote:
> In preparing to remove all stack VLA usage from the kernel[1], this
> removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of
> the smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash
> to direct shash. By removing a layer of indirection this both improves
> performance and reduces stack usage. The stack allocation will be made
> a fixed size in a later patch to the crypto subsystem.
> 
I think you sent this before - it should be in net-next now.

johannes

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 10/18] x86/power/64: Remove VLA usage
  2018-07-24 16:49 ` [PATCH v6 10/18] x86/power/64: " Kees Cook
@ 2018-07-25 11:32   ` Rafael J. Wysocki
  2018-07-25 18:01     ` Kees Cook
  0 siblings, 1 reply; 32+ messages in thread
From: Rafael J. Wysocki @ 2018-07-25 11:32 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, Philipp Reisner, Lars Ellenberg,
	Jens Axboe, Giovanni Cabiddu, Mike Snitzer, Paul Mackerras,
	Greg Kroah-Hartman, David Howells, Johannes Berg,
	Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton, Geert Uytterhoeven,
	Josh Poimboeuf, David Woodhouse, Will Deacon, dm-devel, Linux PM,
	linux-crypto, drbd-dev, linux-block, qat-linux, linux-ppp,
	netdev, devel, linux-afs, open list:NETWORKING DRIVERS (WIRELESS),
	Linux Kernel Mailing List

On Tue, Jul 24, 2018 at 6:49 PM, Kees Cook <keescook@chromium.org> wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> removes the discouraged use of AHASH_REQUEST_ON_STACK by switching to
> shash directly and allocating the descriptor in heap memory (which should
> be fine: the tfm has already been allocated there too).
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> Acked-by: Pavel Machek <pavel@ucw.cz>

I think I can queue this up if there are no objections from others.

Do you want me to do that?

> ---
>  arch/x86/power/hibernate_64.c | 36 ++++++++++++++++++++---------------
>  1 file changed, 21 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/power/hibernate_64.c b/arch/x86/power/hibernate_64.c
> index 67ccf64c8bd8..f8e3b668d20b 100644
> --- a/arch/x86/power/hibernate_64.c
> +++ b/arch/x86/power/hibernate_64.c
> @@ -233,29 +233,35 @@ struct restore_data_record {
>   */
>  static int get_e820_md5(struct e820_table *table, void *buf)
>  {
> -       struct scatterlist sg;
> -       struct crypto_ahash *tfm;
> +       struct crypto_shash *tfm;
> +       struct shash_desc *desc;
>         int size;
>         int ret = 0;
>
> -       tfm = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
> +       tfm = crypto_alloc_shash("md5", 0, 0);
>         if (IS_ERR(tfm))
>                 return -ENOMEM;
>
> -       {
> -               AHASH_REQUEST_ON_STACK(req, tfm);
> -               size = offsetof(struct e820_table, entries) + sizeof(struct e820_entry) * table->nr_entries;
> -               ahash_request_set_tfm(req, tfm);
> -               sg_init_one(&sg, (u8 *)table, size);
> -               ahash_request_set_callback(req, 0, NULL, NULL);
> -               ahash_request_set_crypt(req, &sg, buf, size);
> -
> -               if (crypto_ahash_digest(req))
> -                       ret = -EINVAL;
> -               ahash_request_zero(req);
> +       desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
> +                      GFP_KERNEL);
> +       if (!desc) {
> +               ret = -ENOMEM;
> +               goto free_tfm;
>         }
> -       crypto_free_ahash(tfm);
>
> +       desc->tfm = tfm;
> +       desc->flags = 0;
> +
> +       size = offsetof(struct e820_table, entries) +
> +               sizeof(struct e820_entry) * table->nr_entries;
> +
> +       if (crypto_shash_digest(desc, (u8 *)table, size, buf))
> +               ret = -EINVAL;
> +
> +       kzfree(desc);
> +
> +free_tfm:
> +       crypto_free_shash(tfm);
>         return ret;
>  }
>
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 10/18] x86/power/64: Remove VLA usage
  2018-07-25 11:32   ` Rafael J. Wysocki
@ 2018-07-25 18:01     ` Kees Cook
  2018-08-06 10:28       ` Rafael J. Wysocki
  0 siblings, 1 reply; 32+ messages in thread
From: Kees Cook @ 2018-07-25 18:01 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Herbert Xu, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	Alasdair Kergon, Rabin Vincent, Tim Chen, Rafael J. Wysocki,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, Philipp Reisner, Lars Ellenberg,
	Jens Axboe, Giovanni Cabiddu, Mike Snitzer, Paul Mackerras,
	Greg Kroah-Hartman, David Howells, Johannes Berg,
	Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton, Geert Uytterhoeven,
	Josh Poimboeuf, David Woodhouse, Will Deacon,
	device-mapper development, Linux PM, linux-crypto, drbd-dev,
	linux-block, qat-linux, linux-ppp, netdev,
	open list:ANDROID DRIVERS, linux-afs,
	open list:NETWORKING DRIVERS (WIRELESS),
	Linux Kernel Mailing List

On Wed, Jul 25, 2018 at 4:32 AM, Rafael J. Wysocki <rafael@kernel.org> wrote:
> On Tue, Jul 24, 2018 at 6:49 PM, Kees Cook <keescook@chromium.org> wrote:
>> In the quest to remove all stack VLA usage from the kernel[1], this
>> removes the discouraged use of AHASH_REQUEST_ON_STACK by switching to
>> shash directly and allocating the descriptor in heap memory (which should
>> be fine: the tfm has already been allocated there too).
>>
>> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> Acked-by: Pavel Machek <pavel@ucw.cz>
>
> I think I can queue this up if there are no objections from others.
>
> Do you want me to do that?

Sure thing. It looks like the other stand-alone patches like this one
are getting taken into the non-crypto trees, so that's fine.

Thanks!

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 12/18] drbd: Convert from ahash to shash
  2018-07-24 16:49 ` [PATCH v6 12/18] drbd: Convert " Kees Cook
@ 2018-08-02 23:43   ` Kees Cook
  2018-08-03 13:44     ` Lars Ellenberg
  0 siblings, 1 reply; 32+ messages in thread
From: Kees Cook @ 2018-08-02 23:43 UTC (permalink / raw)
  To: Philipp Reisner, Lars Ellenberg, Jens Axboe
  Cc: Herbert Xu, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
	linux-crypto, drbd-dev, linux-block, LKML

On Tue, Jul 24, 2018 at 9:49 AM, Kees Cook <keescook@chromium.org> wrote:
> In preparing to remove all stack VLA usage from the kernel[1], this
> removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of
> the smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash
> to direct shash. By removing a layer of indirection this both improves
> performance and reduces stack usage. The stack allocation will be made
> a fixed size in a later patch to the crypto subsystem.

Philipp or Lars, what do you think of this? Can this go via your tree
or maybe via Jens?

I'd love an Ack or Review.

Thanks!

-Kees

>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  drivers/block/drbd/drbd_int.h      | 13 +++----
>  drivers/block/drbd/drbd_main.c     | 14 ++++----
>  drivers/block/drbd/drbd_nl.c       | 39 +++++++--------------
>  drivers/block/drbd/drbd_receiver.c | 35 ++++++++++---------
>  drivers/block/drbd/drbd_worker.c   | 56 +++++++++++++-----------------
>  5 files changed, 69 insertions(+), 88 deletions(-)
>
> diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
> index bc4ed2ed40a2..97d8e290c2be 100644
> --- a/drivers/block/drbd/drbd_int.h
> +++ b/drivers/block/drbd/drbd_int.h
> @@ -726,10 +726,10 @@ struct drbd_connection {
>         struct list_head transfer_log;  /* all requests not yet fully processed */
>
>         struct crypto_shash *cram_hmac_tfm;
> -       struct crypto_ahash *integrity_tfm;  /* checksums we compute, updates protected by connection->data->mutex */
> -       struct crypto_ahash *peer_integrity_tfm;  /* checksums we verify, only accessed from receiver thread  */
> -       struct crypto_ahash *csums_tfm;
> -       struct crypto_ahash *verify_tfm;
> +       struct crypto_shash *integrity_tfm;  /* checksums we compute, updates protected by connection->data->mutex */
> +       struct crypto_shash *peer_integrity_tfm;  /* checksums we verify, only accessed from receiver thread  */
> +       struct crypto_shash *csums_tfm;
> +       struct crypto_shash *verify_tfm;
>         void *int_dig_in;
>         void *int_dig_vv;
>
> @@ -1533,8 +1533,9 @@ static inline void ov_out_of_sync_print(struct drbd_device *device)
>  }
>
>
> -extern void drbd_csum_bio(struct crypto_ahash *, struct bio *, void *);
> -extern void drbd_csum_ee(struct crypto_ahash *, struct drbd_peer_request *, void *);
> +extern void drbd_csum_bio(struct crypto_shash *, struct bio *, void *);
> +extern void drbd_csum_ee(struct crypto_shash *, struct drbd_peer_request *,
> +                        void *);
>  /* worker callbacks */
>  extern int w_e_end_data_req(struct drbd_work *, int);
>  extern int w_e_end_rsdata_req(struct drbd_work *, int);
> diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
> index a80809bd3057..ccb54791d39c 100644
> --- a/drivers/block/drbd/drbd_main.c
> +++ b/drivers/block/drbd/drbd_main.c
> @@ -1377,7 +1377,7 @@ void drbd_send_ack_dp(struct drbd_peer_device *peer_device, enum drbd_packet cmd
>                       struct p_data *dp, int data_size)
>  {
>         if (peer_device->connection->peer_integrity_tfm)
> -               data_size -= crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
> +               data_size -= crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
>         _drbd_send_ack(peer_device, cmd, dp->sector, cpu_to_be32(data_size),
>                        dp->block_id);
>  }
> @@ -1690,7 +1690,7 @@ int drbd_send_dblock(struct drbd_peer_device *peer_device, struct drbd_request *
>         sock = &peer_device->connection->data;
>         p = drbd_prepare_command(peer_device, sock);
>         digest_size = peer_device->connection->integrity_tfm ?
> -                     crypto_ahash_digestsize(peer_device->connection->integrity_tfm) : 0;
> +                     crypto_shash_digestsize(peer_device->connection->integrity_tfm) : 0;
>
>         if (!p)
>                 return -EIO;
> @@ -1796,7 +1796,7 @@ int drbd_send_block(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
>         p = drbd_prepare_command(peer_device, sock);
>
>         digest_size = peer_device->connection->integrity_tfm ?
> -                     crypto_ahash_digestsize(peer_device->connection->integrity_tfm) : 0;
> +                     crypto_shash_digestsize(peer_device->connection->integrity_tfm) : 0;
>
>         if (!p)
>                 return -EIO;
> @@ -2561,11 +2561,11 @@ void conn_free_crypto(struct drbd_connection *connection)
>  {
>         drbd_free_sock(connection);
>
> -       crypto_free_ahash(connection->csums_tfm);
> -       crypto_free_ahash(connection->verify_tfm);
> +       crypto_free_shash(connection->csums_tfm);
> +       crypto_free_shash(connection->verify_tfm);
>         crypto_free_shash(connection->cram_hmac_tfm);
> -       crypto_free_ahash(connection->integrity_tfm);
> -       crypto_free_ahash(connection->peer_integrity_tfm);
> +       crypto_free_shash(connection->integrity_tfm);
> +       crypto_free_shash(connection->peer_integrity_tfm);
>         kfree(connection->int_dig_in);
>         kfree(connection->int_dig_vv);
>
> diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
> index b4f02768ba47..d15703b1ffe8 100644
> --- a/drivers/block/drbd/drbd_nl.c
> +++ b/drivers/block/drbd/drbd_nl.c
> @@ -2303,10 +2303,10 @@ check_net_options(struct drbd_connection *connection, struct net_conf *new_net_c
>  }
>
>  struct crypto {
> -       struct crypto_ahash *verify_tfm;
> -       struct crypto_ahash *csums_tfm;
> +       struct crypto_shash *verify_tfm;
> +       struct crypto_shash *csums_tfm;
>         struct crypto_shash *cram_hmac_tfm;
> -       struct crypto_ahash *integrity_tfm;
> +       struct crypto_shash *integrity_tfm;
>  };
>
>  static int
> @@ -2324,36 +2324,21 @@ alloc_shash(struct crypto_shash **tfm, char *tfm_name, int err_alg)
>         return NO_ERROR;
>  }
>
> -static int
> -alloc_ahash(struct crypto_ahash **tfm, char *tfm_name, int err_alg)
> -{
> -       if (!tfm_name[0])
> -               return NO_ERROR;
> -
> -       *tfm = crypto_alloc_ahash(tfm_name, 0, CRYPTO_ALG_ASYNC);
> -       if (IS_ERR(*tfm)) {
> -               *tfm = NULL;
> -               return err_alg;
> -       }
> -
> -       return NO_ERROR;
> -}
> -
>  static enum drbd_ret_code
>  alloc_crypto(struct crypto *crypto, struct net_conf *new_net_conf)
>  {
>         char hmac_name[CRYPTO_MAX_ALG_NAME];
>         enum drbd_ret_code rv;
>
> -       rv = alloc_ahash(&crypto->csums_tfm, new_net_conf->csums_alg,
> +       rv = alloc_shash(&crypto->csums_tfm, new_net_conf->csums_alg,
>                          ERR_CSUMS_ALG);
>         if (rv != NO_ERROR)
>                 return rv;
> -       rv = alloc_ahash(&crypto->verify_tfm, new_net_conf->verify_alg,
> +       rv = alloc_shash(&crypto->verify_tfm, new_net_conf->verify_alg,
>                          ERR_VERIFY_ALG);
>         if (rv != NO_ERROR)
>                 return rv;
> -       rv = alloc_ahash(&crypto->integrity_tfm, new_net_conf->integrity_alg,
> +       rv = alloc_shash(&crypto->integrity_tfm, new_net_conf->integrity_alg,
>                          ERR_INTEGRITY_ALG);
>         if (rv != NO_ERROR)
>                 return rv;
> @@ -2371,9 +2356,9 @@ alloc_crypto(struct crypto *crypto, struct net_conf *new_net_conf)
>  static void free_crypto(struct crypto *crypto)
>  {
>         crypto_free_shash(crypto->cram_hmac_tfm);
> -       crypto_free_ahash(crypto->integrity_tfm);
> -       crypto_free_ahash(crypto->csums_tfm);
> -       crypto_free_ahash(crypto->verify_tfm);
> +       crypto_free_shash(crypto->integrity_tfm);
> +       crypto_free_shash(crypto->csums_tfm);
> +       crypto_free_shash(crypto->verify_tfm);
>  }
>
>  int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
> @@ -2450,17 +2435,17 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
>         rcu_assign_pointer(connection->net_conf, new_net_conf);
>
>         if (!rsr) {
> -               crypto_free_ahash(connection->csums_tfm);
> +               crypto_free_shash(connection->csums_tfm);
>                 connection->csums_tfm = crypto.csums_tfm;
>                 crypto.csums_tfm = NULL;
>         }
>         if (!ovr) {
> -               crypto_free_ahash(connection->verify_tfm);
> +               crypto_free_shash(connection->verify_tfm);
>                 connection->verify_tfm = crypto.verify_tfm;
>                 crypto.verify_tfm = NULL;
>         }
>
> -       crypto_free_ahash(connection->integrity_tfm);
> +       crypto_free_shash(connection->integrity_tfm);
>         connection->integrity_tfm = crypto.integrity_tfm;
>         if (connection->cstate >= C_WF_REPORT_PARAMS && connection->agreed_pro_version >= 100)
>                 /* Do this without trying to take connection->data.mutex again.  */
> diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
> index be9450f5ad1c..76243e9ef277 100644
> --- a/drivers/block/drbd/drbd_receiver.c
> +++ b/drivers/block/drbd/drbd_receiver.c
> @@ -1732,7 +1732,7 @@ static int receive_Barrier(struct drbd_connection *connection, struct packet_inf
>  }
>
>  /* quick wrapper in case payload size != request_size (write same) */
> -static void drbd_csum_ee_size(struct crypto_ahash *h,
> +static void drbd_csum_ee_size(struct crypto_shash *h,
>                               struct drbd_peer_request *r, void *d,
>                               unsigned int payload_size)
>  {
> @@ -1769,7 +1769,7 @@ read_in_block(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
>
>         digest_size = 0;
>         if (!trim && peer_device->connection->peer_integrity_tfm) {
> -               digest_size = crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
> +               digest_size = crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
>                 /*
>                  * FIXME: Receive the incoming digest into the receive buffer
>                  *        here, together with its struct p_data?
> @@ -1905,7 +1905,7 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req
>
>         digest_size = 0;
>         if (peer_device->connection->peer_integrity_tfm) {
> -               digest_size = crypto_ahash_digestsize(peer_device->connection->peer_integrity_tfm);
> +               digest_size = crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
>                 err = drbd_recv_all_warn(peer_device->connection, dig_in, digest_size);
>                 if (err)
>                         return err;
> @@ -3540,7 +3540,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
>         int p_proto, p_discard_my_data, p_two_primaries, cf;
>         struct net_conf *nc, *old_net_conf, *new_net_conf = NULL;
>         char integrity_alg[SHARED_SECRET_MAX] = "";
> -       struct crypto_ahash *peer_integrity_tfm = NULL;
> +       struct crypto_shash *peer_integrity_tfm = NULL;
>         void *int_dig_in = NULL, *int_dig_vv = NULL;
>
>         p_proto         = be32_to_cpu(p->protocol);
> @@ -3621,7 +3621,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
>                  * change.
>                  */
>
> -               peer_integrity_tfm = crypto_alloc_ahash(integrity_alg, 0, CRYPTO_ALG_ASYNC);
> +               peer_integrity_tfm = crypto_alloc_shash(integrity_alg, 0, CRYPTO_ALG_ASYNC);
>                 if (IS_ERR(peer_integrity_tfm)) {
>                         peer_integrity_tfm = NULL;
>                         drbd_err(connection, "peer data-integrity-alg %s not supported\n",
> @@ -3629,7 +3629,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
>                         goto disconnect;
>                 }
>
> -               hash_size = crypto_ahash_digestsize(peer_integrity_tfm);
> +               hash_size = crypto_shash_digestsize(peer_integrity_tfm);
>                 int_dig_in = kmalloc(hash_size, GFP_KERNEL);
>                 int_dig_vv = kmalloc(hash_size, GFP_KERNEL);
>                 if (!(int_dig_in && int_dig_vv)) {
> @@ -3659,7 +3659,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
>         mutex_unlock(&connection->resource->conf_update);
>         mutex_unlock(&connection->data.mutex);
>
> -       crypto_free_ahash(connection->peer_integrity_tfm);
> +       crypto_free_shash(connection->peer_integrity_tfm);
>         kfree(connection->int_dig_in);
>         kfree(connection->int_dig_vv);
>         connection->peer_integrity_tfm = peer_integrity_tfm;
> @@ -3677,7 +3677,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
>  disconnect_rcu_unlock:
>         rcu_read_unlock();
>  disconnect:
> -       crypto_free_ahash(peer_integrity_tfm);
> +       crypto_free_shash(peer_integrity_tfm);
>         kfree(int_dig_in);
>         kfree(int_dig_vv);
>         conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
> @@ -3689,15 +3689,16 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
>   * return: NULL (alg name was "")
>   *         ERR_PTR(error) if something goes wrong
>   *         or the crypto hash ptr, if it worked out ok. */
> -static struct crypto_ahash *drbd_crypto_alloc_digest_safe(const struct drbd_device *device,
> +static struct crypto_shash *drbd_crypto_alloc_digest_safe(
> +               const struct drbd_device *device,
>                 const char *alg, const char *name)
>  {
> -       struct crypto_ahash *tfm;
> +       struct crypto_shash *tfm;
>
>         if (!alg[0])
>                 return NULL;
>
> -       tfm = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC);
> +       tfm = crypto_alloc_shash(alg, 0, 0);
>         if (IS_ERR(tfm)) {
>                 drbd_err(device, "Can not allocate \"%s\" as %s (reason: %ld)\n",
>                         alg, name, PTR_ERR(tfm));
> @@ -3750,8 +3751,8 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
>         struct drbd_device *device;
>         struct p_rs_param_95 *p;
>         unsigned int header_size, data_size, exp_max_sz;
> -       struct crypto_ahash *verify_tfm = NULL;
> -       struct crypto_ahash *csums_tfm = NULL;
> +       struct crypto_shash *verify_tfm = NULL;
> +       struct crypto_shash *csums_tfm = NULL;
>         struct net_conf *old_net_conf, *new_net_conf = NULL;
>         struct disk_conf *old_disk_conf = NULL, *new_disk_conf = NULL;
>         const int apv = connection->agreed_pro_version;
> @@ -3898,14 +3899,14 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
>                         if (verify_tfm) {
>                                 strcpy(new_net_conf->verify_alg, p->verify_alg);
>                                 new_net_conf->verify_alg_len = strlen(p->verify_alg) + 1;
> -                               crypto_free_ahash(peer_device->connection->verify_tfm);
> +                               crypto_free_shash(peer_device->connection->verify_tfm);
>                                 peer_device->connection->verify_tfm = verify_tfm;
>                                 drbd_info(device, "using verify-alg: \"%s\"\n", p->verify_alg);
>                         }
>                         if (csums_tfm) {
>                                 strcpy(new_net_conf->csums_alg, p->csums_alg);
>                                 new_net_conf->csums_alg_len = strlen(p->csums_alg) + 1;
> -                               crypto_free_ahash(peer_device->connection->csums_tfm);
> +                               crypto_free_shash(peer_device->connection->csums_tfm);
>                                 peer_device->connection->csums_tfm = csums_tfm;
>                                 drbd_info(device, "using csums-alg: \"%s\"\n", p->csums_alg);
>                         }
> @@ -3949,9 +3950,9 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
>         mutex_unlock(&connection->resource->conf_update);
>         /* just for completeness: actually not needed,
>          * as this is not reached if csums_tfm was ok. */
> -       crypto_free_ahash(csums_tfm);
> +       crypto_free_shash(csums_tfm);
>         /* but free the verify_tfm again, if csums_tfm did not work out */
> -       crypto_free_ahash(verify_tfm);
> +       crypto_free_shash(verify_tfm);
>         conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
>         return -EIO;
>  }
> diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
> index 1476cb3439f4..62dd3700dd84 100644
> --- a/drivers/block/drbd/drbd_worker.c
> +++ b/drivers/block/drbd/drbd_worker.c
> @@ -295,60 +295,54 @@ void drbd_request_endio(struct bio *bio)
>                 complete_master_bio(device, &m);
>  }
>
> -void drbd_csum_ee(struct crypto_ahash *tfm, struct drbd_peer_request *peer_req, void *digest)
> +void drbd_csum_ee(struct crypto_shash *tfm, struct drbd_peer_request *peer_req, void *digest)
>  {
> -       AHASH_REQUEST_ON_STACK(req, tfm);
> -       struct scatterlist sg;
> +       SHASH_DESC_ON_STACK(desc, tfm);
>         struct page *page = peer_req->pages;
>         struct page *tmp;
>         unsigned len;
>
> -       ahash_request_set_tfm(req, tfm);
> -       ahash_request_set_callback(req, 0, NULL, NULL);
> +       desc->tfm = tfm;
> +       desc->flags = 0;
>
> -       sg_init_table(&sg, 1);
> -       crypto_ahash_init(req);
> +       crypto_shash_init(desc);
>
>         while ((tmp = page_chain_next(page))) {
>                 /* all but the last page will be fully used */
> -               sg_set_page(&sg, page, PAGE_SIZE, 0);
> -               ahash_request_set_crypt(req, &sg, NULL, sg.length);
> -               crypto_ahash_update(req);
> +               crypto_shash_update(desc, page_to_virt(page), PAGE_SIZE);
>                 page = tmp;
>         }
>         /* and now the last, possibly only partially used page */
>         len = peer_req->i.size & (PAGE_SIZE - 1);
> -       sg_set_page(&sg, page, len ?: PAGE_SIZE, 0);
> -       ahash_request_set_crypt(req, &sg, digest, sg.length);
> -       crypto_ahash_finup(req);
> -       ahash_request_zero(req);
> +       crypto_shash_update(desc, page_to_virt(page), len ?: PAGE_SIZE);
> +
> +       crypto_shash_final(desc, digest);
> +       shash_desc_zero(desc);
>  }
>
> -void drbd_csum_bio(struct crypto_ahash *tfm, struct bio *bio, void *digest)
> +void drbd_csum_bio(struct crypto_shash *tfm, struct bio *bio, void *digest)
>  {
> -       AHASH_REQUEST_ON_STACK(req, tfm);
> -       struct scatterlist sg;
> +       SHASH_DESC_ON_STACK(desc, tfm);
>         struct bio_vec bvec;
>         struct bvec_iter iter;
>
> -       ahash_request_set_tfm(req, tfm);
> -       ahash_request_set_callback(req, 0, NULL, NULL);
> +       desc->tfm = tfm;
> +       desc->flags = 0;
>
> -       sg_init_table(&sg, 1);
> -       crypto_ahash_init(req);
> +       crypto_shash_init(desc);
>
>         bio_for_each_segment(bvec, bio, iter) {
> -               sg_set_page(&sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
> -               ahash_request_set_crypt(req, &sg, NULL, sg.length);
> -               crypto_ahash_update(req);
> +               crypto_shash_update(desc, (u8 *)page_to_virt(bvec.bv_page) +
> +                                         bvec.bv_offset,
> +                                   bvec.bv_len);
> +
>                 /* REQ_OP_WRITE_SAME has only one segment,
>                  * checksum the payload only once. */
>                 if (bio_op(bio) == REQ_OP_WRITE_SAME)
>                         break;
>         }
> -       ahash_request_set_crypt(req, NULL, digest, 0);
> -       crypto_ahash_final(req);
> -       ahash_request_zero(req);
> +       crypto_shash_final(desc, digest);
> +       shash_desc_zero(desc);
>  }
>
>  /* MAYBE merge common code with w_e_end_ov_req */
> @@ -367,7 +361,7 @@ static int w_e_send_csum(struct drbd_work *w, int cancel)
>         if (unlikely((peer_req->flags & EE_WAS_ERROR) != 0))
>                 goto out;
>
> -       digest_size = crypto_ahash_digestsize(peer_device->connection->csums_tfm);
> +       digest_size = crypto_shash_digestsize(peer_device->connection->csums_tfm);
>         digest = kmalloc(digest_size, GFP_NOIO);
>         if (digest) {
>                 sector_t sector = peer_req->i.sector;
> @@ -1205,7 +1199,7 @@ int w_e_end_csum_rs_req(struct drbd_work *w, int cancel)
>                  * a real fix would be much more involved,
>                  * introducing more locking mechanisms */
>                 if (peer_device->connection->csums_tfm) {
> -                       digest_size = crypto_ahash_digestsize(peer_device->connection->csums_tfm);
> +                       digest_size = crypto_shash_digestsize(peer_device->connection->csums_tfm);
>                         D_ASSERT(device, digest_size == di->digest_size);
>                         digest = kmalloc(digest_size, GFP_NOIO);
>                 }
> @@ -1255,7 +1249,7 @@ int w_e_end_ov_req(struct drbd_work *w, int cancel)
>         if (unlikely(cancel))
>                 goto out;
>
> -       digest_size = crypto_ahash_digestsize(peer_device->connection->verify_tfm);
> +       digest_size = crypto_shash_digestsize(peer_device->connection->verify_tfm);
>         digest = kmalloc(digest_size, GFP_NOIO);
>         if (!digest) {
>                 err = 1;        /* terminate the connection in case the allocation failed */
> @@ -1327,7 +1321,7 @@ int w_e_end_ov_reply(struct drbd_work *w, int cancel)
>         di = peer_req->digest;
>
>         if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
> -               digest_size = crypto_ahash_digestsize(peer_device->connection->verify_tfm);
> +               digest_size = crypto_shash_digestsize(peer_device->connection->verify_tfm);
>                 digest = kmalloc(digest_size, GFP_NOIO);
>                 if (digest) {
>                         drbd_csum_ee(peer_device->connection->verify_tfm, peer_req, digest);
> --
> 2.17.1
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 16/18] rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer
  2018-07-24 16:49 ` [PATCH v6 16/18] rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer Kees Cook
@ 2018-08-02 23:46   ` Kees Cook
  2018-08-03  9:14   ` David Howells
  1 sibling, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-08-02 23:46 UTC (permalink / raw)
  To: David Howells
  Cc: Herbert Xu, Eric Biggers, Gustavo A. R. Silva, linux-crypto,
	Network Development, LKML, David S. Miller

On Tue, Jul 24, 2018 at 9:49 AM, Kees Cook <keescook@chromium.org> wrote:
> The use of SKCIPHER_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
> (when less than 2048) once the VLA is no longer hidden from the check:
>
> net/rxrpc/rxkad.c:398:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> net/rxrpc/rxkad.c:242:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>
> This passes the initial SKCIPHER_REQUEST_ON_STACK allocation to the leaf
> functions for reuse. Two requests allocated on the stack is not needed
> when only one is used at a time.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

David (Howells), what do you think of this? Can this go via your tree
or maybe via netdev?

I'd love your Ack or Review. :)

Thanks,

-Kees

> ---
>  net/rxrpc/rxkad.c | 25 +++++++++++++------------
>  1 file changed, 13 insertions(+), 12 deletions(-)
>
> diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
> index 278ac0807a60..6393391fac86 100644
> --- a/net/rxrpc/rxkad.c
> +++ b/net/rxrpc/rxkad.c
> @@ -146,10 +146,10 @@ static int rxkad_prime_packet_security(struct rxrpc_connection *conn)
>  static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
>                                     struct sk_buff *skb,
>                                     u32 data_size,
> -                                   void *sechdr)
> +                                   void *sechdr,
> +                                   struct skcipher_request *req)
>  {
>         struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
> -       SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
>         struct rxkad_level1_hdr hdr;
>         struct rxrpc_crypt iv;
>         struct scatterlist sg;
> @@ -183,12 +183,12 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
>  static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
>                                        struct sk_buff *skb,
>                                        u32 data_size,
> -                                      void *sechdr)
> +                                      void *sechdr,
> +                                      struct skcipher_request *req)
>  {
>         const struct rxrpc_key_token *token;
>         struct rxkad_level2_hdr rxkhdr;
>         struct rxrpc_skb_priv *sp;
> -       SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
>         struct rxrpc_crypt iv;
>         struct scatterlist sg[16];
>         struct sk_buff *trailer;
> @@ -296,11 +296,12 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
>                 ret = 0;
>                 break;
>         case RXRPC_SECURITY_AUTH:
> -               ret = rxkad_secure_packet_auth(call, skb, data_size, sechdr);
> +               ret = rxkad_secure_packet_auth(call, skb, data_size, sechdr,
> +                                              req);
>                 break;
>         case RXRPC_SECURITY_ENCRYPT:
>                 ret = rxkad_secure_packet_encrypt(call, skb, data_size,
> -                                                 sechdr);
> +                                                 sechdr, req);
>                 break;
>         default:
>                 ret = -EPERM;
> @@ -316,10 +317,10 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
>   */
>  static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
>                                  unsigned int offset, unsigned int len,
> -                                rxrpc_seq_t seq)
> +                                rxrpc_seq_t seq,
> +                                struct skcipher_request *req)
>  {
>         struct rxkad_level1_hdr sechdr;
> -       SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
>         struct rxrpc_crypt iv;
>         struct scatterlist sg[16];
>         struct sk_buff *trailer;
> @@ -402,11 +403,11 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
>   */
>  static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
>                                  unsigned int offset, unsigned int len,
> -                                rxrpc_seq_t seq)
> +                                rxrpc_seq_t seq,
> +                                struct skcipher_request *req)
>  {
>         const struct rxrpc_key_token *token;
>         struct rxkad_level2_hdr sechdr;
> -       SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
>         struct rxrpc_crypt iv;
>         struct scatterlist _sg[4], *sg;
>         struct sk_buff *trailer;
> @@ -549,9 +550,9 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
>         case RXRPC_SECURITY_PLAIN:
>                 return 0;
>         case RXRPC_SECURITY_AUTH:
> -               return rxkad_verify_packet_1(call, skb, offset, len, seq);
> +               return rxkad_verify_packet_1(call, skb, offset, len, seq, req);
>         case RXRPC_SECURITY_ENCRYPT:
> -               return rxkad_verify_packet_2(call, skb, offset, len, seq);
> +               return rxkad_verify_packet_2(call, skb, offset, len, seq, req);
>         default:
>                 return -ENOANO;
>         }
> --
> 2.17.1
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 16/18] rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer
  2018-07-24 16:49 ` [PATCH v6 16/18] rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer Kees Cook
  2018-08-02 23:46   ` Kees Cook
@ 2018-08-03  9:14   ` David Howells
  1 sibling, 0 replies; 32+ messages in thread
From: David Howells @ 2018-08-03  9:14 UTC (permalink / raw)
  To: Kees Cook
  Cc: dhowells, Herbert Xu, Eric Biggers, Gustavo A. R. Silva,
	linux-crypto, Network Development, LKML, David S. Miller

Hmmm...  I'm wondering if the skcipher request should be allocated on call
initialisation and a pointer put in the rxrpc_call struct.  It's only used
serially for any particular call.

For the moment, I'll pass your patch onto DaveM.

David

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 12/18] drbd: Convert from ahash to shash
  2018-08-02 23:43   ` Kees Cook
@ 2018-08-03 13:44     ` Lars Ellenberg
  2018-08-06 17:27       ` Kees Cook
  0 siblings, 1 reply; 32+ messages in thread
From: Lars Ellenberg @ 2018-08-03 13:44 UTC (permalink / raw)
  To: Kees Cook
  Cc: Philipp Reisner, Jens Axboe, Herbert Xu, Arnd Bergmann,
	Eric Biggers, Gustavo A. R. Silva, linux-crypto, drbd-dev,
	linux-block, LKML

On Thu, Aug 02, 2018 at 04:43:19PM -0700, Kees Cook wrote:
> On Tue, Jul 24, 2018 at 9:49 AM, Kees Cook <keescook@chromium.org> wrote:
> > In preparing to remove all stack VLA usage from the kernel[1], this
> > removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of
> > the smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash
> > to direct shash. By removing a layer of indirection this both improves
> > performance and reduces stack usage. The stack allocation will be made
> > a fixed size in a later patch to the crypto subsystem.
> 
> Philipp or Lars, what do you think of this? Can this go via your tree
> or maybe via Jens?
> 
> I'd love an Ack or Review.

I'm sorry, summer time and all, limited online time ;-)

I'm happy for this to go in via Jens, or any other way.

Being not that fluent in the crypto stuff,
I will just "believe" most of it.

I still think I found something, comments below:

> > diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
> > index 1476cb3439f4..62dd3700dd84 100644
> > --- a/drivers/block/drbd/drbd_worker.c
> > +++ b/drivers/block/drbd/drbd_worker.c
> > @@ -295,60 +295,54 @@ void drbd_request_endio(struct bio *bio)
> >                 complete_master_bio(device, &m);
> >  }
> >
> > -void drbd_csum_ee(struct crypto_ahash *tfm, struct drbd_peer_request *peer_req, void *digest)
> > +void drbd_csum_ee(struct crypto_shash *tfm, struct drbd_peer_request *peer_req, void *digest)
> >  {
> > -       AHASH_REQUEST_ON_STACK(req, tfm);
> > -       struct scatterlist sg;
> > +       SHASH_DESC_ON_STACK(desc, tfm);
> >         struct page *page = peer_req->pages;
> >         struct page *tmp;
> >         unsigned len;
> >
> > -       ahash_request_set_tfm(req, tfm);
> > -       ahash_request_set_callback(req, 0, NULL, NULL);
> > +       desc->tfm = tfm;
> > +       desc->flags = 0;
> >
> > -       sg_init_table(&sg, 1);
> > -       crypto_ahash_init(req);
> > +       crypto_shash_init(desc);
> >
> >         while ((tmp = page_chain_next(page))) {
> >                 /* all but the last page will be fully used */
> > -               sg_set_page(&sg, page, PAGE_SIZE, 0);
> > -               ahash_request_set_crypt(req, &sg, NULL, sg.length);
> > -               crypto_ahash_update(req);
> > +               crypto_shash_update(desc, page_to_virt(page), PAGE_SIZE);

These pages may be "highmem", so page_to_virt() seem not appropriate.
I think the crypto_ahash_update() thingy did an implicit kmap() for us
in the crypto_hash_walk()?
Maybe it is good enough to add a kmap() here,
and a kunmap() on the next line?


> >                 page = tmp;
> >         }
> >         /* and now the last, possibly only partially used page */
> >         len = peer_req->i.size & (PAGE_SIZE - 1);
> > -       sg_set_page(&sg, page, len ?: PAGE_SIZE, 0);
> > -       ahash_request_set_crypt(req, &sg, digest, sg.length);
> > -       crypto_ahash_finup(req);
> > -       ahash_request_zero(req);
> > +       crypto_shash_update(desc, page_to_virt(page), len ?: PAGE_SIZE);

Same here ...

> > +
> > +       crypto_shash_final(desc, digest);
> > +       shash_desc_zero(desc);
> >  }
> >
> > -void drbd_csum_bio(struct crypto_ahash *tfm, struct bio *bio, void *digest)
> > +void drbd_csum_bio(struct crypto_shash *tfm, struct bio *bio, void *digest)
> >  {
> > -       AHASH_REQUEST_ON_STACK(req, tfm);
> > -       struct scatterlist sg;
> > +       SHASH_DESC_ON_STACK(desc, tfm);
> >         struct bio_vec bvec;
> >         struct bvec_iter iter;
> >
> > -       ahash_request_set_tfm(req, tfm);
> > -       ahash_request_set_callback(req, 0, NULL, NULL);
> > +       desc->tfm = tfm;
> > +       desc->flags = 0;
> >
> > -       sg_init_table(&sg, 1);
> > -       crypto_ahash_init(req);
> > +       crypto_shash_init(desc);
> >
> >         bio_for_each_segment(bvec, bio, iter) {
> > -               sg_set_page(&sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
> > -               ahash_request_set_crypt(req, &sg, NULL, sg.length);
> > -               crypto_ahash_update(req);
> > +               crypto_shash_update(desc, (u8 *)page_to_virt(bvec.bv_page) +
> > +                                         bvec.bv_offset,
> > +                                   bvec.bv_len);


... and here.


> > +
> >                 /* REQ_OP_WRITE_SAME has only one segment,
> >                  * checksum the payload only once. */
> >                 if (bio_op(bio) == REQ_OP_WRITE_SAME)
> >                         break;
> >         }
> > -       ahash_request_set_crypt(req, NULL, digest, 0);
> > -       crypto_ahash_final(req);
> > -       ahash_request_zero(req);
> > +       crypto_shash_final(desc, digest);
> > +       shash_desc_zero(desc);
> >  }
> >
> >  /* MAYBE merge common code with w_e_end_ov_req */

Thanks,

    Lars


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 10/18] x86/power/64: Remove VLA usage
  2018-07-25 18:01     ` Kees Cook
@ 2018-08-06 10:28       ` Rafael J. Wysocki
  0 siblings, 0 replies; 32+ messages in thread
From: Rafael J. Wysocki @ 2018-08-06 10:28 UTC (permalink / raw)
  To: Kees Cook
  Cc: Rafael J. Wysocki, Herbert Xu, Arnd Bergmann, Eric Biggers,
	Gustavo A. R. Silva, Alasdair Kergon, Rabin Vincent, Tim Chen,
	Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, Philipp Reisner, Lars Ellenberg,
	Jens Axboe, Giovanni Cabiddu, Mike Snitzer, Paul Mackerras,
	Greg Kroah-Hartman, David Howells, Johannes Berg,
	Tudor-Dan Ambarus, Jia-Ju Bai, Andrew Morton, Geert Uytterhoeven,
	Josh Poimboeuf, David Woodhouse, Will Deacon,
	device-mapper development, Linux PM, linux-crypto, drbd-dev,
	linux-block, qat-linux, linux-ppp, netdev,
	open list:ANDROID DRIVERS, linux-afs,
	open list:NETWORKING DRIVERS (WIRELESS),
	Linux Kernel Mailing List

On Wednesday, July 25, 2018 8:01:47 PM CEST Kees Cook wrote:
> On Wed, Jul 25, 2018 at 4:32 AM, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > On Tue, Jul 24, 2018 at 6:49 PM, Kees Cook <keescook@chromium.org> wrote:
> >> In the quest to remove all stack VLA usage from the kernel[1], this
> >> removes the discouraged use of AHASH_REQUEST_ON_STACK by switching to
> >> shash directly and allocating the descriptor in heap memory (which should
> >> be fine: the tfm has already been allocated there too).
> >>
> >> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
> >>
> >> Signed-off-by: Kees Cook <keescook@chromium.org>
> >> Acked-by: Pavel Machek <pavel@ucw.cz>
> >
> > I think I can queue this up if there are no objections from others.
> >
> > Do you want me to do that?
> 
> Sure thing. It looks like the other stand-alone patches like this one
> are getting taken into the non-crypto trees, so that's fine.
> 

Applied now, thanks!


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v6 12/18] drbd: Convert from ahash to shash
  2018-08-03 13:44     ` Lars Ellenberg
@ 2018-08-06 17:27       ` Kees Cook
  0 siblings, 0 replies; 32+ messages in thread
From: Kees Cook @ 2018-08-06 17:27 UTC (permalink / raw)
  To: Kees Cook, Philipp Reisner, Jens Axboe, Herbert Xu,
	Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva, linux-crypto,
	drbd-dev, linux-block, LKML

On Fri, Aug 3, 2018 at 6:44 AM, Lars Ellenberg
<lars.ellenberg@linbit.com> wrote:
> On Thu, Aug 02, 2018 at 04:43:19PM -0700, Kees Cook wrote:
>> On Tue, Jul 24, 2018 at 9:49 AM, Kees Cook <keescook@chromium.org> wrote:
>> > In preparing to remove all stack VLA usage from the kernel[1], this
>> > removes the discouraged use of AHASH_REQUEST_ON_STACK in favor of
>> > the smaller SHASH_DESC_ON_STACK by converting from ahash-wrapped-shash
>> > to direct shash. By removing a layer of indirection this both improves
>> > performance and reduces stack usage. The stack allocation will be made
>> > a fixed size in a later patch to the crypto subsystem.
>>
>> Philipp or Lars, what do you think of this? Can this go via your tree
>> or maybe via Jens?
>>
>> I'd love an Ack or Review.
>
> I'm sorry, summer time and all, limited online time ;-)
>
> I'm happy for this to go in via Jens, or any other way.
>
> Being not that fluent in the crypto stuff,
> I will just "believe" most of it.
>
> I still think I found something, comments below:
>
>> > diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
>> > index 1476cb3439f4..62dd3700dd84 100644
>> > --- a/drivers/block/drbd/drbd_worker.c
>> > +++ b/drivers/block/drbd/drbd_worker.c
>> > @@ -295,60 +295,54 @@ void drbd_request_endio(struct bio *bio)
>> >                 complete_master_bio(device, &m);
>> >  }
>> >
>> > -void drbd_csum_ee(struct crypto_ahash *tfm, struct drbd_peer_request *peer_req, void *digest)
>> > +void drbd_csum_ee(struct crypto_shash *tfm, struct drbd_peer_request *peer_req, void *digest)
>> >  {
>> > -       AHASH_REQUEST_ON_STACK(req, tfm);
>> > -       struct scatterlist sg;
>> > +       SHASH_DESC_ON_STACK(desc, tfm);
>> >         struct page *page = peer_req->pages;
>> >         struct page *tmp;
>> >         unsigned len;
>> >
>> > -       ahash_request_set_tfm(req, tfm);
>> > -       ahash_request_set_callback(req, 0, NULL, NULL);
>> > +       desc->tfm = tfm;
>> > +       desc->flags = 0;
>> >
>> > -       sg_init_table(&sg, 1);
>> > -       crypto_ahash_init(req);
>> > +       crypto_shash_init(desc);
>> >
>> >         while ((tmp = page_chain_next(page))) {
>> >                 /* all but the last page will be fully used */
>> > -               sg_set_page(&sg, page, PAGE_SIZE, 0);
>> > -               ahash_request_set_crypt(req, &sg, NULL, sg.length);
>> > -               crypto_ahash_update(req);
>> > +               crypto_shash_update(desc, page_to_virt(page), PAGE_SIZE);
>
> These pages may be "highmem", so page_to_virt() seem not appropriate.
> I think the crypto_ahash_update() thingy did an implicit kmap() for us
> in the crypto_hash_walk()?
> Maybe it is good enough to add a kmap() here,
> and a kunmap() on the next line?

Oooh, excellent point. Thanks, I'll see what I can do to sort this out.

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2018-08-06 17:27 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-24 16:49 [PATCH v6 00/18] crypto: Remove VLA usage Kees Cook
2018-07-24 16:49 ` [PATCH v6 01/18] crypto: xcbc: " Kees Cook
2018-07-24 16:49 ` [PATCH v6 02/18] crypto: cbc: " Kees Cook
2018-07-24 16:49 ` [PATCH v6 03/18] crypto: hash: " Kees Cook
2018-07-24 16:49 ` [PATCH v6 04/18] dm: Remove VLA usage from hashes Kees Cook
2018-07-24 16:49 ` [PATCH v6 05/18] crypto alg: Introduce generic max blocksize and alignmask Kees Cook
2018-07-24 16:49 ` [PATCH v6 06/18] crypto: qat: Remove VLA usage Kees Cook
2018-07-24 16:49 ` [PATCH v6 07/18] crypto: shash: Remove VLA usage in unaligned hashing Kees Cook
2018-07-24 16:49 ` [PATCH v6 08/18] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
2018-07-24 16:49 ` [PATCH v6 09/18] ppp: mppe: Remove VLA usage Kees Cook
2018-07-24 16:49 ` [PATCH v6 10/18] x86/power/64: " Kees Cook
2018-07-25 11:32   ` Rafael J. Wysocki
2018-07-25 18:01     ` Kees Cook
2018-08-06 10:28       ` Rafael J. Wysocki
2018-07-24 16:49 ` [PATCH v6 11/18] dm crypt: Convert essiv from ahash to shash Kees Cook
2018-07-24 16:49 ` [PATCH v6 12/18] drbd: Convert " Kees Cook
2018-08-02 23:43   ` Kees Cook
2018-08-03 13:44     ` Lars Ellenberg
2018-08-06 17:27       ` Kees Cook
2018-07-24 16:49 ` [PATCH v6 13/18] wireless/lib80211: " Kees Cook
2018-07-25  7:24   ` Johannes Berg
2018-07-24 16:49 ` [PATCH v6 14/18] staging: rtl8192u: ieee80211: " Kees Cook
2018-07-24 16:49 ` [PATCH v6 15/18] staging: rtl8192e: " Kees Cook
2018-07-24 16:49 ` [PATCH v6 16/18] rxrpc: Reuse SKCIPHER_REQUEST_ON_STACK buffer Kees Cook
2018-08-02 23:46   ` Kees Cook
2018-08-03  9:14   ` David Howells
2018-07-24 16:49 ` [PATCH v6 17/18] crypto: ccm: Remove VLA usage Kees Cook
2018-07-24 16:57   ` Ard Biesheuvel
2018-07-24 17:51     ` Kees Cook
2018-07-24 16:49 ` [PATCH v6 18/18] crypto: Remove AHASH_REQUEST_ON_STACK Kees Cook
2018-07-24 17:31   ` Joe Perches
2018-07-24 17:53     ` Kees Cook

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).