* [PATCH v4 00/14] crypto: Remove VLA usage
@ 2018-07-11 20:36 Kees Cook
  2018-07-11 20:36 ` [PATCH v4 01/14] crypto: xcbc: " Kees Cook
                   ` (13 more replies)
  0 siblings, 14 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

v4:
- add back *_REQUEST_ON_STACK patches.
- programmatically find stack sizes for *_REQUEST_ON_STACK patches.
- whitelist some code that trips FRAME_WARN on 32-bit builds.
- fix alignment patches.

v3:
- drop *_REQUEST_ON_STACK patches. The rest of this series is pretty
  straightforward, and I'd like to get it tested in -next while
  we continue to chip away at the *_REQUEST_ON_STACK VLA removal patches
  separately. "Part 2" will continue with those.

v2:
- use 512 instead of PAGE_SIZE / 8 to avoid bloat on large-page archs.
- switch xcbc to "16" max universally.
- fix typo in bounds check for dm buffer size.
- examine actual reqsizes for skcipher and ahash instead of guessing.
- improve names and comments for alg maxes


This is nearly the last of the VLA removals[1], but it's one of the
largest because crypto gets used in lots of places. After looking
through the code and its usage, reading the threads Gustavo started, and
comparing the use-cases to the other VLA removals that have landed in the
kernel,
I think this series is likely the best way forward to shut the door on
VLAs forever.

As background, the crypto on-stack usage exists so that callers can do an
immediate bit of work without allocating new memory. This means that other
VLA removal techniques (like just using kmalloc) aren't workable, so the
next most common technique is needed: determining the maximum stack usage
and adding sanity checks. This series does that, and in several cases
these maximums were already implicit in the code.
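
The general shape of the conversion is below (a minimal sketch only, not
lifted verbatim from any one patch; the function name is made up, and
SHASH_MAX_DIGESTSIZE is one of the new maximums introduced later in the
series):

	#include <linux/bug.h>
	#include <linux/errno.h>
	#include <linux/string.h>
	#include <crypto/hash.h>

	/* Illustrative only: fixed worst-case buffer instead of a VLA. */
	static int sketch_digest_to_buf(struct shash_desc *desc, const u8 *data,
					unsigned int len, u8 *out)
	{
		/* before: u8 digest[crypto_shash_digestsize(desc->tfm)]; (a VLA) */
		u8 digest[SHASH_MAX_DIGESTSIZE];
		int err;

		/* sanity check in case a larger algorithm ever appears */
		if (WARN_ON(crypto_shash_digestsize(desc->tfm) > sizeof(digest)))
			return -EINVAL;

		err = crypto_shash_digest(desc, data, len, digest);
		if (!err)
			memcpy(out, digest, crypto_shash_digestsize(desc->tfm));
		return err;
	}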

This series is intended to land via the crypto tree for 4.19, though
it touches dm and a few other things as well, since there are dependent
patches (new crypto #defines being used, kbuild, etc).

Thanks!

-Kees

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Kees Cook (14):
  crypto: xcbc: Remove VLA usage
  crypto: cbc: Remove VLA usage
  crypto: shash: Remove VLA usage
  dm integrity: Remove VLA usage
  crypto: ahash: Remove VLA usage
  dm verity fec: Remove VLA usage
  crypto alg: Introduce generic max blocksize and alignmask
  crypto: qat: Remove VLA usage
  crypto: shash: Remove VLA usage in unaligned hashing
  kbuild: Introduce FRAME_WARN_BUMP_FLAG
  treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK
  rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK

 Makefile                                    |  6 ++++
 crypto/Makefile                             |  1 +
 crypto/ahash.c                              |  4 +--
 crypto/algapi.c                             |  7 ++++-
 crypto/algif_hash.c                         |  2 +-
 crypto/shash.c                              | 33 ++++++++++++---------
 crypto/xcbc.c                               |  9 ++++--
 drivers/block/drbd/Makefile                 |  2 ++
 drivers/crypto/qat/qat_common/qat_algs.c    |  8 +++--
 drivers/md/Makefile                         |  1 +
 drivers/md/dm-integrity.c                   | 23 ++++++++++----
 drivers/md/dm-verity-fec.c                  |  5 +++-
 drivers/net/ppp/Makefile                    |  1 +
 drivers/staging/rtl8192e/Makefile           |  1 +
 drivers/staging/rtl8192u/Makefile           |  1 +
 drivers/staging/rtl8192u/ieee80211/Makefile |  1 +
 include/crypto/algapi.h                     |  4 ++-
 include/crypto/cbc.h                        |  4 ++-
 include/crypto/hash.h                       | 12 ++++++--
 include/crypto/internal/hash.h              |  1 +
 include/crypto/internal/skcipher.h          |  1 +
 include/crypto/skcipher.h                   |  4 ++-
 include/linux/compiler-gcc.h                |  1 -
 net/rxrpc/Makefile                          |  1 +
 net/wireless/Makefile                       |  1 +
 25 files changed, 99 insertions(+), 35 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v4 01/14] crypto: xcbc: Remove VLA usage
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 02/14] crypto: cbc: " Kees Cook
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this uses
the maximum blocksize and adds a sanity check. For xcbc, the blocksize
must always be 16, so use that, since it's already being enforced during
instantiation.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/xcbc.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/crypto/xcbc.c b/crypto/xcbc.c
index 25c75af50d3f..7aa03beed795 100644
--- a/crypto/xcbc.c
+++ b/crypto/xcbc.c
@@ -57,6 +57,8 @@ struct xcbc_desc_ctx {
 	u8 ctx[];
 };
 
+#define XCBC_BLOCKSIZE	16
+
 static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
 				     const u8 *inkey, unsigned int keylen)
 {
@@ -65,7 +67,10 @@ static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
 	int bs = crypto_shash_blocksize(parent);
 	u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
 	int err = 0;
-	u8 key1[bs];
+	u8 key1[XCBC_BLOCKSIZE];
+
+	if (WARN_ON(bs > sizeof(key1)))
+		return -EINVAL;
 
 	if ((err = crypto_cipher_setkey(ctx->child, inkey, keylen)))
 		return err;
@@ -212,7 +217,7 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
 		return PTR_ERR(alg);
 
 	switch(alg->cra_blocksize) {
-	case 16:
+	case XCBC_BLOCKSIZE:
 		break;
 	default:
 		goto out_put_alg;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 02/14] crypto: cbc: Remove VLA usage
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
  2018-07-11 20:36 ` [PATCH v4 01/14] crypto: xcbc: " Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 03/14] crypto: shash: " Kees Cook
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
uses the upper bound on blocksize. Since this is always a cipher
blocksize, use the existing cipher max blocksize.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/cbc.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/crypto/cbc.h b/include/crypto/cbc.h
index f5b8bfc22e6d..47db0aac2ab9 100644
--- a/include/crypto/cbc.h
+++ b/include/crypto/cbc.h
@@ -113,7 +113,9 @@ static inline int crypto_cbc_decrypt_inplace(
 	unsigned int bsize = crypto_skcipher_blocksize(tfm);
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
-	u8 last_iv[bsize];
+	u8 last_iv[MAX_CIPHER_BLOCKSIZE];
+
+	BUG_ON(bsize > sizeof(last_iv));
 
 	/* Start of the last block. */
 	src += nbytes - (nbytes & (bsize - 1)) - bsize;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 03/14] crypto: shash: Remove VLA usage
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
  2018-07-11 20:36 ` [PATCH v4 01/14] crypto: xcbc: " Kees Cook
  2018-07-11 20:36 ` [PATCH v4 02/14] crypto: cbc: " Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 04/14] dm integrity: " Kees Cook
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro). Similar limits are turned into macros as well.

A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
largest digest size and that sizeof(struct sha3_state) (360) is the
largest descriptor size. The corresponding maximums are reduced from the
old PAGE_SIZE / 8 limit accordingly.
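
For reference, caller usage is unchanged; only the backing buffer becomes
fixed-size. A caller-side sketch (the wrapper function here is made up for
illustration, not taken from the kernel):

	#include <crypto/hash.h>

	/* Compute a digest with an on-stack descriptor of fixed maximum size. */
	static int sketch_shash_digest(struct crypto_shash *tfm, const u8 *data,
				       unsigned int len, u8 *out)
	{
		SHASH_DESC_ON_STACK(desc, tfm);	/* now SHASH_MAX_DESCSIZE bytes */
		int err;

		desc->tfm = tfm;
		desc->flags = 0;
		err = crypto_shash_digest(desc, data, len, out);
		shash_desc_zero(desc);
		return err;
	}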

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/shash.c        | 6 +++---
 include/crypto/hash.h | 6 +++++-
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index 5d732c6bb4b2..ab6902c6dae7 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -458,9 +458,9 @@ static int shash_prepare_alg(struct shash_alg *alg)
 {
 	struct crypto_alg *base = &alg->base;
 
-	if (alg->digestsize > PAGE_SIZE / 8 ||
-	    alg->descsize > PAGE_SIZE / 8 ||
-	    alg->statesize > PAGE_SIZE / 8)
+	if (alg->digestsize > SHASH_MAX_DIGESTSIZE ||
+	    alg->descsize > SHASH_MAX_DESCSIZE ||
+	    alg->statesize > SHASH_MAX_STATESIZE)
 		return -EINVAL;
 
 	base->cra_type = &crypto_shash_type;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 76e432cab75d..ae14cc0e0cdb 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -151,9 +151,13 @@ struct shash_desc {
 	void *__ctx[] CRYPTO_MINALIGN_ATTR;
 };
 
+#define SHASH_MAX_DIGESTSIZE     64
+#define SHASH_MAX_DESCSIZE      360
+#define SHASH_MAX_STATESIZE     512
+
 #define SHASH_DESC_ON_STACK(shash, ctx)				  \
 	char __##shash##_desc[sizeof(struct shash_desc) +	  \
-		crypto_shash_descsize(ctx)] CRYPTO_MINALIGN_ATTR; \
+		SHASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR; \
 	struct shash_desc *shash = (struct shash_desc *)__##shash##_desc
 
 /**
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 04/14] dm integrity: Remove VLA usage
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (2 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 03/14] crypto: shash: " Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 05/14] crypto: ahash: " Kees Cook
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this uses
the new SHASH_MAX_DIGESTSIZE from the crypto layer to size the on-stack
buffers at their upper bound.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/md/dm-integrity.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 86438b2f10dd..85e8ce1625a2 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -521,7 +521,12 @@ static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result
 		}
 		memset(result + size, 0, JOURNAL_MAC_SIZE - size);
 	} else {
-		__u8 digest[size];
+		__u8 digest[SHASH_MAX_DIGESTSIZE];
+
+		if (WARN_ON(size > sizeof(digest))) {
+			dm_integrity_io_error(ic, "digest_size", -EINVAL);
+			goto err;
+		}
 		r = crypto_shash_final(desc, digest);
 		if (unlikely(r)) {
 			dm_integrity_io_error(ic, "crypto_shash_final", r);
@@ -1244,7 +1249,7 @@ static void integrity_metadata(struct work_struct *w)
 		struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));
 		char *checksums;
 		unsigned extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0;
-		char checksums_onstack[ic->tag_size + extra_space];
+		char checksums_onstack[SHASH_MAX_DIGESTSIZE];
 		unsigned sectors_to_process = dio->range.n_sectors;
 		sector_t sector = dio->range.logical_sector;
 
@@ -1253,8 +1258,14 @@ static void integrity_metadata(struct work_struct *w)
 
 		checksums = kmalloc((PAGE_SIZE >> SECTOR_SHIFT >> ic->sb->log2_sectors_per_block) * ic->tag_size + extra_space,
 				    GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
-		if (!checksums)
+		if (!checksums) {
 			checksums = checksums_onstack;
+			if (WARN_ON(extra_space &&
+				    digest_size > sizeof(checksums_onstack))) {
+				r = -EINVAL;
+				goto error;
+			}
+		}
 
 		__bio_for_each_segment(bv, bio, iter, dio->orig_bi_iter) {
 			unsigned pos;
@@ -1466,7 +1477,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
 				} while (++s < ic->sectors_per_block);
 #ifdef INTERNAL_VERIFY
 				if (ic->internal_hash) {
-					char checksums_onstack[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+					char checksums_onstack[max(SHASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
 
 					integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
 					if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
@@ -1516,7 +1527,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
 				if (ic->internal_hash) {
 					unsigned digest_size = crypto_shash_digestsize(ic->internal_hash);
 					if (unlikely(digest_size > ic->tag_size)) {
-						char checksums_onstack[digest_size];
+						char checksums_onstack[SHASH_MAX_DIGESTSIZE];
 						integrity_sector_checksum(ic, logical_sector, (char *)js, checksums_onstack);
 						memcpy(journal_entry_tag(ic, je), checksums_onstack, ic->tag_size);
 					} else
@@ -1937,7 +1948,7 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start,
 				    unlikely(from_replay) &&
 #endif
 				    ic->internal_hash) {
-					char test_tag[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+					char test_tag[max_t(size_t, SHASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
 
 					integrity_sector_checksum(ic, sec + ((l - j) << ic->sb->log2_sectors_per_block),
 								  (char *)access_journal_data(ic, i, l), test_tag);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 05/14] crypto: ahash: Remove VLA usage
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (3 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 04/14] dm integrity: " Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 06/14] dm verity fec: " Kees Cook
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
introduces max size macros for ahash, as already done for shash, and
adjusts the algif_hash user to use the max state size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/ahash.c        | 4 ++--
 crypto/algif_hash.c   | 2 +-
 include/crypto/hash.h | 3 +++
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index a64c143165b1..6435bdbe42fd 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -550,8 +550,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
 {
 	struct crypto_alg *base = &alg->halg.base;
 
-	if (alg->halg.digestsize > PAGE_SIZE / 8 ||
-	    alg->halg.statesize > PAGE_SIZE / 8 ||
+	if (alg->halg.digestsize > AHASH_MAX_DIGESTSIZE ||
+	    alg->halg.statesize > AHASH_MAX_STATESIZE ||
 	    alg->halg.statesize == 0)
 		return -EINVAL;
 
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index bfcf595fd8f9..8974ee8ebead 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -239,7 +239,7 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
 	struct alg_sock *ask = alg_sk(sk);
 	struct hash_ctx *ctx = ask->private;
 	struct ahash_request *req = &ctx->req;
-	char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ? : 1];
+	char state[AHASH_MAX_STATESIZE];
 	struct sock *sk2;
 	struct alg_sock *ask2;
 	struct hash_ctx *ctx2;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index ae14cc0e0cdb..4fcd0e2368cd 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -64,6 +64,9 @@ struct ahash_request {
 	void *__ctx[] CRYPTO_MINALIGN_ATTR;
 };
 
+#define AHASH_MAX_DIGESTSIZE	512
+#define AHASH_MAX_STATESIZE	512
+
 #define AHASH_REQUEST_ON_STACK(name, ahash) \
 	char __##name##_desc[sizeof(struct ahash_request) + \
 		crypto_ahash_reqsize(ahash)] CRYPTO_MINALIGN_ATTR; \
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 06/14] dm verity fec: Remove VLA usage
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (4 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 05/14] crypto: ahash: " Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 07/14] crypto alg: Introduce generic max blocksize and alignmask Kees Cook
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
uses the newly defined max digest size macro and adds a sanity check
at use time.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/md/dm-verity-fec.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index 684af08d0747..fe5cfd1a5fa5 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -212,12 +212,15 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io,
 	struct dm_verity_fec_io *fio = fec_io(io);
 	u64 block, ileaved;
 	u8 *bbuf, *rs_block;
-	u8 want_digest[v->digest_size];
+	u8 want_digest[AHASH_MAX_DIGESTSIZE];
 	unsigned n, k;
 
 	if (neras)
 		*neras = 0;
 
+	if (WARN_ON(v->digest_size > sizeof(want_digest)))
+		return -EINVAL;
+
 	/*
 	 * read each of the rsn data blocks that are part of the RS block, and
 	 * interleave contents to available bufs
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 07/14] crypto alg: Introduce generic max blocksize and alignmask
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (5 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 06/14] dm verity fec: " Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 08/14] crypto: qat: Remove VLA usage Kees Cook
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
exposes a new general upper bound on crypto blocksize and alignmask
(higher than the existing cipher-only limits) for VLA removal,
and introduces new checks.

At present, the highest cra_alignmask in the kernel is 63. The highest
cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the
new blocksize limit, I went with 160 (20 8-byte words).

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/algapi.c         | 7 ++++++-
 include/crypto/algapi.h | 4 +++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index c0755cf4f53f..496fc51bf215 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -57,9 +57,14 @@ static int crypto_check_alg(struct crypto_alg *alg)
 	if (alg->cra_alignmask & (alg->cra_alignmask + 1))
 		return -EINVAL;
 
-	if (alg->cra_blocksize > PAGE_SIZE / 8)
+	/* General maximums for all algs. */
+	if (alg->cra_alignmask > MAX_ALGAPI_ALIGNMASK)
 		return -EINVAL;
 
+	if (alg->cra_blocksize > MAX_ALGAPI_BLOCKSIZE)
+		return -EINVAL;
+
+	/* Lower maximums for specific alg types. */
 	if (!alg->cra_type && (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
 			       CRYPTO_ALG_TYPE_CIPHER) {
 		if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index bd5e8ccf1687..21371ac8f355 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -20,8 +20,10 @@
 /*
  * Maximum values for blocksize and alignmask, used to allocate
  * static buffers that are big enough for any combination of
- * ciphers and architectures.
+ * algs and architectures. Ciphers have a lower maximum size.
  */
+#define MAX_ALGAPI_BLOCKSIZE		160
+#define MAX_ALGAPI_ALIGNMASK		63
 #define MAX_CIPHER_BLOCKSIZE		16
 #define MAX_CIPHER_ALIGNMASK		15
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 08/14] crypto: qat: Remove VLA usage
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (6 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 07/14] crypto alg: Introduce generic max blocksize and alignmask Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 09/14] crypto: shash: Remove VLA usage in unaligned hashing Kees Cook
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this uses
the new upper bound for the stack buffer. Also adds a sanity check.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 drivers/crypto/qat/qat_common/qat_algs.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 1138e41d6805..a28edf7b792f 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -153,8 +153,8 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	struct sha512_state sha512;
 	int block_size = crypto_shash_blocksize(ctx->hash_tfm);
 	int digest_size = crypto_shash_digestsize(ctx->hash_tfm);
-	char ipad[block_size];
-	char opad[block_size];
+	char ipad[MAX_ALGAPI_BLOCKSIZE];
+	char opad[MAX_ALGAPI_BLOCKSIZE];
 	__be32 *hash_state_out;
 	__be64 *hash512_state_out;
 	int i, offset;
@@ -164,6 +164,10 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	shash->tfm = ctx->hash_tfm;
 	shash->flags = 0x0;
 
+	if (WARN_ON(block_size > sizeof(ipad) ||
+		    sizeof(ipad) != sizeof(opad)))
+		return -EINVAL;
+
 	if (auth_keylen > block_size) {
 		int ret = crypto_shash_digest(shash, auth_key,
 					      auth_keylen, ipad);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 09/14] crypto: shash: Remove VLA usage in unaligned hashing
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (7 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 08/14] crypto: qat: Remove VLA usage Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 10/14] kbuild: Introduce FRAME_WARN_BUMP_FLAG Kees Cook
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.
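
To see why those fixed sizes work: aligning a pointer up to (alignmask + 1)
skips at most alignmask bytes of the fixed buffer, and the new WARN_ON()s
catch any case where what remains would be too small. A stand-alone
userspace sketch of that arithmetic for the digest case (illustrative only;
it just open-codes what PTR_ALIGN() does, using the maximums from this
series):

	#include <stdint.h>
	#include <stdio.h>

	#define MAX_ALGAPI_ALIGNMASK	63
	#define SHASH_MAX_DIGESTSIZE	64

	int main(void)
	{
		uint8_t ubuf[MAX_ALGAPI_ALIGNMASK + SHASH_MAX_DIGESTSIZE];
		uintptr_t alignmask = MAX_ALGAPI_ALIGNMASK;	/* worst case */
		/* round the buffer start up to an (alignmask + 1) boundary */
		uint8_t *buf = (uint8_t *)(((uintptr_t)ubuf + alignmask) & ~alignmask);
		size_t padding = (size_t)(buf - ubuf);

		/* padding is at most 63 bytes, leaving at least 64 for the digest */
		printf("alignment padding: %zu bytes, room left: %zu bytes\n",
		       padding, sizeof(ubuf) - padding);
		return 0;
	}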

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/shash.c               | 27 ++++++++++++++++-----------
 include/linux/compiler-gcc.h |  1 -
 2 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index ab6902c6dae7..8d4746b14dd5 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_shash_setkey);
 
-static inline unsigned int shash_align_buffer_size(unsigned len,
-						   unsigned long mask)
-{
-	typedef u8 __aligned_largest u8_aligned;
-	return len + (mask & ~(__alignof__(u8_aligned) - 1));
-}
-
 static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
 				  unsigned int len)
 {
@@ -88,11 +81,17 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	unsigned int unaligned_len = alignmask + 1 -
 				     ((unsigned long)data & alignmask);
-	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
-		__aligned_largest;
+	/*
+	 * We cannot count on __aligned() working for large values:
+	 * https://patchwork.kernel.org/patch/9507697/
+	 */
+	u8 ubuf[MAX_ALGAPI_ALIGNMASK * 2];
 	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
 	int err;
 
+	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
+		return -EINVAL;
+
 	if (unaligned_len > len)
 		unaligned_len = len;
 
@@ -124,11 +123,17 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned int ds = crypto_shash_digestsize(tfm);
-	u8 ubuf[shash_align_buffer_size(ds, alignmask)]
-		__aligned_largest;
+	/*
+	 * We cannot count on __aligned() working for large values:
+	 * https://patchwork.kernel.org/patch/9507697/
+	 */
+	u8 ubuf[MAX_ALGAPI_ALIGNMASK + SHASH_MAX_DIGESTSIZE];
 	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
 	int err;
 
+	if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
+		return -EINVAL;
+
 	err = shash->final(desc, buf);
 	if (err)
 		goto out;
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index f1a7492a5cc8..1f1cdef36a82 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -125,7 +125,6 @@
  */
 #define __pure			__attribute__((pure))
 #define __aligned(x)		__attribute__((aligned(x)))
-#define __aligned_largest	__attribute__((aligned))
 #define __printf(a, b)		__attribute__((format(printf, a, b)))
 #define __scanf(a, b)		__attribute__((format(scanf, a, b)))
 #define __attribute_const__	__attribute__((__const__))
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 10/14] kbuild: Introduce FRAME_WARN_BUMP_FLAG
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (8 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 09/14] crypto: shash: Remove VLA usage in unaligned hashing Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK Kees Cook
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

When building with CONFIG_FRAME_WARN, it is sometimes useful to give
certain compilation units a small amount of additional headroom. This is
normally needed when VLA removal exposes a large stack allocation,
especially with FRAME_WARN at 1024 or smaller. The new flag adds 20%.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 Makefile | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Makefile b/Makefile
index c9132594860b..06fe93edd052 100644
--- a/Makefile
+++ b/Makefile
@@ -684,7 +684,13 @@ endif
 
 ifneq ($(CONFIG_FRAME_WARN),0)
 KBUILD_CFLAGS += $(call cc-option,-Wframe-larger-than=${CONFIG_FRAME_WARN})
+# Some code (mostly exposed by VLA removal) needs some additional
+# headroom, especially for a FRAME_WARN of 1024. This adds 20%, which
+# can be applied per object via $(FRAME_WARN_BUMP_FLAG).
+FRAME_WARN_BUMP_SIZE := $(shell expr $(CONFIG_FRAME_WARN) / 5 + $(CONFIG_FRAME_WARN))
+FRAME_WARN_BUMP_FLAG := $(call cc-option,-Wframe-larger-than=$(FRAME_WARN_BUMP_SIZE))
 endif
+export FRAME_WARN_BUMP_FLAG
 
 stackp-flags-$(CONFIG_CC_HAS_STACKPROTECTOR_NONE) := -fno-stack-protector
 stackp-flags-$(CONFIG_STACKPROTECTOR)             := -fstack-protector
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (9 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 10/14] kbuild: Introduce FRAME_WARN_BUMP_FLAG Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-12 16:02   ` Arnd Bergmann
  2018-07-11 20:36 ` [PATCH v4 12/14] crypto: ahash: Remove " Kees Cook
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

Several uses of AHASH_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
(when less than 2048) once the VLA is no longer hidden from the check:

drivers/block/drbd/drbd_worker.c:325:1: warning: the frame size of 1112 bytes is larger than 1024 bytes [-Wframe-larger-than=]
drivers/block/drbd/drbd_worker.c:352:1: warning: the frame size of 1120 bytes is larger than 1024 bytes [-Wframe-larger-than=]
crypto/ccm.c:235:1: warning: the frame size of 1184 bytes is larger than 1024 bytes [-Wframe-larger-than=]
drivers/md/dm-crypt.c:353:1: warning: the frame size of 1096 bytes is larger than 1024 bytes [-Wframe-larger-than=]
drivers/net/ppp/ppp_mppe.c:158:1: warning: the frame size of 1168 bytes is larger than 1024 bytes [-Wframe-larger-than=]
net/wireless/lib80211_crypt_tkip.c:537:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c:528:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
drivers/staging/rtl8192e/rtllib_crypt_tkip.c:531:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]

This bumps the affected objects by 20% to silence the warnings while still
providing coverage if anything grows even more.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/Makefile                             | 1 +
 drivers/block/drbd/Makefile                 | 2 ++
 drivers/md/Makefile                         | 1 +
 drivers/net/ppp/Makefile                    | 1 +
 drivers/staging/rtl8192e/Makefile           | 1 +
 drivers/staging/rtl8192u/Makefile           | 1 +
 drivers/staging/rtl8192u/ieee80211/Makefile | 1 +
 net/wireless/Makefile                       | 1 +
 8 files changed, 9 insertions(+)

diff --git a/crypto/Makefile b/crypto/Makefile
index 6d1d40eeb964..a4487b61ac4e 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -85,6 +85,7 @@ obj-$(CONFIG_CRYPTO_CTR) += ctr.o
 obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
 obj-$(CONFIG_CRYPTO_GCM) += gcm.o
 obj-$(CONFIG_CRYPTO_CCM) += ccm.o
+CFLAGS_ccm.o += $(FRAME_WARN_BUMP_FLAG)
 obj-$(CONFIG_CRYPTO_CHACHA20POLY1305) += chacha20poly1305.o
 obj-$(CONFIG_CRYPTO_AEGIS128) += aegis128.o
 obj-$(CONFIG_CRYPTO_AEGIS128L) += aegis128l.o
diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index 8bd534697d1b..9b6184487cb4 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -7,3 +7,5 @@ drbd-y += drbd_nla.o
 drbd-$(CONFIG_DEBUG_FS) += drbd_debugfs.o
 
 obj-$(CONFIG_BLK_DEV_DRBD)     += drbd.o
+
+CFLAGS_drbd_worker.o += $(FRAME_WARN_BUMP_FLAG)
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index 822f4e8753bc..639ff6599846 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -47,6 +47,7 @@ obj-$(CONFIG_DM_UNSTRIPED)	+= dm-unstripe.o
 obj-$(CONFIG_DM_BUFIO)		+= dm-bufio.o
 obj-$(CONFIG_DM_BIO_PRISON)	+= dm-bio-prison.o
 obj-$(CONFIG_DM_CRYPT)		+= dm-crypt.o
+CFLAGS_dm-crypt.o		+= $(FRAME_WARN_BUMP_FLAG)
 obj-$(CONFIG_DM_DELAY)		+= dm-delay.o
 obj-$(CONFIG_DM_FLAKEY)		+= dm-flakey.o
 obj-$(CONFIG_DM_MULTIPATH)	+= dm-multipath.o dm-round-robin.o
diff --git a/drivers/net/ppp/Makefile b/drivers/net/ppp/Makefile
index 16c457d6b324..18f35e449c93 100644
--- a/drivers/net/ppp/Makefile
+++ b/drivers/net/ppp/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_PPP_ASYNC) += ppp_async.o
 obj-$(CONFIG_PPP_BSDCOMP) += bsd_comp.o
 obj-$(CONFIG_PPP_DEFLATE) += ppp_deflate.o
 obj-$(CONFIG_PPP_MPPE) += ppp_mppe.o
+CFLAGS_ppp_mppe.o += $(FRAME_WARN_BUMP_FLAG)
 obj-$(CONFIG_PPP_SYNC_TTY) += ppp_synctty.o
 obj-$(CONFIG_PPPOE) += pppox.o pppoe.o
 obj-$(CONFIG_PPPOL2TP) += pppox.o
diff --git a/drivers/staging/rtl8192e/Makefile b/drivers/staging/rtl8192e/Makefile
index 6af519938868..fde738cdf876 100644
--- a/drivers/staging/rtl8192e/Makefile
+++ b/drivers/staging/rtl8192e/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_RTLLIB) += rtllib.o
 
 obj-$(CONFIG_RTLLIB_CRYPTO_CCMP) += rtllib_crypt_ccmp.o
 obj-$(CONFIG_RTLLIB_CRYPTO_TKIP) += rtllib_crypt_tkip.o
+CFLAGS_rtllib_crypt_tkip.o += $(FRAME_WARN_BUMP_FLAG)
 obj-$(CONFIG_RTLLIB_CRYPTO_WEP) += rtllib_crypt_wep.o
 
 obj-$(CONFIG_RTL8192E) += rtl8192e/
diff --git a/drivers/staging/rtl8192u/Makefile b/drivers/staging/rtl8192u/Makefile
index 3022728a364c..ad059546df88 100644
--- a/drivers/staging/rtl8192u/Makefile
+++ b/drivers/staging/rtl8192u/Makefile
@@ -26,5 +26,6 @@ r8192u_usb-y := r8192U_core.o r8180_93cx6.o r8192U_wx.o		\
 		  ieee80211/rtl819x_TSProc.o				\
 		  ieee80211/rtl819x_BAProc.o				\
 		  ieee80211/dot11d.o
+CFLAGS_ieee80211_crypt_tkip.o += $(FRAME_WARN_BUMP_FLAG)
 
 obj-$(CONFIG_RTL8192U) += r8192u_usb.o
diff --git a/drivers/staging/rtl8192u/ieee80211/Makefile b/drivers/staging/rtl8192u/ieee80211/Makefile
index 0d4d6489f767..9f3a06674c1a 100644
--- a/drivers/staging/rtl8192u/ieee80211/Makefile
+++ b/drivers/staging/rtl8192u/ieee80211/Makefile
@@ -17,6 +17,7 @@ ieee80211-rsl-objs := ieee80211_rx.o \
 
 ieee80211_crypt-rsl-objs := ieee80211_crypt.o
 ieee80211_crypt_tkip-rsl-objs := ieee80211_crypt_tkip.o
+CFLAGS_ieee80211_crypt_tkip.o += $(FRAME_WARN_BUMP_FLAG)
 ieee80211_crypt_ccmp-rsl-objs := ieee80211_crypt_ccmp.o
 ieee80211_crypt_wep-rsl-objs := ieee80211_crypt_wep.o
 
diff --git a/net/wireless/Makefile b/net/wireless/Makefile
index 1d84f91bbfb0..f6af5a6233e1 100644
--- a/net/wireless/Makefile
+++ b/net/wireless/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_LIB80211) += lib80211.o
 obj-$(CONFIG_LIB80211_CRYPT_WEP) += lib80211_crypt_wep.o
 obj-$(CONFIG_LIB80211_CRYPT_CCMP) += lib80211_crypt_ccmp.o
 obj-$(CONFIG_LIB80211_CRYPT_TKIP) += lib80211_crypt_tkip.o
+CFLAGS_lib80211_crypt_tkip.o += $(FRAME_WARN_BUMP_FLAG)
 
 obj-$(CONFIG_WEXT_CORE) += wext-core.o
 obj-$(CONFIG_WEXT_PROC) += wext-proc.o
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 12/14] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (10 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-11 20:36 ` [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
  2018-07-11 20:36 ` [PATCH v4 14/14] crypto: skcipher: Remove " Kees Cook
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this caps
the ahash request size, similarly to the other limits, and adds a sanity
check at registration. Unfortunately, these reqsizes can be very large.
Looking at instrumented tcrypt output, the largest is for sha512:

	crypt: testing sha512
	crypto_ahash_set_reqsize: 528
	crypto_ahash_set_reqsize: 728
	crypto_ahash_set_reqsize: 808
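
(For reference, a guess at what that instrumentation looks like; this is
not code from the series, just a print added to the existing inline so
each registration reports its size:)

	static inline void crypto_ahash_set_reqsize(struct crypto_ahash *tfm,
						    unsigned int reqsize)
	{
		pr_info("%s: %u\n", __func__, reqsize);	/* temporary debug output */
		tfm->reqsize = reqsize;
	}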

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/hash.h          | 3 ++-
 include/crypto/internal/hash.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 4fcd0e2368cd..9290120536cd 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -66,10 +66,11 @@ struct ahash_request {
 
 #define AHASH_MAX_DIGESTSIZE	512
 #define AHASH_MAX_STATESIZE	512
+#define AHASH_MAX_REQSIZE	808
 
 #define AHASH_REQUEST_ON_STACK(name, ahash) \
 	char __##name##_desc[sizeof(struct ahash_request) + \
-		crypto_ahash_reqsize(ahash)] CRYPTO_MINALIGN_ATTR; \
+		AHASH_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
 	struct ahash_request *name = (void *)__##name##_desc
 
 /**
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index a0b0ad9d585e..d96ae5f52125 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -142,6 +142,7 @@ static inline struct ahash_alg *__crypto_ahash_alg(struct crypto_alg *alg)
 static inline void crypto_ahash_set_reqsize(struct crypto_ahash *tfm,
 					    unsigned int reqsize)
 {
+	BUG_ON(reqsize > AHASH_MAX_REQSIZE);
 	tfm->reqsize = reqsize;
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (11 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 12/14] crypto: ahash: Remove " Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  2018-07-12 15:11   ` Arnd Bergmann
                     ` (2 more replies)
  2018-07-11 20:36 ` [PATCH v4 14/14] crypto: skcipher: Remove " Kees Cook
  13 siblings, 3 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

Two uses of SKCIPHER_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
(when less than 2048) once the VLA is no longer hidden from the check:

net/rxrpc/rxkad.c:398:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
net/rxrpc/rxkad.c:242:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]

This bumps the affected objects by 20% to silence the warnings while
still providing coverage if anything grows even more.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 net/rxrpc/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/rxrpc/Makefile b/net/rxrpc/Makefile
index 6ffb7e9887ce..53e1177129e6 100644
--- a/net/rxrpc/Makefile
+++ b/net/rxrpc/Makefile
@@ -32,4 +32,5 @@ rxrpc-y := \
 
 rxrpc-$(CONFIG_PROC_FS) += proc.o
 rxrpc-$(CONFIG_RXKAD) += rxkad.o
+CFLAGS_rxkad.o += $(FRAME_WARN_BUMP_FLAG)
 rxrpc-$(CONFIG_SYSCTL) += sysctl.o
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v4 14/14] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
                   ` (12 preceding siblings ...)
  2018-07-11 20:36 ` [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
@ 2018-07-11 20:36 ` Kees Cook
  13 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-11 20:36 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Kees Cook, Gustavo A. R. Silva, Arnd Bergmann, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	linux-crypto, qat-linux, dm-devel, linux-kernel

In the quest to remove all stack VLA usage from the kernel[1], this
caps the skcipher request size, similarly to the other limits, and adds a sanity
check at registration. Looking at instrumented tcrypt output, the largest
is for lrw:

	crypt: testing lrw(aes)
	crypto_skcipher_set_reqsize: 8
	crypto_skcipher_set_reqsize: 88
	crypto_skcipher_set_reqsize: 472

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/internal/skcipher.h | 1 +
 include/crypto/skcipher.h          | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index e42f7063f245..5035482cbe68 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -130,6 +130,7 @@ static inline struct crypto_skcipher *crypto_spawn_skcipher(
 static inline void crypto_skcipher_set_reqsize(
 	struct crypto_skcipher *skcipher, unsigned int reqsize)
 {
+	BUG_ON(reqsize > SKCIPHER_MAX_REQSIZE);
 	skcipher->reqsize = reqsize;
 }
 
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 2f327f090c3e..c48e194438cf 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -139,9 +139,11 @@ struct skcipher_alg {
 	struct crypto_alg base;
 };
 
+#define SKCIPHER_MAX_REQSIZE	472
+
 #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
 	char __##name##_desc[sizeof(struct skcipher_request) + \
-		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
+		SKCIPHER_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
 	struct skcipher_request *name = (void *)__##name##_desc
 
 /**
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-11 20:36 ` [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
@ 2018-07-12 15:11   ` Arnd Bergmann
  2018-07-12 20:23     ` Kees Cook
  2018-07-12 21:28   ` David Howells
  2018-07-12 22:05   ` David Howells
  2 siblings, 1 reply; 53+ messages in thread
From: Arnd Bergmann @ 2018-07-12 15:11 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List, David Howells

On Wed, Jul 11, 2018 at 10:36 PM, Kees Cook <keescook@chromium.org> wrote:
> Two uses of SKCIPHER_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
> (when less than 2048) once the VLA is no longer hidden from the check:
>
> net/rxrpc/rxkad.c:398:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> net/rxrpc/rxkad.c:242:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>
> This bumps the affected objects by 20% to silence the warnings while
> still providing coverage if anything grows even more.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>

(adding David Howells to cc)

I don't think these are in a fast path; it should be possible to just use
skcipher_request_alloc() instead of SKCIPHER_REQUEST_ON_STACK() here.
From what I can tell, neither of the two is called in atomic context, so
you should be able to use a GFP_KERNEL allocation.
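
Roughly this shape (a sketch only; it reuses the existing call->conn->cipher
tfm, the example function name is made up, and the real conversion still
needs the same skcipher_request_set_*() setup the on-stack version does):

	static int rxkad_example(struct rxrpc_call *call)
	{
		struct skcipher_request *req;
		int ret = 0;

		req = skcipher_request_alloc(call->conn->cipher, GFP_KERNEL);
		if (!req)
			return -ENOMEM;

		/* ... skcipher_request_set_callback()/_set_crypt(), then
		 * crypto_skcipher_encrypt() or crypto_skcipher_decrypt() ...
		 */

		skcipher_request_free(req);
		return ret;
	}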

      Arnd

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-11 20:36 ` [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK Kees Cook
@ 2018-07-12 16:02   ` Arnd Bergmann
  2018-07-12 20:17     ` Kees Cook
  2018-07-13  0:40     ` Herbert Xu
  0 siblings, 2 replies; 53+ messages in thread
From: Arnd Bergmann @ 2018-07-12 16:02 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Wed, Jul 11, 2018 at 10:36 PM, Kees Cook <keescook@chromium.org> wrote:
> Several uses of AHASH_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
> (when less than 2048) once the VLA is no longer hidden from the check:
>
> drivers/block/drbd/drbd_worker.c:325:1: warning: the frame size of 1112 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> drivers/block/drbd/drbd_worker.c:352:1: warning: the frame size of 1120 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> crypto/ccm.c:235:1: warning: the frame size of 1184 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> drivers/md/dm-crypt.c:353:1: warning: the frame size of 1096 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> drivers/net/ppp/ppp_mppe.c:158:1: warning: the frame size of 1168 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> net/wireless/lib80211_crypt_tkip.c:537:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c:528:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> drivers/staging/rtl8192e/rtllib_crypt_tkip.c:531:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>
> This bumps the affected objects by 20% to silence the warnings while still
>> providing coverage if anything grows even more.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>

I think this is a dangerous precedent. I wouldn't really want any of
those functions to ever take more than 1024 bytes; even that is really
too much, but we can't easily lower the global limit.

You are patching all users of AHASH_REQUEST_ON_STACK with the exception of
arch/x86/power/hibernate_64.c here (which is always used on 64-bit and has
a larger limit already), which in turn suggests that the AHASH_REQUEST_ON_STACK
macro using bytes is just fundamentally broken by requiring that much space
(808 bytes for the context, plus 8 pointers for struct ahash_request, plus
CRYPTO_MINALIGN_ATTR).

How did you come up with that 808 byte number? I see a total of 39 callers
of crypto_ahash_set_reqsize(), did you check all of those individually?
If 808 bytes is the worst case, what are the next 5 ones? If there are only
a few of them that are badly written, maybe we can fix the drivers instead
and lower that number to something more reasonable.

Looking through some of the drivers, I found this interesting one:

#define SHA_BUFFER_LEN          (PAGE_SIZE / 16)
struct atmel_sha_reqctx {
...
        u8 buffer[SHA_BUFFER_LEN + SHA512_BLOCK_SIZE] __aligned(sizeof(u32));
};

which would result in overrunning the kernel stack immediately if ever
used with 64k PAGE_SIZE, since that buffer alone would be 65536/16 + 128
= 4224 bytes (we fortunately don't support that driver on any
architectures with 64k pages yet).

The other ones I looked at seem to all be well under 400 bytes (which is
still a lot to put on the stack, but probably ok).

      Arnd

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-12 16:02   ` Arnd Bergmann
@ 2018-07-12 20:17     ` Kees Cook
  2018-07-12 21:38       ` Arnd Bergmann
  2018-07-13  0:40     ` Herbert Xu
  1 sibling, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-12 20:17 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 9:02 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wed, Jul 11, 2018 at 10:36 PM, Kees Cook <keescook@chromium.org> wrote:
>> Several uses of AHASH_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
>> (when less than 2048) once the VLA is no longer hidden from the check:
>>
>> drivers/block/drbd/drbd_worker.c:325:1: warning: the frame size of 1112 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> drivers/block/drbd/drbd_worker.c:352:1: warning: the frame size of 1120 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> crypto/ccm.c:235:1: warning: the frame size of 1184 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> drivers/md/dm-crypt.c:353:1: warning: the frame size of 1096 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> drivers/net/ppp/ppp_mppe.c:158:1: warning: the frame size of 1168 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> net/wireless/lib80211_crypt_tkip.c:537:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c:528:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> drivers/staging/rtl8192e/rtllib_crypt_tkip.c:531:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>
>> This bumps the affected objects by 20% to silence the warnings while still
>> providing coverage if anything grows even more.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>
> I think this is a dangerous precedent. I wouldn't really want any of
> those functions to ever take more than 1024 bytes; even that is really
> too much, but we can't easily lower the global limit.

The issue is that these are _already_ able to use this much stack
because of the VLA. It was just hidden from the FRAME_WARN checks.

> You are patching all users of AHASH_REQUEST_ON_STACK with the exception of
> arch/x86/power/hibernate_64.c here (which is always used on 64-bit and has
> a larger limit already), which in turn suggests that the AHASH_REQUEST_ON_STACK
> macro using bytes is just fundamentally broken by requiring that much space
> (808 bytes for the context, plus 8 pointers for struct ahash_request, plus
> CRYPTO_MINALIGN_ATTR).

Yes -- it's huge. That's always been true, unfortunately.

> How did you come up with that 808 byte number? I see a total of 39 callers
> of crypto_ahash_set_reqsize(), did you check all of those individually?
> If 808 bytes is the worst case, what are the next 5 ones? If there are only
> a few of them that are badly written, maybe we can fix the drivers instead
> and lower that number to something more reasonable.

That was discussed a bit (maybe not enough?) in the next patch:
https://patchwork.kernel.org/patch/10520407/

I used tcrypt (which examines all sane combinations), and sha512
produces the 808 number. I had done an earlier manual evaluation of
all crypto_ahash_set_reqsize() callers, but Herbert and Eric pointed
out issues with my methodology: things can be recursively stacked
(so I had calculated too low), yet some things will never be stacked
together (so some pathological combinations will never happen). So I
moved to the tcrypt instrumentation approach, which tests real-world
combinations.

For example, reaching this 808 size is trivially easy to do right now
by just asking for dm-crypt to use a cipher of
capi:cbc(aes)-essiv:sha512.

> Looking through some of the drivers, I found this interesting one:
>
> #define SHA_BUFFER_LEN          (PAGE_SIZE / 16)
> struct atmel_sha_reqctx {
> ...
>         u8 buffer[SHA_BUFFER_LEN + SHA512_BLOCK_SIZE] __aligned(sizeof(u32));
> };
>
> which would result in overrunning the kernel stack immediately if ever
> used with 64k PAGE_SIZE (we fortunately don't support that driver on
> any architectures with 64k pages yet).

Right -- the large page size isn't reachable there. But we don't
overrun the kernel stack because of the check I added in
crypto_ahash_set_reqsize() in the above-mentioned patch.

> The other ones I looked at seem to all be well under 400 bytes (which is
> still a lot to put on the stack, but probably ok).

I wish sha512 was "rare", but it's not. :(

So: mainly the crypto VLA removal is about exposing all these giant
stack usages. We can work to fix them, but I want to get these fixed
so we can add -Wvla to the kernel to avoid more being added (we've had
at least 2 added during this linux-next cycle already).

IMO, we're much better off with this stack usage _actually_ being
checked (even with a 20% bump) than staying entirely hidden (as it's
been).

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-12 15:11   ` Arnd Bergmann
@ 2018-07-12 20:23     ` Kees Cook
  2018-07-12 20:30       ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-12 20:23 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List, David Howells

On Thu, Jul 12, 2018 at 8:11 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wed, Jul 11, 2018 at 10:36 PM, Kees Cook <keescook@chromium.org> wrote:
>> Two uses of SKCIPHER_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
>> (when less than 2048) once the VLA is no longer hidden from the check:
>>
>> net/rxrpc/rxkad.c:398:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> net/rxrpc/rxkad.c:242:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>
>> This bumps the affected objects by 20% to silence the warnings while
>> still providing coverage if anything grows even more.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>
> (adding David Howells to cc)
>
> I don't think these are in a fast path, it should be possible to just use
> skcipher_alloc_req() instead of SKCIPHER_REQUEST_ON_STACK() here.
> From what I can tell, neither of the two are called in atomic context, so
> you should be able to use a GFP_KERNEL allocation.

Sure, I can do that instead.
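
(A minimal sketch of that conversion, assuming the plain
skcipher_request_alloc()/skcipher_request_free() pattern -- not
necessarily what the final patch will look like:)

        struct skcipher_request *req;

        req = skcipher_request_alloc(call->conn->cipher, GFP_KERNEL);
        if (!req)
                return -ENOMEM;

        /* ... set up and run the request exactly as before ... */

        skcipher_request_free(req);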

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-12 20:23     ` Kees Cook
@ 2018-07-12 20:30       ` Kees Cook
  2018-07-12 21:15         ` Arnd Bergmann
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-12 20:30 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List, David Howells

On Thu, Jul 12, 2018 at 1:23 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 12, 2018 at 8:11 AM, Arnd Bergmann <arnd@arndb.de> wrote:
>> On Wed, Jul 11, 2018 at 10:36 PM, Kees Cook <keescook@chromium.org> wrote:
>>> Two uses of SKCIPHER_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
>>> (when less than 2048) once the VLA is no longer hidden from the check:
>>>
>>> net/rxrpc/rxkad.c:398:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> net/rxrpc/rxkad.c:242:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>>
>>> This bumps the affected objects by 20% to silence the warnings while
>>> still providing coverage if anything grows even more.
>>>
>>> Signed-off-by: Kees Cook <keescook@chromium.org>
>>
>> (adding David Howells to cc)
>>
>> I don't think these are in a fast path, it should be possible to just use
>> skcipher_alloc_req() instead of SKCIPHER_REQUEST_ON_STACK() here.
>> From what I can tell, neither of the two are called in atomic context, so
>> you should be able to use a GFP_KERNEL allocation.
>
> Sure, I can do that instead.

Actually, I think this can actually be adjusted to just re-use the
stack allocation, since rxkad_verify_packet() finishes one before
doing another in rxkad_verify_packet_1():

diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
index 278ac0807a60..d6a2e7cab384 100644
--- a/net/rxrpc/rxkad.c
+++ b/net/rxrpc/rxkad.c
@@ -316,10 +316,10 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
  */
 static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
                                 unsigned int offset, unsigned int len,
-                                rxrpc_seq_t seq)
+                                rxrpc_seq_t seq,
+                                struct skcipher_request *req)
 {
        struct rxkad_level1_hdr sechdr;
-       SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
        struct rxrpc_crypt iv;
        struct scatterlist sg[16];
        struct sk_buff *trailer;
@@ -549,7 +549,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
        case RXRPC_SECURITY_PLAIN:
                return 0;
        case RXRPC_SECURITY_AUTH:
-               return rxkad_verify_packet_1(call, skb, offset, len, seq);
+               return rxkad_verify_packet_1(call, skb, offset, len, seq, req);
        case RXRPC_SECURITY_ENCRYPT:
                return rxkad_verify_packet_2(call, skb, offset, len, seq);
        default:


-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-12 20:30       ` Kees Cook
@ 2018-07-12 21:15         ` Arnd Bergmann
  2018-07-12 21:38           ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Arnd Bergmann @ 2018-07-12 21:15 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List, David Howells

On Thu, Jul 12, 2018 at 10:30 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 12, 2018 at 1:23 PM, Kees Cook <keescook@chromium.org> wrote:
>> On Thu, Jul 12, 2018 at 8:11 AM, Arnd Bergmann <arnd@arndb.de> wrote:
>>> On Wed, Jul 11, 2018 at 10:36 PM, Kees Cook <keescook@chromium.org> wrote:
>>>> Two uses of SKCIPHER_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
>>>> (when less than 2048) once the VLA is no longer hidden from the check:
>>>>
>>>> net/rxrpc/rxkad.c:398:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>>> net/rxrpc/rxkad.c:242:1: warning: the frame size of 1152 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>>>
>>>> This bumps the affected objects by 20% to silence the warnings while
>>>> still providing coverage if anything grows even more.
>>>>
>>>> Signed-off-by: Kees Cook <keescook@chromium.org>
>>>
>>> (adding David Howells to cc)
>>>
>>> I don't think these are in a fast path, it should be possible to just use
>>> skcipher_alloc_req() instead of SKCIPHER_REQUEST_ON_STACK() here.
>>> From what I can tell, neither of the two are called in atomic context, so
>>> you should be able to use a GFP_KERNEL allocation.
>>
>> Sure, I can do that instead.
>
> Actually, I think this can actually be adjusted to just re-use the
> stack allocation, since rxkad_verify_packet() finishes one before
> doing another in rxkad_verify_packet_1():

That looks very nice, yes. The same thing is needed in
rxkad_secure_packet(), right?

       Arnd

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-11 20:36 ` [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
  2018-07-12 15:11   ` Arnd Bergmann
@ 2018-07-12 21:28   ` David Howells
  2018-07-12 21:34     ` Kees Cook
  2018-07-12 22:05   ` David Howells
  2 siblings, 1 reply; 53+ messages in thread
From: David Howells @ 2018-07-12 21:28 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: dhowells, Kees Cook, Herbert Xu, Gustavo A. R. Silva,
	Eric Biggers, Alasdair Kergon, Giovanni Cabiddu, Lars Persson,
	Mike Snitzer, Rabin Vincent, Tim Chen, David S. Miller,
	Masahiro Yamada, open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	qat-linux, dm-devel, Linux Kernel Mailing List

Can I get a cc on the original patch?

David

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-12 21:28   ` David Howells
@ 2018-07-12 21:34     ` Kees Cook
  0 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-12 21:34 UTC (permalink / raw)
  To: David Howells
  Cc: Arnd Bergmann, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

Hi David,

On Thu, Jul 12, 2018 at 2:28 PM, David Howells <dhowells@redhat.com> wrote:
> Can I get a cc on the original patch?

I'll add you to CC for future revisions. Here was the start of this thread:
https://lkml.kernel.org/r/20180711203619.1020-14-keescook@chromium.org

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-12 21:15         ` Arnd Bergmann
@ 2018-07-12 21:38           ` Kees Cook
  0 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-12 21:38 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List, David Howells

On Thu, Jul 12, 2018 at 2:15 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Thu, Jul 12, 2018 at 10:30 PM, Kees Cook <keescook@chromium.org> wrote:
>> Actually, I think this can actually be adjusted to just re-use the
>> stack allocation, since rxkad_verify_packet() finishes one before
>> doing another in rxkad_verify_packet_1():
>
> That looks very nice, yes. The same thing is needed in
> rxkad_secure_packet(), right?

Yup. 4 leaf functions and the 2 callers.

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-12 20:17     ` Kees Cook
@ 2018-07-12 21:38       ` Arnd Bergmann
  0 siblings, 0 replies; 53+ messages in thread
From: Arnd Bergmann @ 2018-07-12 21:38 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 10:17 PM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 12, 2018 at 9:02 AM, Arnd Bergmann <arnd@arndb.de> wrote:
>> On Wed, Jul 11, 2018 at 10:36 PM, Kees Cook <keescook@chromium.org> wrote:
>>> Several uses of AHASH_REQUEST_ON_STACK() will trigger FRAME_WARN warnings
>>> (when less than 2048) once the VLA is no longer hidden from the check:
>>>
>>> drivers/block/drbd/drbd_worker.c:325:1: warning: the frame size of 1112 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> drivers/block/drbd/drbd_worker.c:352:1: warning: the frame size of 1120 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> crypto/ccm.c:235:1: warning: the frame size of 1184 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> drivers/md/dm-crypt.c:353:1: warning: the frame size of 1096 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> drivers/net/ppp/ppp_mppe.c:158:1: warning: the frame size of 1168 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> net/wireless/lib80211_crypt_tkip.c:537:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c:528:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> drivers/staging/rtl8192e/rtllib_crypt_tkip.c:531:1: warning: the frame size of 1136 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>>
>>> This bumps the affected objects by 20% to silence the warnings while still
>>> providing coverage if anything grows even more.
>>>
>>> Signed-off-by: Kees Cook <keescook@chromium.org>
>>
>> I think this is a dangerous precedent, I wouldn't really want any of
>> those functions to
>> ever take more than 1024 bytes, even that is really too much, but we
>> can't easily
>> lower the global limit.
>
> The issue is that these are _already_ able to use this much stack
> because of the VLA. It was just hidden from the FRAME_WARN checks.

Yes, of course.

>> You are patching all users of AHASH_REQUEST_ON_STACK with the exception of
>> arch/x86/power/hibernate_64.c here (which is always used on 64-bit and has
>> a larger limit already), which in turn suggests that the AHASH_REQUEST_ON_STACK
>> macro using bytes is just fundamentally broken by requiring that much space
>> (808 bytes for the context, plus 8 pointers for struct ahash_request, plus
>> CRYPTO_MINALIGN_ATTR).
>
> Yes -- it's huge. That's always been true, unfortunately.
>
>> How did you come up with that 808 byte number? I see a total of 39 callers
>> of crypto_ahash_set_reqsize(), did you check all of those individually?
>> If 808 bytes is the worst case, what are the next 5 ones? If there are only
>> a few of them that are badly written, maybe we can fix the drivers instead
>> and lower that number to something more reasonable.
>
> That was discussed a bit (maybe not enough?) in the next patch:
> https://patchwork.kernel.org/patch/10520407/
>
> I used tcrypt (which examines all sane combinations) and sha512
> produces the 808 number. I had done an earlier manual evaluation of
> all crypto_ahash_set_reqsize() callers but Herbert and Eric pointed
> out issues with my methodology: things can be recursively stacked
> (so I had calculated too low), but some things will never be stacked
> together (so some pathological conditions will never happen).
> So I moved to the tcrypt instrumentation approach, which tests
> real-world combinations.
>
> For example, reaching this 808 size is trivially easy to do right now
> by just asking for dm-crypt to use a cipher of
> capi:cbc(aes)-essiv:sha512.

Ok, but is there anything that can be done to the sha512
implementation to lower that number? E.g. if a significant chunk
of struct sha512_hash_ctx is only used to hold temporary data,
could it be replaced with e.g. a percpu buffer?

>> The other ones I looked at seem to all be well under 400 bytes (which is
>> still a lot to put on the stack, but probably ok).
>
> I wish sha512 was "rare", but it's not. :(

Looking at the callers of crypto_ahash_set_reqsize(), it appears
that the only instance that is so bad is specifically
arch/x86/crypto/sha512-mb/sha512_mb.c, which is architecture
specific, and only one of multiple implementations of sha512.

Am I misreading that code, or does that mean that we could get
away with using the 808 byte limit only on x86 when
CONFIG_CRYPTO_SHA512_MB is enabled, but using a smaller
limit everywhere where else?

> So: mainly the crypto VLA removal is about exposing all these giant
> stack usages. We can work to fix them, but I want to get these fixed
> so we can add -Wvla to the kernel to avoid more being added (we've had
> at least 2 added during this linux-next cycle already).
>
> IMO, we're much better off with this stack usage _actually_ being
> checked (even with a 20% bump) than staying entirely hidden (as it's
> been).

Yes, definitely. You may recall that I spent several months tracking
down all drivers that grew to insane stack usage when CONFIG_KASAN
was enabled, so we could again turn on the existing stack check
in an allmodconfig build in order to find the normal regressions, so
I'm definitely all for improving both the actual usage and the kind
of diagnostic we have available.

I mainly want to ensure that we have tried anything within reason
to reduce the stack usage of the AHASH_REQUEST_ON_STACK()
users before we resort to changing the warning limit. I'm not
convinced that everything has been tried if we have 808 byte
structures.

      Arnd

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK
  2018-07-11 20:36 ` [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
  2018-07-12 15:11   ` Arnd Bergmann
  2018-07-12 21:28   ` David Howells
@ 2018-07-12 22:05   ` David Howells
  2 siblings, 0 replies; 53+ messages in thread
From: David Howells @ 2018-07-12 22:05 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: dhowells, Kees Cook, Herbert Xu, Gustavo A. R. Silva,
	Eric Biggers, Alasdair Kergon, Giovanni Cabiddu, Lars Persson,
	Mike Snitzer, Rabin Vincent, Tim Chen, David S. Miller,
	Masahiro Yamada, open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	qat-linux, dm-devel, Linux Kernel Mailing List

Arnd Bergmann <arnd@arndb.de> wrote:

> From what I can tell, neither of the two are called in atomic context, so
> you should be able to use a GFP_KERNEL allocation.

You need to be careful doing that since the allocation might happen in the AFS
writeback path.  I use GFP_NOIO or GFP_NOFS in rxkad.c and skb_cow_data() uses
GFP_ATOMIC - though we should have single ownership of the packet at this
point.

David

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-12 16:02   ` Arnd Bergmann
  2018-07-12 20:17     ` Kees Cook
@ 2018-07-13  0:40     ` Herbert Xu
  2018-07-13  3:33       ` Kees Cook
  2018-07-13  6:16       ` Kees Cook
  1 sibling, 2 replies; 53+ messages in thread
From: Herbert Xu @ 2018-07-13  0:40 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Kees Cook, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 06:02:26PM +0200, Arnd Bergmann wrote:
>
> Looking through some of the drivers, I found this interesting one:

As I said before these patches are fundamentally broken.  Users
of AHASH_REQUEST_ON_STACK can only use sync algorithm providers
and therefore drivers are irrelevant.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  0:40     ` Herbert Xu
@ 2018-07-13  3:33       ` Kees Cook
  2018-07-13  3:44         ` Herbert Xu
  2018-07-13  6:16       ` Kees Cook
  1 sibling, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-13  3:33 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 5:40 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Jul 12, 2018 at 06:02:26PM +0200, Arnd Bergmann wrote:
>>
>> Looking through some of the drivers, I found this interesting one:
>
> As I said before these patches are fundamentally broken.  Users
> of AHASH_REQUEST_ON_STACK can only use sync algorithm providers
> and therefore drivers are irrelevant.

I don't understand what this means. Can you give an example of what
you want to see happen that will accomplish the VLA removals?

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  3:33       ` Kees Cook
@ 2018-07-13  3:44         ` Herbert Xu
  2018-07-13  5:17           ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Herbert Xu @ 2018-07-13  3:44 UTC (permalink / raw)
  To: Kees Cook
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 08:33:24PM -0700, Kees Cook wrote:
> On Thu, Jul 12, 2018 at 5:40 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> > On Thu, Jul 12, 2018 at 06:02:26PM +0200, Arnd Bergmann wrote:
> >>
> >> Looking through some of the drivers, I found this interesting one:
> >
> > As I said before these patches are fundamentally broken.  Users
> > of AHASH_REQUEST_ON_STACK can only use sync algorithm providers
> > and therefore drivers are irrelevant.
> 
> I don't understand what this means. Can you give an example of what
> you want to see happen that will accomplish the VLA removals?

Any algorithm that is async must be ignored when you're calculating
the maximum on-stack size of the request.  For example, sha512-mb
is marked as async and therefore must not be used in conjunction
with AHASH_REQUEST_ON_STACK.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  3:44         ` Herbert Xu
@ 2018-07-13  5:17           ` Kees Cook
  2018-07-13  5:20             ` Herbert Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-13  5:17 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 8:44 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Jul 12, 2018 at 08:33:24PM -0700, Kees Cook wrote:
>> On Thu, Jul 12, 2018 at 5:40 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>> > On Thu, Jul 12, 2018 at 06:02:26PM +0200, Arnd Bergmann wrote:
>> >>
>> >> Looking through some of the drivers, I found this interesting one:
>> >
>> > As I said before these patches are fundamentally broken.  Users
>> > of AHASH_REQUEST_ON_STACK can only use sync algorithm providers
>> > and therefore drivers are irrelevant.
>>
>> I don't understand what this means. Can you give an example of what
>> you want to see happen that will accomplish the VLA removals?
>
> Any algorithm that is async must be ignored when you're calculating
> the maximum on-stack size of the request.  For example, sha512-mb
> is marked as async and therefore must not be used in conjunction
> with AHASH_REQUEST_ON_STACK.

Then why does the instrumented tcrypt output show the huge size? Is
tcrypt doing something incorrectly?

What is the correct value to use for AHASH_REQUEST_ON_STACK?

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  5:17           ` Kees Cook
@ 2018-07-13  5:20             ` Herbert Xu
  2018-07-13  6:00               ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Herbert Xu @ 2018-07-13  5:20 UTC (permalink / raw)
  To: Kees Cook
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 10:17:29PM -0700, Kees Cook wrote:
>
> Then why does the instrumented tcrypt output show the huge size? Is
> tcrypt doing something incorrectly?

tcrypt doesn't even use AHASH_REQUEST_ON_STACK so I don't understand
your point.

> What is the correct value to use for AHASH_REQUEST_ON_STACK?

As I said to arrive at a fixed value you should examine all sync
ahash algorithms (e.g., all shash ones plus ahash ones marked as
sync if there are any).

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  5:20             ` Herbert Xu
@ 2018-07-13  6:00               ` Kees Cook
  2018-07-13 10:14                 ` Arnd Bergmann
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-13  6:00 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 10:20 PM, Herbert Xu
<herbert@gondor.apana.org.au> wrote:
> On Thu, Jul 12, 2018 at 10:17:29PM -0700, Kees Cook wrote:
>>
>> Then why does the instrumented tcrypt output show the huge size? Is
>> tcrypt doing something incorrectly?
>
> tcrypt doesn't even use AHASH_REQUEST_ON_STACK so I don't understand
> your point.

It's using crypto_ahash_set_reqsize(), which is what
AHASH_REQUEST_ON_STACK() reads back via crypto_ahash_reqsize() (i.e.
tfm->reqsize). It sounds like you're saying that there are cases where
an ahash is constructed (and will call crypto_ahash_set_reqsize()) but
where it cannot be used with AHASH_REQUEST_ON_STACK()? What actually
enforces this, given that there will be a difference between the
reqsize set by crypto_ahash_set_reqsize() (as seen with sha512-mb)
and the actually allowed stack usage? (i.e. where should I perform a
check against the new fixed value?)

>> What is the correct value to use for AHASH_REQUEST_ON_STACK?
>
> As I said to arrive at a fixed value you should examine all sync
> ahash algorithms (e.g., all shash ones plus ahash ones marked as
> sync if there are any).

The "value" for the ahash I understand: it has a request size
(tfm->reqsize, as set by crypto_ahash_set_reqsize()). But what should be
used to measure the shash value? (And how does this relate to the
value returned by crypto_ahash_reqsize()?) The closest clue I can find
is this:

crypto_init_shash_ops_async() does:
        crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);

and that gets called from crypto_ahash_init_tfm(), so if it starts
with the above reqsize and adds to it with a call to
crypto_ahash_set_reqsize() later, we'll have that maximum?

So, do I want to calculate this answer as:

sizeof(struct shash_desc) + max(shash descsize) + max(ahash-sync reqsize) ?
16 + 360 + 0

It's 0 above because if I look at all the callers of
crypto_ahash_set_reqsize() that do wrapping, all are ASYNC.

So, should this really just be 376? Where is best to validate this
size, as it seems checking in crypto_ahash_set_reqsize() is
inappropriate?

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  0:40     ` Herbert Xu
  2018-07-13  3:33       ` Kees Cook
@ 2018-07-13  6:16       ` Kees Cook
  2018-07-13  6:22         ` Herbert Xu
  1 sibling, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-13  6:16 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 5:40 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Jul 12, 2018 at 06:02:26PM +0200, Arnd Bergmann wrote:
>>
>> Looking through some of the drivers, I found this interesting one:
>
> As I said before these patches are fundamentally broken.  Users
> of AHASH_REQUEST_ON_STACK can only use sync algorithm providers
> and therefore drivers are irrelevant.

I've also now gone to look at the few users of AHASH_REQUEST_ON_STACK,
and it seems like they come in two flavors:

- ones that can be trivially converts to shash (hibernate)
- things that use scatter/gather

Is this correct? It seems like you did the bulk of
AHASH_REQUEST_ON_STACK conversions in 2016. Can shash grow an sg
interface?

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  6:16       ` Kees Cook
@ 2018-07-13  6:22         ` Herbert Xu
  2018-07-14  3:07           ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Herbert Xu @ 2018-07-13  6:22 UTC (permalink / raw)
  To: Kees Cook
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 11:16:28PM -0700, Kees Cook wrote:
>
> Is this correct? It seems like you did the bulk of
> AHASH_REQUEST_ON_STACK conversions in 2016. Can shash grow an sg
> interface?

shash does not need to grow an sg interface.  All users of
AHASH_REQUEST_ON_STACK set the CRYPTO_ALG_ASYNC flag to zero
when allocating the tfm.
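
(That is, they allocate along these lines; "sha256" is only a
placeholder algorithm name for the sketch:)

        struct crypto_ahash *tfm;

        /* type = 0, mask = CRYPTO_ALG_ASYNC: only synchronous ahash
         * implementations may back this tfm */
        tfm = crypto_alloc_ahash("sha256", 0, CRYPTO_ALG_ASYNC);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);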

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  6:00               ` Kees Cook
@ 2018-07-13 10:14                 ` Arnd Bergmann
  2018-07-15  4:28                   ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Arnd Bergmann @ 2018-07-13 10:14 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Fri, Jul 13, 2018 at 8:00 AM, Kees Cook <keescook@chromium.org> wrote:
> On Thu, Jul 12, 2018 at 10:20 PM, Herbert Xu
> <herbert@gondor.apana.org.au> wrote:
>> On Thu, Jul 12, 2018 at 10:17:29PM -0700, Kees Cook wrote:
>>> What is the correct value to use for AHASH_REQUEST_ON_STACK?
>>
>> As I said to arrive at a fixed value you should examine all sync
>> ahash algorithms (e.g., all shash ones plus ahash ones marked as
>> sync if there are any).
>
> The "value" for the ahash I understand: it has a request size
> (tfm->reqsize, as set by crypto_ahash_set_reqsize()). But what should be
> used to measure the shash value? (And how does this relate to the
> value returned by crypto_ahash_reqsize()?) The closest clue I can find
> is this:
>
> crypto_init_shash_ops_async() does:
>         crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
>
> and that gets called from crypto_ahash_init_tfm(), so if it starts
> with the above reqsize and adds to it with a call to
> crypto_ahash_set_reqsize() later, we'll have that maximum?
>
> So, do I want to calculate this answer as:
>
> sizeof(struct shash_desc) + max(shash descsize) + max(ahash-sync reqsize) ?
> 16 + 360 + 0

I arrived at the same number, looking at all the sizes in shash.
The largest I found are sha3_state (360 bytes) and s390_sha_ctx
(336 bytes); everything else is way smaller.

> It's 0 above because if I look at all the callers of
> crypto_ahash_set_reqsize() that do wrapping, all are ASYNC.
>
> So, should this really just be 376? Where is best to validate this
> size, as it seems checking in crypto_ahash_set_reqsize() is
> inappropriate?

How about crypto_init_shash_ops_async()?
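
(i.e. something along these lines in that function -- a sketch only,
with HASH_MAX_DESCSIZE as an assumed name for the limit, and ignoring
the error-path cleanup a real patch would need:)

        /* reject any shash whose on-stack descriptor would not fit in
         * the fixed-size reservation */
        if (WARN_ON(sizeof(struct shash_desc) +
                    crypto_shash_descsize(shash) > HASH_MAX_DESCSIZE))
                return -EINVAL;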

      Arnd

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13  6:22         ` Herbert Xu
@ 2018-07-14  3:07           ` Kees Cook
  2018-07-15  2:44             ` Herbert Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-14  3:07 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 12, 2018 at 11:22 PM, Herbert Xu
<herbert@gondor.apana.org.au> wrote:
> On Thu, Jul 12, 2018 at 11:16:28PM -0700, Kees Cook wrote:
>>
>> Is this correct? It seems like you did the bulk of
>> AHASH_REQUEST_ON_STACK conversions in 2016. Can shash grow an sg
>> interface?
>
> shash does not need to grow an sg interface.  All users of
> AHASH_REQUEST_ON_STACK set the CRYPTO_ALG_ASYNC flag to zero
> when allocating the tfm.

On a plane today I started converting all these to shash. IIUC, it
just looks like this (apologies for whitespace damage):


 static int crypt_iv_essiv_init(struct crypt_config *cc)
 {
        struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-       AHASH_REQUEST_ON_STACK(req, essiv->hash_tfm);
-       struct scatterlist sg;
+       SHASH_DESC_ON_STACK(desc, essiv->hash_tfm);
        struct crypto_cipher *essiv_tfm;
        int err;

-       sg_init_one(&sg, cc->key, cc->key_size);
-       ahash_request_set_tfm(req, essiv->hash_tfm);
-       ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
-       ahash_request_set_crypt(req, &sg, essiv->salt, cc->key_size);
+       desc->tfm = essiv->hash_tfm;
+       desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;

-       err = crypto_ahash_digest(req);
-       ahash_request_zero(req);
+       err = crypto_shash_digest(desc, cc->key, cc->key_size, essiv->salt);
+       shash_desc_zero(desc);
        if (err)
                return err;


(I left out all the s/ahash/shash/ in types and function declarations.)

Does this look like what you were thinking of for converting these
away from ahash? The only one I couldn't make sense of was in
drivers/crypto/inside-secure/safexcel_hash.c. I have no idea what's
happening there.

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-14  3:07           ` Kees Cook
@ 2018-07-15  2:44             ` Herbert Xu
  2018-07-15  2:59               ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Herbert Xu @ 2018-07-15  2:44 UTC (permalink / raw)
  To: Kees Cook
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Fri, Jul 13, 2018 at 08:07:10PM -0700, Kees Cook wrote:
>
> On a plane today I started converting all these to shash. IIUC, it
> just looks like this (apologies for whitespace damage):

Yes if it doesn't actually make use of SGs then shash would be
the way to go.  However, for SG users ahash is the best interface.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-15  2:44             ` Herbert Xu
@ 2018-07-15  2:59               ` Kees Cook
  2018-07-16  0:01                 ` Herbert Xu
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-15  2:59 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Sat, Jul 14, 2018 at 7:44 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Fri, Jul 13, 2018 at 08:07:10PM -0700, Kees Cook wrote:
>>
>> On a plane today I started converting all these to shash. IIUC, it
>> just looks like this (apologies for whitespace damage):
>
> Yes if it doesn't actually make use of SGs then shash would be
> the way to go.  However, for SG users ahash is the best interface.

Nearly all of them artificially build an sg explicitly to use the
ahash interface. :P

So, I'll take that as a "yes, do these conversions." :) Thanks!

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-13 10:14                 ` Arnd Bergmann
@ 2018-07-15  4:28                   ` Kees Cook
  2018-07-17 20:59                     ` Arnd Bergmann
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-15  4:28 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Fri, Jul 13, 2018 at 3:14 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Fri, Jul 13, 2018 at 8:00 AM, Kees Cook <keescook@chromium.org> wrote:
>> On Thu, Jul 12, 2018 at 10:20 PM, Herbert Xu
>> <herbert@gondor.apana.org.au> wrote:
>>> On Thu, Jul 12, 2018 at 10:17:29PM -0700, Kees Cook wrote:
>>>> What is the correct value to use for AHASH_REQUEST_ON_STACK?
>>>
>>> As I said to arrive at a fixed value you should examine all sync
>>> ahash algorithms (e.g., all shash ones plus ahash ones marked as
>>> sync if there are any).
>>
>> The "value" for the ahash I understand: it has a request size
>> (tfm->reqsize, as set by crypto_ahash_set_reqsize()). But what should be
>> used to measure the shash value? (And how does this relate to the
>> value returned by crypto_ahash_reqsize()?) The closest clue I can find
>> is this:
>>
>> crypto_init_shash_ops_async() does:
>>         crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
>>
>> and that gets called from crypto_ahash_init_tfm(), so if it starts
>> with the above reqsize and adds to it with a call to
>> crypto_ahash_set_reqsize() later, we'll have that maximum?
>>
>> So, do I want to calculate this answer as:
>>
>> sizeof(struct shash_desc) + max(shash descsize) + max(ahash-sync reqsize) ?
>> 16 + 360 + 0
>
> I arrived at the same number, looking at all the sizes in shash.
> The largest I found are sha3_state (360 bytes) and s390_sha_ctx
> (336 bytes); everything else is way smaller.

Excellent. Thanks for double-checking this. :)

>
>> It's 0 above because if I look at all the callers of
>> crypto_ahash_set_reqsize() that do wrapping, all are ASYNC.
>>
>> So, should this really just be 376? Where is best to validate this
>> size, as it seems checking in crypto_ahash_set_reqsize() is
>> inappropriate?
>
> How about crypto_init_shash_ops_async()?

Ah yes, that looks good. Nice find!

After my ahash to shash conversions, only ccm is left as an ahash
user, since it actually uses sg. But with the hard-coded value reduced
to 376, this doesn't trip the frame warnings any more. :)
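
(For anyone following along, the macro then stops being a VLA and
becomes roughly the sketch below; AHASH_MAX_REQSIZE is a made-up name
here, and 376 is the bound discussed above.)

#define AHASH_MAX_REQSIZE       376     /* 16 (shash_desc) + 360 (largest sync descsize) */

/*
 * sketch: 'ahash' stays in the argument list so callers do not change,
 * even though it no longer determines the buffer size
 */
#define AHASH_REQUEST_ON_STACK(name, ahash) \
        char __##name##_desc[sizeof(struct ahash_request) + \
                AHASH_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
        struct ahash_request *name = (void *)__##name##_desc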

I'll send an updated series soon.

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-15  2:59               ` Kees Cook
@ 2018-07-16  0:01                 ` Herbert Xu
  2018-07-16  3:39                   ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Herbert Xu @ 2018-07-16  0:01 UTC (permalink / raw)
  To: Kees Cook
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Sat, Jul 14, 2018 at 07:59:09PM -0700, Kees Cook wrote:
> On Sat, Jul 14, 2018 at 7:44 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> > On Fri, Jul 13, 2018 at 08:07:10PM -0700, Kees Cook wrote:
> >>
> >> On a plane today I started converting all these to shash. IIUC, it
> >> just looks like this (apologies for whitespace damage):
> >
> > Yes if it doesn't actually make use of SGs then shash would be
> > the way to go.  However, for SG users ahash is the best interface.
> 
> Nearly all of them artificially build an sg explicitly to use the
> ahash interface. :P
> 
> So, I'll take that as a "yes, do these conversions." :) Thanks!

Yeah anything that's doing a single-element SG list should just
be converted.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-16  0:01                 ` Herbert Xu
@ 2018-07-16  3:39                   ` Kees Cook
  2018-07-16  7:24                     ` Arnd Bergmann
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-16  3:39 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Arnd Bergmann, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Sun, Jul 15, 2018 at 5:01 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Sat, Jul 14, 2018 at 07:59:09PM -0700, Kees Cook wrote:
>> On Sat, Jul 14, 2018 at 7:44 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>> > On Fri, Jul 13, 2018 at 08:07:10PM -0700, Kees Cook wrote:
>> >>
>> >> On a plane today I started converting all these to shash. IIUC, it
>> >> just looks like this (apologies for whitespace damage):
>> >
>> > Yes if it doesn't actually make use of SGs then shash would be
>> > the way to go.  However, for SG users ahash is the best interface.
>>
>> Nearly all of them artificially build an sg explicitly to use the
>> ahash interface. :P
>>
>> So, I'll take that as a "yes, do these conversions." :) Thanks!
>
> Yeah anything that's doing a single-element SG list should just
> be converted.

There are a few that use a multiple-element SG list, but it's a locally
allocated array of SGs filled with data. All are easily replaced with
calls to ..._update() instead of the sg helpers. For example,
net/wireless/lib80211_crypt_tkip.c:

-       sg_init_table(sg, 2);
-       sg_set_buf(&sg[0], hdr, 16);
-       sg_set_buf(&sg[1], data, data_len);
...
-       ahash_request_set_tfm(req, tfm_michael);
-       ahash_request_set_callback(req, 0, NULL, NULL);
-       ahash_request_set_crypt(req, sg, mic, data_len + 16);
-       err = crypto_ahash_digest(req);
-       ahash_request_zero(req);
+       err = crypto_shash_init(desc);
+       if (err)
+               goto out;
+       err = crypto_shash_update(desc, hdr, 16);
+       if (err)
+               goto out;
+       err = crypto_shash_update(desc, data, data_len);
+       if (err)
+               goto out;
+       err = crypto_shash_final(desc, mic);
+
+out:
+       shash_desc_zero(desc);
        return err;

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-16  3:39                   ` Kees Cook
@ 2018-07-16  7:24                     ` Arnd Bergmann
  0 siblings, 0 replies; 53+ messages in thread
From: Arnd Bergmann @ 2018-07-16  7:24 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Mon, Jul 16, 2018 at 5:39 AM, Kees Cook <keescook@chromium.org> wrote:
> On Sun, Jul 15, 2018 at 5:01 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>> On Sat, Jul 14, 2018 at 07:59:09PM -0700, Kees Cook wrote:
>>> On Sat, Jul 14, 2018 at 7:44 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>>> > On Fri, Jul 13, 2018 at 08:07:10PM -0700, Kees Cook wrote:
>>> >>
>>> >> On a plane today I started converting all these to shash. IIUC, it
>>> >> just looks like this (apologies for whitespace damage):
>>> >
>>> > Yes if it doesn't actually make use of SGs then shash would be
>>> > the way to go.  However, for SG users ahash is the best interface.
>>>
>>> Nearly all of them artificially build an sg explicitly to use the
>>> ahash interface. :P
>>>
>>> So, I'll take that as a "yes, do these conversions." :) Thanks!
>>
>> Yeah anything that's doing a single-element SG list should just
>> be converted.
>
> There are a few that use a multiple-element SG list, but it's a locally
> allocated array of SGs filled with data. All are easily replaced with
> calls to ..._update() instead of the sg helpers. For example,
> net/wireless/lib80211_crypt_tkip.c:
>
> -       sg_init_table(sg, 2);
> -       sg_set_buf(&sg[0], hdr, 16);
> -       sg_set_buf(&sg[1], data, data_len);
> ...
> -       ahash_request_set_tfm(req, tfm_michael);
> -       ahash_request_set_callback(req, 0, NULL, NULL);
> -       ahash_request_set_crypt(req, sg, mic, data_len + 16);
> -       err = crypto_ahash_digest(req);
> -       ahash_request_zero(req);
> +       err = crypto_shash_init(desc);
> +       if (err)
> +               goto out;
> +       err = crypto_shash_update(desc, hdr, 16);
> +       if (err)
> +               goto out;
> +       err = crypto_shash_update(desc, data, data_len);
> +       if (err)
> +               goto out;
> +       err = crypto_shash_final(desc, mic);
> +
> +out:
> +       shash_desc_zero(desc);
>         return err;

There may be a little overhead in calling crypto_shash_update()/
crypto_shash_final() repeatedly compared to calling
crypto_ahash_digest() once. It's probably no worse (or maybe
better) in this case, since we call only three times and there
is less indirection, but if there are any cases with a long sglist,
it would be good to measure the performance difference.

       Arnd

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-15  4:28                   ` Kees Cook
@ 2018-07-17 20:59                     ` Arnd Bergmann
  2018-07-18 14:50                       ` Ard Biesheuvel
  0 siblings, 1 reply; 53+ messages in thread
From: Arnd Bergmann @ 2018-07-17 20:59 UTC (permalink / raw)
  To: Kees Cook
  Cc: Herbert Xu, Gustavo A. R. Silva, Eric Biggers, Alasdair Kergon,
	Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
	Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List, Ard Biesheuvel

On Sun, Jul 15, 2018 at 6:28 AM, Kees Cook <keescook@chromium.org> wrote:
>
> After my ahash to shash conversions, only ccm is left as an ahash
> user, since it actually uses sg. But with the hard-coded value reduced
> to 376, this doesn't trip the frame warnings any more. :)
>
> I'll send an updated series soon.

Maybe we should get rid of that one as well then and remove
AHASH_REQUEST_ON_STACK()?

I see that Ard (now on Cc) added this usage only recently. Looking
at the code some more, I also find that the descsize is probably
much smaller than 376 for all possible cases   of "cbcmac(*)",
either alg->cra_blocksize plus a few bytes or sizeof(mac_desc_ctx)
(i.e. 20) for arch/arm64/crypto/aes-glue.c.

Walking the sglist here means open-coding a shash_ahash_update()
implementation in crypto_ccm_auth(), but that doesn't seem to
add much complexity over what it already has to do to chain
the sglist today.
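
(For the sake of discussion, such an open-coded walk could look roughly
like the sketch below, here using the sg_miter API; desc, sg and out
stand for whatever crypto_ccm_auth() has at hand, and chained sglists
are glossed over.)

        struct sg_mapping_iter miter;
        int err;

        err = crypto_shash_init(desc);

        /* feed each mapped sg segment to the shash in turn */
        sg_miter_start(&miter, sg, sg_nents(sg),
                       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
        while (!err && sg_miter_next(&miter))
                err = crypto_shash_update(desc, miter.addr, miter.length);
        sg_miter_stop(&miter);

        if (!err)
                err = crypto_shash_final(desc, out);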

      Arnd

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-17 20:59                     ` Arnd Bergmann
@ 2018-07-18 14:50                       ` Ard Biesheuvel
  2018-07-18 15:19                         ` Ard Biesheuvel
  0 siblings, 1 reply; 53+ messages in thread
From: Ard Biesheuvel @ 2018-07-18 14:50 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Kees Cook, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On 18 July 2018 at 05:59, Arnd Bergmann <arnd@arndb.de> wrote:
> On Sun, Jul 15, 2018 at 6:28 AM, Kees Cook <keescook@chromium.org> wrote:
>>
>> After my ahash to shash conversions, only ccm is left as an ahash
>> user, since it actually uses sg. But with the hard-coded value reduced
>> to 376, this doesn't trip the frame warnings any more. :)
>>
>> I'll send an updated series soon.
>
> Maybe we should get rid of that one as well then and remove
> AHASH_REQUEST_ON_STACK()?
>
> I see that Ard (now on Cc) added this usage only recently. Looking
> at the code some more, I also find that the descsize is probably
> much smaller than 376 for all possible cases   of "cbcmac(*)",
> either alg->cra_blocksize plus a few bytes or sizeof(mac_desc_ctx)
> (i.e. 20) for arch/arm64/crypto/aes-glue.c.
>
> Walking the sglist here means open-coding a shash_ahash_update()
>> implementation in crypto_ccm_auth(), but that doesn't seem to
> add much complexity over what it already has to do to chain
> the sglist today.
>

It would be better to add a variably sized ahash request member to
struct crypto_ccm_req_priv_ctx, the only problem is that the last
member of that struct (skreq) is variably sized already, so it would
involve having a struct ahash_request pointer pointing into the same
struct, after the skreq member.

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-18 14:50                       ` Ard Biesheuvel
@ 2018-07-18 15:19                         ` Ard Biesheuvel
  2018-07-18 15:33                           ` Arnd Bergmann
  2018-07-19  2:51                           ` Kees Cook
  0 siblings, 2 replies; 53+ messages in thread
From: Ard Biesheuvel @ 2018-07-18 15:19 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Kees Cook, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On 18 July 2018 at 23:50, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 18 July 2018 at 05:59, Arnd Bergmann <arnd@arndb.de> wrote:
>> On Sun, Jul 15, 2018 at 6:28 AM, Kees Cook <keescook@chromium.org> wrote:
>>>
>>> After my ahash to shash conversions, only ccm is left as an ahash
>>> user, since it actually uses sg. But with the hard-coded value reduced
>>> to 376, this doesn't trip the frame warnings any more. :)
>>>
>>> I'll send an updated series soon.
>>
>> Maybe we should get rid of that one as well then and remove
>> AHASH_REQUEST_ON_STACK()?
>>
>> I see that Ard (now on Cc) added this usage only recently. Looking
>> at the code some more, I also find that the descsize is probably
>> much smaller than 376 for all possible cases   of "cbcmac(*)",
>> either alg->cra_blocksize plus a few bytes or sizeof(mac_desc_ctx)
>> (i.e. 20) for arch/arm64/crypto/aes-glue.c.
>>
>> Walking the sglist here means open-coding a shash_ahash_update()
>>> implementation in crypto_ccm_auth(), but that doesn't seem to
>> add much complexity over what it already has to do to chain
>> the sglist today.
>>
>
> It would be better to add a variably sized ahash request member to
> struct crypto_ccm_req_priv_ctx, the only problem is that the last
> member of that struct (skreq) is variably sized already, so it would
> involve having a struct ahash_request pointer pointing into the same
> struct, after the skreq member.

Actually, I think the below should already do the trick: ahreq and
skreq are not used at the same time, so we can stick them in a union,
and take the max() of the reqsize to ensure there's enough empty space
after it.

--------8<----------
diff --git a/crypto/ccm.c b/crypto/ccm.c
index 0a083342ec8c..b242fd0d3262 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -50,7 +50,10 @@ struct crypto_ccm_req_priv_ctx {
        u32 flags;
        struct scatterlist src[3];
        struct scatterlist dst[3];
-       struct skcipher_request skreq;
+       union {
+               struct ahash_request ahreq;
+               struct skcipher_request skreq;
+       };
 };

 struct cbcmac_tfm_ctx {
@@ -181,7 +184,7 @@
        struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
        struct crypto_aead *aead = crypto_aead_reqtfm(req);
        struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
-       AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
+       struct ahash_request *ahreq = &pctx->ahreq;
        unsigned int assoclen = req->assoclen;
        struct scatterlist sg[3];
        u8 *odata = pctx->odata;
@@ -427,7 +430,7 @@
        crypto_aead_set_reqsize(
                tfm,
                align + sizeof(struct crypto_ccm_req_priv_ctx) +
-               crypto_skcipher_reqsize(ctr));
+               max(crypto_ahash_reqsize(mac), crypto_skcipher_reqsize(ctr)));

        return 0;

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-18 15:19                         ` Ard Biesheuvel
@ 2018-07-18 15:33                           ` Arnd Bergmann
  2018-07-18 15:33                             ` Ard Biesheuvel
  2018-07-19  2:51                           ` Kees Cook
  1 sibling, 1 reply; 53+ messages in thread
From: Arnd Bergmann @ 2018-07-18 15:33 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Kees Cook, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Wed, Jul 18, 2018 at 5:19 PM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On 18 July 2018 at 23:50, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> On 18 July 2018 at 05:59, Arnd Bergmann <arnd@arndb.de> wrote:
>>> On Sun, Jul 15, 2018 at 6:28 AM, Kees Cook <keescook@chromium.org> wrote:
>>>>
>>>> After my ahash to shash conversions, only ccm is left as an ahash
>>>> user, since it actually uses sg. But with the hard-coded value reduced
>>>> to 376, this doesn't trip the frame warnings any more. :)
>>>>
>>>> I'll send an updated series soon.
>>>
>>> Maybe we should get rid of that one as well then and remove
>>> AHASH_REQUEST_ON_STACK()?
>>>
>>> I see that Ard (now on Cc) added this usage only recently. Looking
>>> at the code some more, I also find that the descsize is probably
>>> much smaller than 376 for all possible cases   of "cbcmac(*)",
>>> either alg->cra_blocksize plus a few bytes or sizeof(mac_desc_ctx)
>>> (i.e. 20) for arch/arm64/crypto/aes-glue.c.
>>>
>>> Walking the sglist here means open-coding a shash_ahash_update()
>>>> implementation in crypto_ccm_auth(), but that doesn't seem to
>>> add much complexity over what it already has to do to chain
>>> the sglist today.
>>>
>>
>> It would be better to add a variably sized ahash request member to
>> struct crypto_ccm_req_priv_ctx, the only problem is that the last
>> member of that struct (skreq) is variably sized already, so it would
>> involve having a struct ahash_request pointer pointing into the same
>> struct, after the skreq member.
>
> Actually, I think the below should already do the trick: ahreq and
> skreq are not used at the same time, so we can stick them in a union,
> and take the max() of the reqsize to ensure there's enough empty space
> after it.

This looks very nice indeed.

> --------8<----------
> diff --git a/crypto/ccm.c b/crypto/ccm.c
> index 0a083342ec8c..b242fd0d3262 100644
> --- a/crypto/ccm.c
> +++ b/crypto/ccm.c
> @@ -50,7 +50,10 @@ struct crypto_ccm_req_priv_ctx {
>         u32 flags;
>         struct scatterlist src[3];
>         struct scatterlist dst[3];
> -       struct skcipher_request skreq;
> +       union {
> +               struct ahash_request ahreq;
> +               struct skcipher_request skreq;
> +       };
>  };
>

And this structure is never put on the stack anywhere but
always dynamically allocated anyway, right?

      Arnd

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-18 15:33                           ` Arnd Bergmann
@ 2018-07-18 15:33                             ` Ard Biesheuvel
  0 siblings, 0 replies; 53+ messages in thread
From: Ard Biesheuvel @ 2018-07-18 15:33 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Kees Cook, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On 19 July 2018 at 00:33, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wed, Jul 18, 2018 at 5:19 PM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> On 18 July 2018 at 23:50, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>>> On 18 July 2018 at 05:59, Arnd Bergmann <arnd@arndb.de> wrote:
>>>> On Sun, Jul 15, 2018 at 6:28 AM, Kees Cook <keescook@chromium.org> wrote:
>>>>>
>>>>> After my ahash to shash conversions, only ccm is left as an ahash
>>>>> user, since it actually uses sg. But with the hard-coded value reduced
>>>>> to 376, this doesn't trip the frame warnings any more. :)
>>>>>
>>>>> I'll send an updated series soon.
>>>>
>>>> Maybe we should get rid of that one as well then and remove
>>>> AHASH_REQUEST_ON_STACK()?
>>>>
>>>> I see that Ard (now on Cc) added this usage only recently. Looking
>>>> at the code some more, I also find that the descsize is probably
>>>> much smaller than 376 for all possible cases of "cbcmac(*)",
>>>> either alg->cra_blocksize plus a few bytes or sizeof(mac_desc_ctx)
>>>> (i.e. 20) for arch/arm64/crypto/aes-glue.c.
>>>>
>>>> Walking the sglist here means open-coding a shash_ahash_update()
>>>> implementation in crypto_ccm_auth(), but that doesn't seem to
>>>> add much complexity over what it already has to do to chain
>>>> the sglist today.
>>>>
>>>
>>> It would be better to add a variably sized ahash request member to
>>> struct crypto_ccm_req_priv_ctx, the only problem is that the last
>>> member of that struct (skreq) is variably sized already, so it would
>>> involve having a struct ahash_request pointer pointing into the same
>>> struct, after the skreq member.
>>
>> Actually, I think the below should already do the trick: ahreq and
>> skreq are not used at the same time, so we can stick them in a union,
>> and take the max() of the reqsize to ensure there's enough empty space
>> after it.
>
> This looks very nice indeed.
>
>> --------8<----------
>> diff --git a/crypto/ccm.c b/crypto/ccm.c
>> index 0a083342ec8c..b242fd0d3262 100644
>> --- a/crypto/ccm.c
>> +++ b/crypto/ccm.c
>> @@ -50,7 +50,10 @@ struct crypto_ccm_req_priv_ctx {
>>         u32 flags;
>>         struct scatterlist src[3];
>>         struct scatterlist dst[3];
>> -       struct skcipher_request skreq;
>> +       union {
>> +               struct ahash_request ahreq;
>> +               struct skcipher_request skreq;
>> +       };
>>  };
>>
>
> And this structure is never put on the stack anywhere but
> always dynamically allocated anyway, right?
>

Yes.
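
For reference, the reason this never lands on the stack is that the
private context is carved out of the tail of the caller-allocated
aead_request, whose size is exactly what crypto_aead_set_reqsize()
above controls. The lookup helper in crypto/ccm.c looks roughly like
this (a sketch of the existing helper from memory, so details may
differ):

	static inline struct crypto_ccm_req_priv_ctx *crypto_ccm_reqctx(
		struct aead_request *req)
	{
		unsigned long align = crypto_aead_alignmask(crypto_aead_reqtfm(req));

		/* __ctx trails the aead_request allocation, sized via
		 * crypto_aead_set_reqsize() */
		return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), align + 1);
	}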

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-18 15:19                         ` Ard Biesheuvel
  2018-07-18 15:33                           ` Arnd Bergmann
@ 2018-07-19  2:51                           ` Kees Cook
  2018-07-19  2:55                             ` Ard Biesheuvel
  1 sibling, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-19  2:51 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Arnd Bergmann, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Wed, Jul 18, 2018 at 8:19 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On 18 July 2018 at 23:50, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> On 18 July 2018 at 05:59, Arnd Bergmann <arnd@arndb.de> wrote:
>>> On Sun, Jul 15, 2018 at 6:28 AM, Kees Cook <keescook@chromium.org> wrote:
>>>>
>>>> After my ahash to shash conversions, only ccm is left as an ahash
>>>> user, since it actually uses sg. But with the hard-coded value reduced
>>>> to 376, this doesn't trip the frame warnings any more. :)
>>>>
>>>> I'll send an updated series soon.
>>>
>>> Maybe we should get rid of that one as well then and remove
>>> AHASH_REQUEST_ON_STACK()?
>>>
>>> I see that Ard (now on Cc) added this usage only recently. Looking
>>> at the code some more, I also find that the descsize is probably
>>> much smaller than 376 for all possible cases of "cbcmac(*)",
>>> either alg->cra_blocksize plus a few bytes or sizeof(mac_desc_ctx)
>>> (i.e. 20) for arch/arm64/crypto/aes-glue.c.
>>>
>>> Walking the sglist here means open-coding a shash_ahash_update()
>>> implementation in crypto_ccm_auth(), but that doesn't seem to
>>> add much complexity over what it already has to do to chain
>>> the sglist today.
>>>
>>
>> It would be better to add a variably sized ahash request member to
>> struct crypto_ccm_req_priv_ctx, the only problem is that the last
>> member of that struct (skreq) is variably sized already, so it would
>> involve having a struct ahash_request pointer pointing into the same
>> struct, after the skreq member.
>
> Actually, I think the below should already do the trick: ahreq and
> skreq are not used at the same time, so we can stick them in a union,
> and take the max() of the reqsize to ensure there's enough empty space
> after it.
>
> --------8<----------
> diff --git a/crypto/ccm.c b/crypto/ccm.c
> index 0a083342ec8c..b242fd0d3262 100644
> --- a/crypto/ccm.c
> +++ b/crypto/ccm.c
> @@ -50,7 +50,10 @@ struct crypto_ccm_req_priv_ctx {
>         u32 flags;
>         struct scatterlist src[3];
>         struct scatterlist dst[3];
> -       struct skcipher_request skreq;
> +       union {
> +               struct ahash_request ahreq;
> +               struct skcipher_request skreq;
> +       };
>  };
>
>  struct cbcmac_tfm_ctx {
> @@ -181,7 +184,7 @@
>         struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
>         struct crypto_aead *aead = crypto_aead_reqtfm(req);
>         struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
> -       AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
> +       struct ahash_request *ahreq = &pctx->ahreq;
>         unsigned int assoclen = req->assoclen;
>         struct scatterlist sg[3];
>         u8 *odata = pctx->odata;
> @@ -427,7 +430,7 @@
>         crypto_aead_set_reqsize(
>                 tfm,
>                 align + sizeof(struct crypto_ccm_req_priv_ctx) +
> -               crypto_skcipher_reqsize(ctr));
> +               max(crypto_ahash_reqsize(mac), crypto_skcipher_reqsize(ctr)));
>
>         return 0;

Oh, this is lovely! Thank you! Shall I add your S-o-b and add it to the series?

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-19  2:51                           ` Kees Cook
@ 2018-07-19  2:55                             ` Ard Biesheuvel
  2018-07-19  3:09                               ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Ard Biesheuvel @ 2018-07-19  2:55 UTC (permalink / raw)
  To: Kees Cook
  Cc: Arnd Bergmann, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List



> On 19 Jul 2018, at 11:51, Kees Cook <keescook@chromium.org> wrote:
> 
> On Wed, Jul 18, 2018 at 8:19 AM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> On 18 July 2018 at 23:50, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>>>> On 18 July 2018 at 05:59, Arnd Bergmann <arnd@arndb.de> wrote:
>>>>> On Sun, Jul 15, 2018 at 6:28 AM, Kees Cook <keescook@chromium.org> wrote:
>>>>> 
>>>>> After my ahash to shash conversions, only ccm is left as an ahash
>>>>> user, since it actually uses sg. But with the hard-coded value reduced
>>>>> to 376, this doesn't trip the frame warnings any more. :)
>>>>> 
>>>>> I'll send an updated series soon.
>>>> 
>>>> Maybe we should get rid of that one as well then and remove
>>>> AHASH_REQUEST_ON_STACK()?
>>>> 
>>>> I see that Ard (now on Cc) added this usage only recently. Looking
>>>> at the code some more, I also find that the descsize is probably
>>>> much smaller than 376 for all possible cases of "cbcmac(*)",
>>>> either alg->cra_blocksize plus a few bytes or sizeof(mac_desc_ctx)
>>>> (i.e. 20) for arch/arm64/crypto/aes-glue.c.
>>>> 
>>>> Walking the sglist here means open-coding a shash_ahash_update()
>>>> implementation in crypto_ccm_auth(), but that doesn't seem to
>>>> add much complexity over what it already has to do to chain
>>>> the sglist today.
>>>> 
>>> 
>>> It would be better to add a variably sized ahash request member to
>>> struct crypto_ccm_req_priv_ctx, the only problem is that the last
>>> member of that struct (skreq) is variably sized already, so it would
>>> involve having a struct ahash_request pointer pointing into the same
>>> struct, after the skreq member.
>> 
>> Actually, I think the below should already do the trick: ahreq and
>> skreq are not used at the same time, so we can stick them in a union,
>> and take the max() of the reqsize to ensure there's enough empty space
>> after it.
>> 
>> --------8<----------
>> diff --git a/crypto/ccm.c b/crypto/ccm.c
>> index 0a083342ec8c..b242fd0d3262 100644
>> --- a/crypto/ccm.c
>> +++ b/crypto/ccm.c
>> @@ -50,7 +50,10 @@ struct crypto_ccm_req_priv_ctx {
>>        u32 flags;
>>        struct scatterlist src[3];
>>        struct scatterlist dst[3];
>> -       struct skcipher_request skreq;
>> +       union {
>> +               struct ahash_request ahreq;
>> +               struct skcipher_request skreq;
>> +       };
>> };
>> 
>> struct cbcmac_tfm_ctx {
>> @@ -181,7 +184,7 @@
>>        struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
>>        struct crypto_aead *aead = crypto_aead_reqtfm(req);
>>        struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
>> -       AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
>> +       struct ahash_request *ahreq = &pctx->ahreq;
>>        unsigned int assoclen = req->assoclen;
>>        struct scatterlist sg[3];
>>        u8 *odata = pctx->odata;
>> @@ -427,7 +430,7 @@
>>        crypto_aead_set_reqsize(
>>                tfm,
>>                align + sizeof(struct crypto_ccm_req_priv_ctx) +
>> -               crypto_skcipher_reqsize(ctr));
>> +               max(crypto_ahash_reqsize(mac), crypto_skcipher_reqsize(ctr)));
>> 
>>        return 0;
> 
> Oh, this is lovely! Thank you! Shall I add your S-o-b and add it to the series?
> 

I have only build tested it, so if you make sure that it does not break anything, please go ahead.





^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-19  2:55                             ` Ard Biesheuvel
@ 2018-07-19  3:09                               ` Kees Cook
  2018-07-19  3:13                                 ` Ard Biesheuvel
  0 siblings, 1 reply; 53+ messages in thread
From: Kees Cook @ 2018-07-19  3:09 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Arnd Bergmann, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Wed, Jul 18, 2018 at 7:55 PM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> I have only build tested it, so if you make sure that it does not break anything, please go ahead.

I can give it a spin; what's the best way? Is CONFIG_CRYPTO_MANAGER=y
sufficient?

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-19  3:09                               ` Kees Cook
@ 2018-07-19  3:13                                 ` Ard Biesheuvel
  2018-07-19 14:54                                   ` Ard Biesheuvel
  0 siblings, 1 reply; 53+ messages in thread
From: Ard Biesheuvel @ 2018-07-19  3:13 UTC (permalink / raw)
  To: Kees Cook
  Cc: Arnd Bergmann, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List


> On 19 Jul 2018, at 12:09, Kees Cook <keescook@chromium.org> wrote:
> 
> On Wed, Jul 18, 2018 at 7:55 PM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> I have only build tested it, so if you make sure that it does not break anything, please go ahead.
> 
> I can give it a spin; what's the best way? Is CONFIG_CRYPTO_MANAGER=y
> sufficient?
> 

You should be able to test ccm(aes) with tcrypt, yes

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-19  3:13                                 ` Ard Biesheuvel
@ 2018-07-19 14:54                                   ` Ard Biesheuvel
  2018-07-19 18:44                                     ` Kees Cook
  0 siblings, 1 reply; 53+ messages in thread
From: Ard Biesheuvel @ 2018-07-19 14:54 UTC (permalink / raw)
  To: Kees Cook
  Cc: Arnd Bergmann, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On 19 July 2018 at 12:13, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
>> On 19 Jul 2018, at 12:09, Kees Cook <keescook@chromium.org> wrote:
>>
>> On Wed, Jul 18, 2018 at 7:55 PM, Ard Biesheuvel
>> <ard.biesheuvel@linaro.org> wrote:
>>> I have only build tested it, so if you make sure that it does not break anything, please go ahead.
>>
>> I can give it a spin; what's the best way? Is CONFIG_CRYPTO_MANAGER=y
>> sufficient?
>>
>
> You should be able to test ccm(aes) with tcrypt, yes

Apologies, I should have been more clear here. I was replying on my
phone while attending a meeting.

The builtin test will only kick in for chaining mode templates if they
are instantiated by something that invokes the algorithm, such as
loading tcrypt.ko with mode=37 (assuming that ccm(aes) has to be
instantiated from crypto/ccm.c and some AES cipher rather than being
provided directly by, e.g., arm64's AES-CCM driver).

I just ran the tcrypt test myself, and the patch appears to be fine. Let
me know if you want me to spin the patch.
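
For anyone wanting to reproduce the test, a minimal recipe (assuming
tcrypt is built as a module via CONFIG_CRYPTO_TEST=m; tcrypt deliberately
refuses to stay loaded after running its tests, so the modprobe error
itself is expected):

	# mode=37 runs the ccm(aes) test vectors against whatever
	# implementation ends up being instantiated
	modprobe tcrypt mode=37
	dmesg | tail		# look for ccm(aes) self-test failures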

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK
  2018-07-19 14:54                                   ` Ard Biesheuvel
@ 2018-07-19 18:44                                     ` Kees Cook
  0 siblings, 0 replies; 53+ messages in thread
From: Kees Cook @ 2018-07-19 18:44 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Arnd Bergmann, Herbert Xu, Gustavo A. R. Silva, Eric Biggers,
	Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
	Rabin Vincent, Tim Chen, David S. Miller, Masahiro Yamada,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE, qat-linux,
	dm-devel, Linux Kernel Mailing List

On Thu, Jul 19, 2018 at 7:54 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> On 19 July 2018 at 12:13, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>>
>>> On 19 Jul 2018, at 12:09, Kees Cook <keescook@chromium.org> wrote:
>>>
>>> On Wed, Jul 18, 2018 at 7:55 PM, Ard Biesheuvel
>>> <ard.biesheuvel@linaro.org> wrote:
>>>> I have only build tested it, so if you make sure that it does not break anything, please go ahead.
>>>
>>> I can give it a spin; what's the best way? Is CONFIG_CRYPTO_MANAGER=y
>>> sufficient?
>>>
>>
>> You should be able to test ccm(aes) with tcrypt, yes
>
> Apologies, I should have been more clear here. I was replying on my
> phone while attending a meeting.
>
> The builtin test will only kick in for chaining mode templates if they
> are instantiated by something that invokes the algorithm, such as
> loading tcrypt.ko with mode=37 (assuming that ccm(aes) has to be
> instantiated from crypto/ccm.c and some AES cipher rather than being
> provided directly by, e.g., arm64's AES-CCM driver)
>
> I just did the tcrypt myself, and the patch appears to be fine. Let me
> know if you want me to spin the patch.

Awesome, thanks for testing! I built a commit with you as the author.
I'll send it out with the next batch... :)

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2018-07-19 18:44 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-11 20:36 [PATCH v4 00/14] crypto: Remove VLA usage Kees Cook
2018-07-11 20:36 ` [PATCH v4 01/14] crypto: xcbc: " Kees Cook
2018-07-11 20:36 ` [PATCH v4 02/14] crypto: cbc: " Kees Cook
2018-07-11 20:36 ` [PATCH v4 03/14] crypto: shash: " Kees Cook
2018-07-11 20:36 ` [PATCH v4 04/14] dm integrity: " Kees Cook
2018-07-11 20:36 ` [PATCH v4 05/14] crypto: ahash: " Kees Cook
2018-07-11 20:36 ` [PATCH v4 06/14] dm verity fec: " Kees Cook
2018-07-11 20:36 ` [PATCH v4 07/14] crypto alg: Introduce generic max blocksize and alignmask Kees Cook
2018-07-11 20:36 ` [PATCH v4 08/14] crypto: qat: Remove VLA usage Kees Cook
2018-07-11 20:36 ` [PATCH v4 09/14] crypto: shash: Remove VLA usage in unaligned hashing Kees Cook
2018-07-11 20:36 ` [PATCH v4 10/14] kbuild: Introduce FRAME_WARN_BUMP_FLAG Kees Cook
2018-07-11 20:36 ` [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK Kees Cook
2018-07-12 16:02   ` Arnd Bergmann
2018-07-12 20:17     ` Kees Cook
2018-07-12 21:38       ` Arnd Bergmann
2018-07-13  0:40     ` Herbert Xu
2018-07-13  3:33       ` Kees Cook
2018-07-13  3:44         ` Herbert Xu
2018-07-13  5:17           ` Kees Cook
2018-07-13  5:20             ` Herbert Xu
2018-07-13  6:00               ` Kees Cook
2018-07-13 10:14                 ` Arnd Bergmann
2018-07-15  4:28                   ` Kees Cook
2018-07-17 20:59                     ` Arnd Bergmann
2018-07-18 14:50                       ` Ard Biesheuvel
2018-07-18 15:19                         ` Ard Biesheuvel
2018-07-18 15:33                           ` Arnd Bergmann
2018-07-18 15:33                             ` Ard Biesheuvel
2018-07-19  2:51                           ` Kees Cook
2018-07-19  2:55                             ` Ard Biesheuvel
2018-07-19  3:09                               ` Kees Cook
2018-07-19  3:13                                 ` Ard Biesheuvel
2018-07-19 14:54                                   ` Ard Biesheuvel
2018-07-19 18:44                                     ` Kees Cook
2018-07-13  6:16       ` Kees Cook
2018-07-13  6:22         ` Herbert Xu
2018-07-14  3:07           ` Kees Cook
2018-07-15  2:44             ` Herbert Xu
2018-07-15  2:59               ` Kees Cook
2018-07-16  0:01                 ` Herbert Xu
2018-07-16  3:39                   ` Kees Cook
2018-07-16  7:24                     ` Arnd Bergmann
2018-07-11 20:36 ` [PATCH v4 12/14] crypto: ahash: Remove " Kees Cook
2018-07-11 20:36 ` [PATCH v4 13/14] rxrpc: Prepare to remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
2018-07-12 15:11   ` Arnd Bergmann
2018-07-12 20:23     ` Kees Cook
2018-07-12 20:30       ` Kees Cook
2018-07-12 21:15         ` Arnd Bergmann
2018-07-12 21:38           ` Kees Cook
2018-07-12 21:28   ` David Howells
2018-07-12 21:34     ` Kees Cook
2018-07-12 22:05   ` David Howells
2018-07-11 20:36 ` [PATCH v4 14/14] crypto: skcipher: Remove " Kees Cook

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).