* [PATCH net-next 1/4] siphash: add cryptographically secure PRF
@ 2017-01-06 18:37 Jason A. Donenfeld
  2017-01-06 18:37 ` [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables Jason A. Donenfeld
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 18:37 UTC (permalink / raw)
  To: davem, netdev, linux-kernel
  Cc: Jason A. Donenfeld, Jean-Philippe Aumasson, Linus Torvalds,
	Eric Biggers, David Laight, Eric Dumazet

SipHash is a 64-bit keyed hash function that is actually a
cryptographically secure PRF, like HMAC. Except SipHash is super fast,
and is meant to be used as a hashtable keyed lookup function, or as a
general PRF for short input use cases, such as sequence numbers or RNG
chaining.

For the first usage:

There are a variety of attacks known as "hashtable poisoning" in which an
attacker crafts many inputs that all hash to the same value, and then
proceeds to fill up all entries of a hashbucket. This is
a realistic and well-known denial-of-service vector. Currently
hashtables use jhash, which is fast but not secure, and some kind of
rotating key scheme (or none at all, which isn't good). SipHash is meant
as a replacement for jhash in these cases.

There are a number of places in the kernel that are vulnerable to
hashtable poisoning attacks, either via userspace vectors or network
vectors, and there's not a reliable mechanism inside the kernel at the
moment to fix it. The first step toward fixing these issues is actually
getting a secure primitive into the kernel for developers to use. Then
we can, bit by bit, port things over to it as deemed appropriate.

While SipHash is extremely fast for a cryptographically secure function,
it is likely a bit slower than the insecure jhash, and so replacements
will be evaluated on a case-by-case basis based on whether or not the
difference in speed is negligible and whether or not the current jhash usage
poses a real security risk.
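
To make that concrete, the intended replacement looks roughly like the
following sketch (tuple, key, and table_size are hypothetical names,
not taken from any patch in this series):

	/* Before: unkeyed jhash; an attacker can precompute inputs
	 * that all land in the same bucket. */
	bucket = jhash(&tuple, sizeof(tuple), 0) & (table_size - 1);

	/* After: keyed siphash; bucket placement is unpredictable
	 * without the secret 128-bit key. */
	bucket = siphash(&tuple, sizeof(tuple), key) & (table_size - 1);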

For the second usage:

A few places in the kernel are using MD5 or SHA1 for creating secure
sequence numbers, syn cookies, port numbers, or fast random numbers.
SipHash is a faster, more fitting, and more secure replacement for MD5
in those situations. Replacing MD5 and SHA1 with SipHash for these uses is
obvious and straightforward, and so is submitted along with this patch
series. There shouldn't be much of a debate over its efficacy.

Dozens of languages are already using this internally for their hash
tables and PRFs. Some of the BSDs already use this in their kernels.
SipHash is a widely known high-speed solution to a widely known set of
problems, and it's time we catch up.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
---
 Documentation/siphash.txt |  79 ++++++++++++++++
 MAINTAINERS               |   7 ++
 include/linux/siphash.h   |  79 ++++++++++++++++
 lib/Kconfig.debug         |   6 +-
 lib/Makefile              |   5 +-
 lib/siphash.c             | 232 ++++++++++++++++++++++++++++++++++++++++++++++
 lib/test_siphash.c        | 119 ++++++++++++++++++++++++
 7 files changed, 522 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/siphash.txt
 create mode 100644 include/linux/siphash.h
 create mode 100644 lib/siphash.c
 create mode 100644 lib/test_siphash.c

diff --git a/Documentation/siphash.txt b/Documentation/siphash.txt
new file mode 100644
index 000000000000..39ff7f0438e7
--- /dev/null
+++ b/Documentation/siphash.txt
@@ -0,0 +1,79 @@
+         SipHash - a short input PRF
+-----------------------------------------------
+Written by Jason A. Donenfeld <jason@zx2c4.com>
+
+SipHash is a cryptographically secure PRF -- a keyed hash function -- that
+performs very well for short inputs, hence the name. It was designed by
+cryptographers Daniel J. Bernstein and Jean-Philippe Aumasson. It is intended
+as a replacement for some uses of: `jhash`, `md5_transform`, `sha_transform`,
+and so forth.
+
+SipHash takes a secret key filled with randomly generated numbers and either
+an input buffer or several input integers. It spits out an integer that is
+indistinguishable from random. You may then use that integer as part of secure
+sequence numbers, secure cookies, or mask it off for use in a hash table.
+
+1. Generating a key
+
+Keys should always be generated from a cryptographically secure source of
+random numbers, either using get_random_bytes or get_random_once:
+
+siphash_key_t key;
+get_random_bytes(key, sizeof(key));
+
+If you're not deriving your key from here, you're doing it wrong.
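+
+For a key that should be lazily initialized on first use, get_random_once
+fits the same pattern; a sketch, assuming a file-local key:
+
+static siphash_key_t key;
+
+get_random_once(key, sizeof(key));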
+
+2. Using the functions
+
+There are two variants of the function, one that takes a list of integers, and
+one that takes a buffer:
+
+u64 siphash(const void *data, size_t len, siphash_key_t key);
+
+And:
+
+u64 siphash_1u64(u64, siphash_key_t key);
+u64 siphash_2u64(u64, u64, siphash_key_t key);
+u64 siphash_3u64(u64, u64, u64, siphash_key_t key);
+u64 siphash_4u64(u64, u64, u64, u64, siphash_key_t key);
+u64 siphash_1u32(u32, siphash_key_t key);
+u64 siphash_2u32(u32, u32, siphash_key_t key);
+u64 siphash_3u32(u32, u32, u32, siphash_key_t key);
+u64 siphash_4u32(u32, u32, u32, u32, siphash_key_t key);
+
+If you pass the generic siphash function something of a constant length, it
+will constant fold at compile-time and automatically choose one of the
+optimized functions.
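+
+For example, hashing a 16-byte value through the generic function will
+compile down to siphash_2u64; a sketch (the struct and function are
+hypothetical):
+
+struct flow {
+	__le64 src;
+	__le64 dst;
+};
+
+u64 hash_flow(const struct flow *f, const siphash_key_t key)
+{
+	/* sizeof(*f) == 16 is a compile-time constant, so this call
+	 * constant-folds into siphash_2u64(). */
+	return siphash(f, sizeof(*f), key);
+}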
+
+3. Hashtable key function usage:
+
+struct some_hashtable {
+	DECLARE_HASHTABLE(hashtable, 8);
+	siphash_key_t key;
+};
+
+void init_hashtable(struct some_hashtable *table)
+{
+	get_random_bytes(table->key, sizeof(table->key));
+}
+
+static inline struct hlist_head *some_hashtable_bucket(struct some_hashtable *table, struct interesting_input *input)
+{
+	return &table->hashtable[siphash(input, sizeof(*input), table->key) & (HASH_SIZE(table->hashtable) - 1)];
+}
+
+You may then iterate like usual over the returned hash bucket.
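+
+For instance, a sketch of a lookup, assuming entries embed a struct
+hlist_node named `node' (hypothetical):
+
+struct interesting_entry {
+	struct interesting_input input;
+	struct hlist_node node;
+};
+
+struct interesting_entry *entry;
+hlist_for_each_entry(entry, some_hashtable_bucket(table, input), node) {
+	/* compare entry->input against *input, etc. */
+}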
+
+4. Security
+
+SipHash has a very high security margin, with its 128-bit key. So long as the
+key is kept secret, it is impossible for an attacker to predict the outputs of
+the function, even after observing many outputs, since brute-forcing the
+2^128 possible keys is computationally infeasible.
+
+Linux implements the "2-4" variant of SipHash.
+
+5. Resources
+
+Read the SipHash paper if you're interested in learning more:
+https://131002.net/siphash/siphash.pdf
diff --git a/MAINTAINERS b/MAINTAINERS
index cfff2c9e3d94..e1384ae37344 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11291,6 +11291,13 @@ F:	arch/arm/mach-s3c24xx/mach-bast.c
 F:	arch/arm/mach-s3c24xx/bast-ide.c
 F:	arch/arm/mach-s3c24xx/bast-irq.c
 
+SIPHASH PRF ROUTINES
+M:	Jason A. Donenfeld <Jason@zx2c4.com>
+S:	Maintained
+F:	lib/siphash.c
+F:	lib/test_siphash.c
+F:	include/linux/siphash.h
+
 TI DAVINCI MACHINE SUPPORT
 M:	Sekhar Nori <nsekhar@ti.com>
 M:	Kevin Hilman <khilman@kernel.org>
diff --git a/include/linux/siphash.h b/include/linux/siphash.h
new file mode 100644
index 000000000000..7aa666eb00d9
--- /dev/null
+++ b/include/linux/siphash.h
@@ -0,0 +1,79 @@
+/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ *
+ * This file is provided under a dual BSD/GPLv2 license.
+ *
+ * SipHash: a fast short-input PRF
+ * https://131002.net/siphash/
+ *
+ * This implementation is specifically for SipHash2-4.
+ */
+
+#ifndef _LINUX_SIPHASH_H
+#define _LINUX_SIPHASH_H
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#define SIPHASH_ALIGNMENT __alignof__(u64)
+typedef u64 siphash_key_t[2];
+
+u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t key);
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t key);
+#endif
+
+u64 siphash_1u64(const u64 a, const siphash_key_t key);
+u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t key);
+u64 siphash_3u64(const u64 a, const u64 b, const u64 c,
+		 const siphash_key_t key);
+u64 siphash_4u64(const u64 a, const u64 b, const u64 c, const u64 d,
+		 const siphash_key_t key);
+u64 siphash_1u32(const u32 a, const siphash_key_t key);
+u64 siphash_3u32(const u32 a, const u32 b, const u32 c, const siphash_key_t key);
+
+static inline u64 siphash_2u32(const u32 a, const u32 b, const siphash_key_t key)
+{
+	return siphash_1u64((u64)b << 32 | a, key);
+}
+static inline u64 siphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
+			       const siphash_key_t key)
+{
+	return siphash_2u64((u64)b << 32 | a, (u64)d << 32 | c, key);
+}
+
+
+static inline u64 ___siphash_aligned(const __le64 *data, size_t len, const siphash_key_t key)
+{
+	if (__builtin_constant_p(len) && len == 4)
+		return siphash_1u32(le32_to_cpu(data[0]), key);
+	if (__builtin_constant_p(len) && len == 8)
+		return siphash_1u64(le64_to_cpu(data[0]), key);
+	if (__builtin_constant_p(len) && len == 16)
+		return siphash_2u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
+				    key);
+	if (__builtin_constant_p(len) && len == 24)
+		return siphash_3u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
+				    le64_to_cpu(data[2]), key);
+	if (__builtin_constant_p(len) && len == 32)
+		return siphash_4u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
+				    le64_to_cpu(data[2]), le64_to_cpu(data[3]),
+				    key);
+	return __siphash_aligned(data, len, key);
+}
+
+/**
+ * siphash - compute 64-bit siphash PRF value
+ * @data: buffer to hash
+ * @len: size of @data
+ * @key: the siphash key
+ */
+static inline u64 siphash(const void *data, size_t len, const siphash_key_t key)
+{
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
+		return __siphash_unaligned(data, len, key);
+#endif
+	return ___siphash_aligned(data, len, key);
+}
+
+#endif /* _LINUX_SIPHASH_H */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b06848a104e6..3d2515a770c3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1819,9 +1819,9 @@ config TEST_HASH
 	tristate "Perform selftest on hash functions"
 	default n
 	help
-	  Enable this option to test the kernel's integer (<linux/hash,h>)
-	  and string (<linux/stringhash.h>) hash functions on boot
-	  (or module load).
+	  Enable this option to test the kernel's integer (<linux/hash.h>),
+	  string (<linux/stringhash.h>), and siphash (<linux/siphash.h>)
+	  hash functions on boot (or module load).
 
 	  This is intended to help people writing architecture-specific
 	  optimized versions.  If unsure, say N.
diff --git a/lib/Makefile b/lib/Makefile
index bc4073a8cd08..7b3008d58600 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -22,7 +22,8 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 sha1.o chacha20.o md5.o irq_regs.o argv_split.o \
 	 flex_proportions.o ratelimit.o show_mem.o \
 	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
-	 earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o win_minmax.o
+	 earlycpio.o seq_buf.o siphash.o \
+	 nmi_backtrace.o nodemask.o win_minmax.o
 
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
@@ -44,7 +45,7 @@ obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
 obj-y += kstrtox.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
-obj-$(CONFIG_TEST_HASH) += test_hash.o
+obj-$(CONFIG_TEST_HASH) += test_hash.o test_siphash.o
 obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
 obj-$(CONFIG_TEST_LKM) += test_module.o
diff --git a/lib/siphash.c b/lib/siphash.c
new file mode 100644
index 000000000000..ff2151313667
--- /dev/null
+++ b/lib/siphash.c
@@ -0,0 +1,232 @@
+/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ *
+ * This file is provided under a dual BSD/GPLv2 license.
+ *
+ * SipHash: a fast short-input PRF
+ * https://131002.net/siphash/
+ *
+ * This implementation is specifically for SipHash2-4.
+ */
+
+#include <linux/siphash.h>
+#include <asm/unaligned.h>
+
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+#include <linux/dcache.h>
+#include <asm/word-at-a-time.h>
+#endif
+
+#define SIPROUND \
+	do { \
+	v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
+	v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
+	v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
+	v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
+	} while(0)
+
+#define PREAMBLE(len) \
+	u64 v0 = 0x736f6d6570736575ULL; \
+	u64 v1 = 0x646f72616e646f6dULL; \
+	u64 v2 = 0x6c7967656e657261ULL; \
+	u64 v3 = 0x7465646279746573ULL; \
+	u64 b = ((u64)len) << 56; \
+	v3 ^= key[1]; \
+	v2 ^= key[0]; \
+	v1 ^= key[1]; \
+	v0 ^= key[0];
+
+#define POSTAMBLE \
+	v3 ^= b; \
+	SIPROUND; \
+	SIPROUND; \
+	v0 ^= b; \
+	v2 ^= 0xff; \
+	SIPROUND; \
+	SIPROUND; \
+	SIPROUND; \
+	SIPROUND; \
+	return (v0 ^ v1) ^ (v2 ^ v3);
+
+u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	PREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = le64_to_cpup(data);
+		v3 ^= m;
+		SIPROUND;
+		SIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= le32_to_cpup(data); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= le16_to_cpup(data); break;
+	case 1: b |= end[0];
+	}
+#endif
+	POSTAMBLE
+}
+EXPORT_SYMBOL(__siphash_aligned);
+
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	PREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = get_unaligned_le64(data);
+		v3 ^= m;
+		SIPROUND;
+		SIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= get_unaligned_le32(end); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= get_unaligned_le16(end); break;
+	case 1: b |= end[0];
+	}
+#endif
+	POSTAMBLE
+}
+EXPORT_SYMBOL(__siphash_unaligned);
+#endif
+
+/**
+ * siphash_1u64 - compute 64-bit siphash PRF value of a u64
+ * @first: first u64
+ * @key: the siphash key
+ */
+u64 siphash_1u64(const u64 first, const siphash_key_t key)
+{
+	PREAMBLE(8)
+	v3 ^= first;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= first;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_1u64);
+
+/**
+ * siphash_2u64 - compute 64-bit siphash PRF value of 2 u64
+ * @first: first u64
+ * @second: second u64
+ * @key: the siphash key
+ */
+u64 siphash_2u64(const u64 first, const u64 second, const siphash_key_t key)
+{
+	PREAMBLE(16)
+	v3 ^= first;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= second;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_2u64);
+
+/**
+ * siphash_3u64 - compute 64-bit siphash PRF value of 3 u64
+ * @first: first u64
+ * @second: second u64
+ * @third: third u64
+ * @key: the siphash key
+ */
+u64 siphash_3u64(const u64 first, const u64 second, const u64 third,
+		 const siphash_key_t key)
+{
+	PREAMBLE(24)
+	v3 ^= first;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= third;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_3u64);
+
+/**
+ * siphash_4u64 - compute 64-bit siphash PRF value of 4 u64
+ * @first: first u64
+ * @second: second u64
+ * @third: third u64
+ * @fourth: fourth u64
+ * @key: the siphash key
+ */
+u64 siphash_4u64(const u64 first, const u64 second, const u64 third,
+		 const u64 fourth, const siphash_key_t key)
+{
+	PREAMBLE(32)
+	v3 ^= first;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= third;
+	v3 ^= fourth;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= fourth;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_4u64);
+
+u64 siphash_1u32(const u32 first, const siphash_key_t key)
+{
+	PREAMBLE(4)
+	b |= first;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_1u32);
+
+u64 siphash_3u32(const u32 first, const u32 second, const u32 third,
+		 const siphash_key_t key)
+{
+	u64 combined = (u64)second << 32 | first;
+	PREAMBLE(12)
+	v3 ^= combined;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= combined;
+	b |= third;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_3u32);
diff --git a/lib/test_siphash.c b/lib/test_siphash.c
new file mode 100644
index 000000000000..e0ba2cf8dc67
--- /dev/null
+++ b/lib/test_siphash.c
@@ -0,0 +1,119 @@
+/* Test cases for siphash.c
+ *
+ * Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ *
+ * This file is provided under a dual BSD/GPLv2 license.
+ *
+ * SipHash: a fast short-input PRF
+ * https://131002.net/siphash/
+ *
+ * This implementation is specifically for SipHash2-4.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/siphash.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+
+/* Test vectors taken from official reference source available at:
+ *     https://131002.net/siphash/siphash24.c
+ */
+static const u64 test_vectors[64] = {
+	0x726fdb47dd0e0e31ULL, 0x74f839c593dc67fdULL, 0x0d6c8009d9a94f5aULL,
+	0x85676696d7fb7e2dULL, 0xcf2794e0277187b7ULL, 0x18765564cd99a68dULL,
+	0xcbc9466e58fee3ceULL, 0xab0200f58b01d137ULL, 0x93f5f5799a932462ULL,
+	0x9e0082df0ba9e4b0ULL, 0x7a5dbbc594ddb9f3ULL, 0xf4b32f46226bada7ULL,
+	0x751e8fbc860ee5fbULL, 0x14ea5627c0843d90ULL, 0xf723ca908e7af2eeULL,
+	0xa129ca6149be45e5ULL, 0x3f2acc7f57c29bdbULL, 0x699ae9f52cbe4794ULL,
+	0x4bc1b3f0968dd39cULL, 0xbb6dc91da77961bdULL, 0xbed65cf21aa2ee98ULL,
+	0xd0f2cbb02e3b67c7ULL, 0x93536795e3a33e88ULL, 0xa80c038ccd5ccec8ULL,
+	0xb8ad50c6f649af94ULL, 0xbce192de8a85b8eaULL, 0x17d835b85bbb15f3ULL,
+	0x2f2e6163076bcfadULL, 0xde4daaaca71dc9a5ULL, 0xa6a2506687956571ULL,
+	0xad87a3535c49ef28ULL, 0x32d892fad841c342ULL, 0x7127512f72f27cceULL,
+	0xa7f32346f95978e3ULL, 0x12e0b01abb051238ULL, 0x15e034d40fa197aeULL,
+	0x314dffbe0815a3b4ULL, 0x027990f029623981ULL, 0xcadcd4e59ef40c4dULL,
+	0x9abfd8766a33735cULL, 0x0e3ea96b5304a7d0ULL, 0xad0c42d6fc585992ULL,
+	0x187306c89bc215a9ULL, 0xd4a60abcf3792b95ULL, 0xf935451de4f21df2ULL,
+	0xa9538f0419755787ULL, 0xdb9acddff56ca510ULL, 0xd06c98cd5c0975ebULL,
+	0xe612a3cb9ecba951ULL, 0xc766e62cfcadaf96ULL, 0xee64435a9752fe72ULL,
+	0xa192d576b245165aULL, 0x0a8787bf8ecb74b2ULL, 0x81b3e73d20b49b6fULL,
+	0x7fa8220ba3b2eceaULL, 0x245731c13ca42499ULL, 0xb78dbfaf3a8d83bdULL,
+	0xea1ad565322a1a0bULL, 0x60e61c23a3795013ULL, 0x6606d7e446282b93ULL,
+	0x6ca4ecb15c5f91e1ULL, 0x9f626da15c9625f3ULL, 0xe51b38608ef25f57ULL,
+	0x958a324ceb064572ULL
+};
+static const siphash_key_t test_key =
+	{ 0x0706050403020100ULL , 0x0f0e0d0c0b0a0908ULL };
+
+static int __init siphash_test_init(void)
+{
+	u8 in[64] __aligned(SIPHASH_ALIGNMENT);
+	u8 in_unaligned[65];
+	u8 i;
+	int ret = 0;
+
+	for (i = 0; i < 64; ++i) {
+		in[i] = i;
+		in_unaligned[i + 1] = i;
+		if (siphash(in, i, test_key) != test_vectors[i]) {
+			pr_info("self-test aligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+		if (siphash(in_unaligned + 1, i, test_key) != test_vectors[i]) {
+			pr_info("self-test unaligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+	}
+	if (siphash_1u64(0x0706050403020100ULL, test_key) != test_vectors[8]) {
+		pr_info("self-test 1u64: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key) != test_vectors[16]) {
+		pr_info("self-test 2u64: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
+			 0x1716151413121110ULL, test_key) != test_vectors[24]) {
+		pr_info("self-test 3u64: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
+			 0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key) != test_vectors[32]) {
+		pr_info("self-test 4u64: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_1u32(0x03020100U, test_key) != test_vectors[4]) {
+		pr_info("self-test 1u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_2u32(0x03020100U, 0x07060504U, test_key) != test_vectors[8]) {
+		pr_info("self-test 2u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_3u32(0x03020100U, 0x07060504U,
+			 0x0b0a0908U, test_key) != test_vectors[12]) {
+		pr_info("self-test 3u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_4u32(0x03020100U, 0x07060504U,
+			 0x0b0a0908U, 0x0f0e0d0cU, test_key) != test_vectors[16]) {
+		pr_info("self-test 4u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (!ret)
+		pr_info("self-tests: pass\n");
+	return ret;
+}
+
+static void __exit siphash_test_exit(void)
+{
+}
+
+module_init(siphash_test_init);
+module_exit(siphash_test_exit);
+
+MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
+MODULE_LICENSE("Dual BSD/GPL");
-- 
2.11.0


* [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables
  2017-01-06 18:37 [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
@ 2017-01-06 18:37 ` Jason A. Donenfeld
  2017-01-06 18:37 ` [PATCH net-next 3/4] secure_seq: use SipHash in place of MD5 Jason A. Donenfeld
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 18:37 UTC (permalink / raw)
  To: davem, netdev, linux-kernel; +Cc: Jason A. Donenfeld, Jean-Philippe Aumasson

HalfSipHash, or hsiphash, is a shortened version of SipHash, which
generates 32-bit outputs using a weaker 64-bit key. It has *much* lower
security margins, and shouldn't be used for anything too sensitive, but
it could be used as a hashtable key function replacement, if the output
is never exposed, and if the security requirement is not too high.

The goal is to make this something that performance-critical jhash users
would be willing to use.

On 64-bit machines, HalfSipHash1-3 is slower than SipHash1-3, so on those
systems hsiphash() is aliased to SipHash1-3 instead.

64-bit x86_64:
[    0.509409] test_siphash:     SipHash2-4 cycles: 4049181
[    0.510650] test_siphash:     SipHash1-3 cycles: 2512884
[    0.512205] test_siphash: HalfSipHash1-3 cycles: 3429920
[    0.512904] test_siphash:    JenkinsHash cycles:  978267
So, we map hsiphash() -> SipHash1-3

32-bit x86:
[    0.509868] test_siphash:     SipHash2-4 cycles: 14812892
[    0.513601] test_siphash:     SipHash1-3 cycles:  9510710
[    0.515263] test_siphash: HalfSipHash1-3 cycles:  3856157
[    0.515952] test_siphash:    JenkinsHash cycles:  1148567
So, we map hsiphash() -> HalfSipHash1-3

hsiphash() is roughly 3 times slower than jhash(), but comes with a
considerable security improvement.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
---
 Documentation/siphash.txt |  75 +++++++++++
 include/linux/siphash.h   |  56 +++++++-
 lib/siphash.c             | 318 +++++++++++++++++++++++++++++++++++++++++++++-
 lib/test_siphash.c        | 139 ++++++++++++++++----
 4 files changed, 561 insertions(+), 27 deletions(-)

diff --git a/Documentation/siphash.txt b/Documentation/siphash.txt
index 39ff7f0438e7..f93c1d7104c4 100644
--- a/Documentation/siphash.txt
+++ b/Documentation/siphash.txt
@@ -77,3 +77,78 @@ Linux implements the "2-4" variant of SipHash.
 
 Read the SipHash paper if you're interested in learning more:
 https://131002.net/siphash/siphash.pdf
+
+
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
+
+HalfSipHash - SipHash's insecure younger cousin
+-----------------------------------------------
+Written by Jason A. Donenfeld <jason@zx2c4.com>
+
+On the off-chance that SipHash is not fast enough for your needs, you might be
+able to justify using HalfSipHash, a terrifying but potentially useful
+possibility. HalfSipHash cuts SipHash's rounds down from "2-4" to "1-3" and,
+even scarier, uses an easily brute-forceable 64-bit key (with a 32-bit output)
+instead of SipHash's 128-bit key. However, this may appeal to some
+high-performance `jhash` users.
+
+Danger!
+
+Do not ever use HalfSipHash except as a hashtable key function, and only
+then when you can be absolutely certain that the outputs will never be
+transmitted out of the kernel. This is only remotely useful over `jhash` as a
+means of mitigating hashtable flooding denial of service attacks.
+
+1. Generating a key
+
+Keys should always be generated from a cryptographically secure source of
+random numbers, either using get_random_bytes or get_random_once:
+
+hsiphash_key_t key;
+get_random_bytes(key, sizeof(key));
+
+If you're not deriving your key from here, you're doing it wrong.
+
+2. Using the functions
+
+There are two variants of the function, one that takes a list of integers, and
+one that takes a buffer:
+
+u32 hsiphash(const void *data, size_t len, hsiphash_key_t key);
+
+And:
+
+u32 hsiphash_1u32(u32, hsiphash_key_t key);
+u32 hsiphash_2u32(u32, u32, hsiphash_key_t key);
+u32 hsiphash_3u32(u32, u32, u32, hsiphash_key_t key);
+u32 hsiphash_4u32(u32, u32, u32, u32, hsiphash_key_t key);
+
+If you pass the generic hsiphash function something of a constant length, it
+will constant fold at compile-time and automatically choose one of the
+optimized functions.
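+
+For example, hashing an 8-byte value through the generic function will
+typically compile down to hsiphash_2u32; a sketch (the struct and
+function are hypothetical):
+
+struct two_words {
+	__le32 a;
+	__le32 b;
+};
+
+u32 hash_two_words(const struct two_words *w, const hsiphash_key_t key)
+{
+	/* sizeof(*w) == 8 is a compile-time constant, so this call
+	 * constant-folds into hsiphash_2u32(). */
+	return hsiphash(w, sizeof(*w), key);
+}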
+
+3. Hashtable key function usage:
+
+struct some_hashtable {
+	DECLARE_HASHTABLE(hashtable, 8);
+	hsiphash_key_t key;
+};
+
+void init_hashtable(struct some_hashtable *table)
+{
+	get_random_bytes(table->key, sizeof(table->key));
+}
+
+static inline struct hlist_head *some_hashtable_bucket(struct some_hashtable *table, struct interesting_input *input)
+{
+	return &table->hashtable[hsiphash(input, sizeof(*input), table->key) & (HASH_SIZE(table->hashtable) - 1)];
+}
+
+You may then iterate like usual over the returned hash bucket.
+
+4. Performance
+
+HalfSipHash is roughly 3 times slower than JenkinsHash. For many replacements,
+this will not be a problem, as the hashtable lookup isn't the bottleneck. And
+in general, this is probably a good sacrifice to make for the security and DoS
+resistance of HalfSipHash.
diff --git a/include/linux/siphash.h b/include/linux/siphash.h
index 7aa666eb00d9..efab44c654f3 100644
--- a/include/linux/siphash.h
+++ b/include/linux/siphash.h
@@ -5,7 +5,9 @@
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #ifndef _LINUX_SIPHASH_H
@@ -76,4 +78,56 @@ static inline u64 siphash(const void *data, size_t len, const siphash_key_t key)
 	return ___siphash_aligned(data, len, key);
 }
 
+#if BITS_PER_LONG == 64
+typedef siphash_key_t hsiphash_key_t;
+#define HSIPHASH_ALIGNMENT SIPHASH_ALIGNMENT
+#else
+typedef u32 hsiphash_key_t[2];
+#define HSIPHASH_ALIGNMENT __alignof__(u32)
+#endif
+
+u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key);
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key);
+#endif
+
+u32 hsiphash_1u32(const u32 a, const hsiphash_key_t key);
+u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t key);
+u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c,
+		  const hsiphash_key_t key);
+u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
+		  const hsiphash_key_t key);
+
+static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, const hsiphash_key_t key)
+{
+	if (__builtin_constant_p(len) && len == 4)
+		return hsiphash_1u32(le32_to_cpu(data[0]), key);
+	if (__builtin_constant_p(len) && len == 8)
+		return hsiphash_2u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     key);
+	if (__builtin_constant_p(len) && len == 12)
+		return hsiphash_3u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     le32_to_cpu(data[2]), key);
+	if (__builtin_constant_p(len) && len == 16)
+		return hsiphash_4u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     le32_to_cpu(data[2]), le32_to_cpu(data[3]),
+				     key);
+	return __hsiphash_aligned(data, len, key);
+}
+
+/**
+ * hsiphash - compute 32-bit hsiphash PRF value
+ * @data: buffer to hash
+ * @len: size of @data
+ * @key: the hsiphash key
+ */
+static inline u32 hsiphash(const void *data, size_t len, const hsiphash_key_t key)
+{
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
+		return __hsiphash_unaligned(data, len, key);
+#endif
+	return ___hsiphash_aligned(data, len, key);
+}
+
 #endif /* _LINUX_SIPHASH_H */
diff --git a/lib/siphash.c b/lib/siphash.c
index ff2151313667..e2481226d96c 100644
--- a/lib/siphash.c
+++ b/lib/siphash.c
@@ -5,7 +5,9 @@
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #include <linux/siphash.h>
@@ -230,3 +232,317 @@ u64 siphash_3u32(const u32 first, const u32 second, const u32 third,
 	POSTAMBLE
 }
 EXPORT_SYMBOL(siphash_3u32);
+
+#if BITS_PER_LONG == 64
+/* Note that this HalfSipHash1-3 implementation on 64-bit
+ * isn't actually HalfSipHash1-3 but rather SipHash1-3. */
+
+#define HSIPROUND SIPROUND
+#define HPREAMBLE(len) PREAMBLE(len)
+#define HPOSTAMBLE \
+	v3 ^= b; \
+	HSIPROUND; \
+	v0 ^= b; \
+	v2 ^= 0xff; \
+	HSIPROUND; \
+	HSIPROUND; \
+	HSIPROUND; \
+	return (v0 ^ v1) ^ (v2 ^ v3);
+
+u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = le64_to_cpup(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= le32_to_cpup(data); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= le16_to_cpup(data); break;
+	case 1: b |= end[0];
+	}
+#endif
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_aligned);
+
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = get_unaligned_le64(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= get_unaligned_le32(end); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= get_unaligned_le16(end); break;
+	case 1: b |= end[0];
+	}
+#endif
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_unaligned);
+#endif
+
+/**
+ * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+ * @first: first u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_1u32(const u32 first, const hsiphash_key_t key)
+{
+	HPREAMBLE(4)
+	b |= first;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_1u32);
+
+/**
+ * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
+ * @first: first u32
+ * @second: second u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(8)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_2u32);
+
+/**
+ * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
+		  const hsiphash_key_t key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(12)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	b |= third;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_3u32);
+
+/**
+ * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @fourth: fourth u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
+		  const u32 fourth, const hsiphash_key_t key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(16)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	combined = (u64)fourth << 32 | third;
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_4u32);
+#else
+#define HSIPROUND \
+	do { \
+	v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
+	v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
+	v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
+	v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
+	} while(0)
+
+#define HPREAMBLE(len) \
+	u32 v0 = 0; \
+	u32 v1 = 0; \
+	u32 v2 = 0x6c796765U; \
+	u32 v3 = 0x74656462U; \
+	u32 b = ((u32)len) << 24; \
+	v3 ^= key[1]; \
+	v2 ^= key[0]; \
+	v1 ^= key[1]; \
+	v0 ^= key[0];
+
+#define HPOSTAMBLE \
+	v3 ^= b; \
+	HSIPROUND; \
+	v0 ^= b; \
+	v2 ^= 0xff; \
+	HSIPROUND; \
+	HSIPROUND; \
+	HSIPROUND; \
+	return v1 ^ v3;
+
+u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u32));
+	const u8 left = len & (sizeof(u32) - 1);
+	u32 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u32)) {
+		m = le32_to_cpup(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+	switch (left) {
+	case 3: b |= ((u32)end[2]) << 16;
+	case 2: b |= le16_to_cpup(data); break;
+	case 1: b |= end[0];
+	}
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_aligned);
+
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u32));
+	const u8 left = len & (sizeof(u32) - 1);
+	u32 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u32)) {
+		m = get_unaligned_le32(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+	switch (left) {
+	case 3: b |= ((u32)end[2]) << 16;
+	case 2: b |= get_unaligned_le16(end); break;
+	case 1: b |= end[0];
+	}
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_unaligned);
+#endif
+
+/**
+ * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+ * @first: first u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_1u32(const u32 first, const hsiphash_key_t key)
+{
+	HPREAMBLE(4)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_1u32);
+
+/**
+ * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
+ * @first: first u32
+ * @second: second u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t key)
+{
+	HPREAMBLE(8)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_2u32);
+
+/**
+ * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
+		  const hsiphash_key_t key)
+{
+	HPREAMBLE(12)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	HSIPROUND;
+	v0 ^= third;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_3u32);
+
+/**
+ * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @fourth: fourth u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
+		  const u32 fourth, const hsiphash_key_t key)
+{
+	HPREAMBLE(16)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	HSIPROUND;
+	v0 ^= third;
+	v3 ^= fourth;
+	HSIPROUND;
+	v0 ^= fourth;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_4u32);
+#endif
diff --git a/lib/test_siphash.c b/lib/test_siphash.c
index e0ba2cf8dc67..ac291ec27fb6 100644
--- a/lib/test_siphash.c
+++ b/lib/test_siphash.c
@@ -7,7 +7,9 @@
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -18,10 +20,16 @@
 #include <linux/errno.h>
 #include <linux/module.h>
 
-/* Test vectors taken from official reference source available at:
- *     https://131002.net/siphash/siphash24.c
+/* Test vectors taken from reference source available at:
+ *     https://github.com/veorq/SipHash
  */
-static const u64 test_vectors[64] = {
+
+
+
+static const siphash_key_t test_key_siphash =
+	{ 0x0706050403020100ULL , 0x0f0e0d0c0b0a0908ULL };
+
+static const u64 test_vectors_siphash[64] = {
 	0x726fdb47dd0e0e31ULL, 0x74f839c593dc67fdULL, 0x0d6c8009d9a94f5aULL,
 	0x85676696d7fb7e2dULL, 0xcf2794e0277187b7ULL, 0x18765564cd99a68dULL,
 	0xcbc9466e58fee3ceULL, 0xab0200f58b01d137ULL, 0x93f5f5799a932462ULL,
@@ -45,9 +53,64 @@ static const u64 test_vectors[64] = {
 	0x6ca4ecb15c5f91e1ULL, 0x9f626da15c9625f3ULL, 0xe51b38608ef25f57ULL,
 	0x958a324ceb064572ULL
 };
-static const siphash_key_t test_key =
+#if BITS_PER_LONG == 64
+static const hsiphash_key_t test_key_hsiphash =
 	{ 0x0706050403020100ULL , 0x0f0e0d0c0b0a0908ULL };
 
+static const u32 test_vectors_hsiphash[64] = {
+	0x050fc4dcU, 0x7d57ca93U, 0x4dc7d44dU,
+	0xe7ddf7fbU, 0x88d38328U, 0x49533b67U,
+	0xc59f22a7U, 0x9bb11140U, 0x8d299a8eU,
+	0x6c063de4U, 0x92ff097fU, 0xf94dc352U,
+	0x57b4d9a2U, 0x1229ffa7U, 0xc0f95d34U,
+	0x2a519956U, 0x7d908b66U, 0x63dbd80cU,
+	0xb473e63eU, 0x8d297d1cU, 0xa6cce040U,
+	0x2b45f844U, 0xa320872eU, 0xdae6c123U,
+	0x67349c8cU, 0x705b0979U, 0xca9913a5U,
+	0x4ade3b35U, 0xef6cd00dU, 0x4ab1e1f4U,
+	0x43c5e663U, 0x8c21d1bcU, 0x16a7b60dU,
+	0x7a8ff9bfU, 0x1f2a753eU, 0xbf186b91U,
+	0xada26206U, 0xa3c33057U, 0xae3a36a1U,
+	0x7b108392U, 0x99e41531U, 0x3f1ad944U,
+	0xc8138825U, 0xc28949a6U, 0xfaf8876bU,
+	0x9f042196U, 0x68b1d623U, 0x8b5114fdU,
+	0xdf074c46U, 0x12cc86b3U, 0x0a52098fU,
+	0x9d292f9aU, 0xa2f41f12U, 0x43a71ed0U,
+	0x73f0bce6U, 0x70a7e980U, 0x243c6d75U,
+	0xfdb71513U, 0xa67d8a08U, 0xb7e8f148U,
+	0xf7a644eeU, 0x0f1837f2U, 0x4b6694e0U,
+	0xb7bbb3a8U
+};
+#else
+static const hsiphash_key_t test_key_hsiphash =
+	{ 0x03020100U, 0x07060504U };
+
+static const u32 test_vectors_hsiphash[64] = {
+	0x5814c896U, 0xe7e864caU, 0xbc4b0e30U,
+	0x01539939U, 0x7e059ea6U, 0x88e3d89bU,
+	0xa0080b65U, 0x9d38d9d6U, 0x577999b1U,
+	0xc839caedU, 0xe4fa32cfU, 0x959246eeU,
+	0x6b28096cU, 0x66dd9cd6U, 0x16658a7cU,
+	0xd0257b04U, 0x8b31d501U, 0x2b1cd04bU,
+	0x06712339U, 0x522aca67U, 0x911bb605U,
+	0x90a65f0eU, 0xf826ef7bU, 0x62512debU,
+	0x57150ad7U, 0x5d473507U, 0x1ec47442U,
+	0xab64afd3U, 0x0a4100d0U, 0x6d2ce652U,
+	0x2331b6a3U, 0x08d8791aU, 0xbc6dda8dU,
+	0xe0f6c934U, 0xb0652033U, 0x9b9851ccU,
+	0x7c46fb7fU, 0x732ba8cbU, 0xf142997aU,
+	0xfcc9aa1bU, 0x05327eb2U, 0xe110131cU,
+	0xf9e5e7c0U, 0xa7d708a6U, 0x11795ab1U,
+	0x65671619U, 0x9f5fff91U, 0xd89c5267U,
+	0x007783ebU, 0x95766243U, 0xab639262U,
+	0x9c7e1390U, 0xc368dda6U, 0x38ddc455U,
+	0xfa13d379U, 0x979ea4e8U, 0x53ecd77eU,
+	0x2ee80657U, 0x33dbb66aU, 0xae3f0577U,
+	0x88b4c4ccU, 0x3e7f480bU, 0x74c1ebf8U,
+	0x87178304U
+};
+#endif
+
 static int __init siphash_test_init(void)
 {
 	u8 in[64] __aligned(SIPHASH_ALIGNMENT);
@@ -58,49 +121,75 @@ static int __init siphash_test_init(void)
 	for (i = 0; i < 64; ++i) {
 		in[i] = i;
 		in_unaligned[i + 1] = i;
-		if (siphash(in, i, test_key) != test_vectors[i]) {
-			pr_info("self-test aligned %u: FAIL\n", i + 1);
+		if (siphash(in, i, test_key_siphash) != test_vectors_siphash[i]) {
+			pr_info("siphash self-test aligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+		if (siphash(in_unaligned + 1, i, test_key_siphash) != test_vectors_siphash[i]) {
+			pr_info("siphash self-test unaligned %u: FAIL\n", i + 1);
 			ret = -EINVAL;
 		}
-		if (siphash(in_unaligned + 1, i, test_key) != test_vectors[i]) {
-			pr_info("self-test unaligned %u: FAIL\n", i + 1);
+		if (hsiphash(in, i, test_key_hsiphash) != test_vectors_hsiphash[i]) {
+			pr_info("hsiphash self-test aligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+		if (hsiphash(in_unaligned + 1, i, test_key_hsiphash) != test_vectors_hsiphash[i]) {
+			pr_info("hsiphash self-test unaligned %u: FAIL\n", i + 1);
 			ret = -EINVAL;
 		}
 	}
-	if (siphash_1u64(0x0706050403020100ULL, test_key) != test_vectors[8]) {
-		pr_info("self-test 1u64: FAIL\n");
+	if (siphash_1u64(0x0706050403020100ULL, test_key_siphash) != test_vectors_siphash[8]) {
+		pr_info("siphash self-test 1u64: FAIL\n");
 		ret = -EINVAL;
 	}
-	if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key) != test_vectors[16]) {
-		pr_info("self-test 2u64: FAIL\n");
+	if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key_siphash) != test_vectors_siphash[16]) {
+		pr_info("siphash self-test 2u64: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
-			 0x1716151413121110ULL, test_key) != test_vectors[24]) {
-		pr_info("self-test 3u64: FAIL\n");
+			 0x1716151413121110ULL, test_key_siphash) != test_vectors_siphash[24]) {
+		pr_info("siphash self-test 3u64: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
-			 0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key) != test_vectors[32]) {
-		pr_info("self-test 4u64: FAIL\n");
+			 0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key_siphash) != test_vectors_siphash[32]) {
+		pr_info("siphash self-test 4u64: FAIL\n");
 		ret = -EINVAL;
 	}
-	if (siphash_1u32(0x03020100U, test_key) != test_vectors[4]) {
-		pr_info("self-test 1u32: FAIL\n");
+	if (siphash_1u32(0x03020100U, test_key_siphash) != test_vectors_siphash[4]) {
+		pr_info("siphash self-test 1u32: FAIL\n");
 		ret = -EINVAL;
 	}
-	if (siphash_2u32(0x03020100U, 0x07060504U, test_key) != test_vectors[8]) {
-		pr_info("self-test 2u32: FAIL\n");
+	if (siphash_2u32(0x03020100U, 0x07060504U, test_key_siphash) != test_vectors_siphash[8]) {
+		pr_info("siphash self-test 2u32: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (siphash_3u32(0x03020100U, 0x07060504U,
-			 0x0b0a0908U, test_key) != test_vectors[12]) {
-		pr_info("self-test 3u32: FAIL\n");
+			 0x0b0a0908U, test_key_siphash) != test_vectors_siphash[12]) {
+		pr_info("siphash self-test 3u32: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (siphash_4u32(0x03020100U, 0x07060504U,
-			 0x0b0a0908U, 0x0f0e0d0cU, test_key) != test_vectors[16]) {
-		pr_info("self-test 4u32: FAIL\n");
+			 0x0b0a0908U, 0x0f0e0d0cU, test_key_siphash) != test_vectors_siphash[16]) {
+		pr_info("siphash self-test 4u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_1u32(0x03020100U, test_key_hsiphash) != test_vectors_hsiphash[4]) {
+		pr_info("hsiphash self-test 1u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_2u32(0x03020100U, 0x07060504U, test_key_hsiphash) != test_vectors_hsiphash[8]) {
+		pr_info("hsiphash self-test 2u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_3u32(0x03020100U, 0x07060504U,
+			  0x0b0a0908U, test_key_hsiphash) != test_vectors_hsiphash[12]) {
+		pr_info("hsiphash self-test 3u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_4u32(0x03020100U, 0x07060504U,
+			  0x0b0a0908U, 0x0f0e0d0cU, test_key_hsiphash) != test_vectors_hsiphash[16]) {
+		pr_info("hsiphash self-test 4u32: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (!ret)
-- 
2.11.0


* [PATCH net-next 3/4] secure_seq: use SipHash in place of MD5
  2017-01-06 18:37 [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
  2017-01-06 18:37 ` [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables Jason A. Donenfeld
@ 2017-01-06 18:37 ` Jason A. Donenfeld
  2017-01-06 18:37 ` [PATCH net-next 4/4] syncookies: use SipHash in place of SHA1 Jason A. Donenfeld
  2017-01-06 19:57 ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF David Miller
  3 siblings, 0 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 18:37 UTC (permalink / raw)
  To: davem, netdev, linux-kernel
  Cc: Jason A. Donenfeld, Andi Kleen, David Laight, Tom Herbert,
	Hannes Frederic Sowa, Eric Dumazet

This gives a clear speed and security improvement. SipHash is both
faster and more solid crypto than the aging MD5.

Rather than manually filling MD5 buffers, for IPv6 we simply lay out
the input with an anonymous struct, for which gcc generates rather
efficient code. For IPv4, we pass the values directly to the
short-input convenience functions.

64-bit x86_64:
[    1.683628] secure_tcpv6_sequence_number_md5# cycles: 99563527
[    1.717350] secure_tcp_sequence_number_md5# cycles: 92890502
[    1.741968] secure_tcpv6_sequence_number_siphash# cycles: 67825362
[    1.762048] secure_tcp_sequence_number_siphash# cycles: 67485526

32-bit x86:
[    1.600012] secure_tcpv6_sequence_number_md5# cycles: 103227892
[    1.634219] secure_tcp_sequence_number_md5# cycles: 94732544
[    1.669102] secure_tcpv6_sequence_number_siphash# cycles: 96299384
[    1.700165] secure_tcp_sequence_number_siphash# cycles: 86015473

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Miller <davem@davemloft.net>
Cc: David Laight <David.Laight@aculab.com>
Cc: Tom Herbert <tom@herbertland.com>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
---
 net/core/secure_seq.c | 135 ++++++++++++++++++++------------------------------
 1 file changed, 54 insertions(+), 81 deletions(-)

diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c
index 88a8e429fc3e..3dc2689bcc64 100644
--- a/net/core/secure_seq.c
+++ b/net/core/secure_seq.c
@@ -1,3 +1,5 @@
+/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. */
+
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/cryptohash.h>
@@ -8,14 +10,14 @@
 #include <linux/ktime.h>
 #include <linux/string.h>
 #include <linux/net.h>
-
+#include <linux/siphash.h>
 #include <net/secure_seq.h>
 
 #if IS_ENABLED(CONFIG_IPV6) || IS_ENABLED(CONFIG_INET)
+#include <linux/in6.h>
 #include <net/tcp.h>
-#define NET_SECRET_SIZE (MD5_MESSAGE_BYTES / 4)
 
-static u32 net_secret[NET_SECRET_SIZE] ____cacheline_aligned;
+static siphash_key_t net_secret;
 
 static __always_inline void net_secret_init(void)
 {
@@ -44,80 +46,65 @@ static u32 seq_scale(u32 seq)
 u32 secure_tcpv6_sequence_number(const __be32 *saddr, const __be32 *daddr,
 				 __be16 sport, __be16 dport, u32 *tsoff)
 {
-	u32 secret[MD5_MESSAGE_BYTES / 4];
-	u32 hash[MD5_DIGEST_WORDS];
-	u32 i;
-
+	const struct {
+		struct in6_addr saddr;
+		struct in6_addr daddr;
+		__be16 sport;
+		__be16 dport;
+	} __aligned(SIPHASH_ALIGNMENT) combined = {
+		.saddr = *(struct in6_addr *)saddr,
+		.daddr = *(struct in6_addr *)daddr,
+		.sport = sport,
+		.dport = dport
+	};
+	u64 hash;
 	net_secret_init();
-	memcpy(hash, saddr, 16);
-	for (i = 0; i < 4; i++)
-		secret[i] = net_secret[i] + (__force u32)daddr[i];
-	secret[4] = net_secret[4] +
-		(((__force u16)sport << 16) + (__force u16)dport);
-	for (i = 5; i < MD5_MESSAGE_BYTES / 4; i++)
-		secret[i] = net_secret[i];
-
-	md5_transform(hash, secret);
-
-	*tsoff = sysctl_tcp_timestamps == 1 ? hash[1] : 0;
-	return seq_scale(hash[0]);
+	hash = siphash(&combined, offsetofend(typeof(combined), dport), net_secret);
+	*tsoff = sysctl_tcp_timestamps == 1 ? (hash >> 32) : 0;
+	return seq_scale(hash);
 }
 EXPORT_SYMBOL(secure_tcpv6_sequence_number);
 
 u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
 			       __be16 dport)
 {
-	u32 secret[MD5_MESSAGE_BYTES / 4];
-	u32 hash[MD5_DIGEST_WORDS];
-	u32 i;
-
+	const struct {
+		struct in6_addr saddr;
+		struct in6_addr daddr;
+		__be16 dport;
+	} __aligned(SIPHASH_ALIGNMENT) combined = {
+		.saddr = *(struct in6_addr *)saddr,
+		.daddr = *(struct in6_addr *)daddr,
+		.dport = dport
+	};
 	net_secret_init();
-	memcpy(hash, saddr, 16);
-	for (i = 0; i < 4; i++)
-		secret[i] = net_secret[i] + (__force u32) daddr[i];
-	secret[4] = net_secret[4] + (__force u32)dport;
-	for (i = 5; i < MD5_MESSAGE_BYTES / 4; i++)
-		secret[i] = net_secret[i];
-
-	md5_transform(hash, secret);
-
-	return hash[0];
+	return siphash(&combined, offsetofend(typeof(combined), dport), net_secret);
 }
 EXPORT_SYMBOL(secure_ipv6_port_ephemeral);
 #endif
 
 #ifdef CONFIG_INET
 
+/* secure_tcp_sequence_number(a, b, 0, d) == secure_ipv4_port_ephemeral(a, b, d),
+ * but fortunately, `sport' cannot be 0 in any circumstances. If this changes,
+ * it would be easy enough to have the former function use siphash_4u32, passing
+ * the arguments as separate u32.
+ */
+
 u32 secure_tcp_sequence_number(__be32 saddr, __be32 daddr,
 			       __be16 sport, __be16 dport, u32 *tsoff)
 {
-	u32 hash[MD5_DIGEST_WORDS];
-
+	u64 hash;
 	net_secret_init();
-	hash[0] = (__force u32)saddr;
-	hash[1] = (__force u32)daddr;
-	hash[2] = ((__force u16)sport << 16) + (__force u16)dport;
-	hash[3] = net_secret[15];
-
-	md5_transform(hash, net_secret);
-
-	*tsoff = sysctl_tcp_timestamps == 1 ? hash[1] : 0;
-	return seq_scale(hash[0]);
+	hash = siphash_3u32(saddr, daddr, (u32)sport << 16 | dport, net_secret);
+	*tsoff = sysctl_tcp_timestamps == 1 ? (hash >> 32) : 0;
+	return seq_scale(hash);
 }
 
 u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
 {
-	u32 hash[MD5_DIGEST_WORDS];
-
 	net_secret_init();
-	hash[0] = (__force u32)saddr;
-	hash[1] = (__force u32)daddr;
-	hash[2] = (__force u32)dport ^ net_secret[14];
-	hash[3] = net_secret[15];
-
-	md5_transform(hash, net_secret);
-
-	return hash[0];
+	return siphash_3u32(saddr, daddr, dport, net_secret);
 }
 EXPORT_SYMBOL_GPL(secure_ipv4_port_ephemeral);
 #endif
@@ -126,21 +113,11 @@ EXPORT_SYMBOL_GPL(secure_ipv4_port_ephemeral);
 u64 secure_dccp_sequence_number(__be32 saddr, __be32 daddr,
 				__be16 sport, __be16 dport)
 {
-	u32 hash[MD5_DIGEST_WORDS];
 	u64 seq;
-
 	net_secret_init();
-	hash[0] = (__force u32)saddr;
-	hash[1] = (__force u32)daddr;
-	hash[2] = ((__force u16)sport << 16) + (__force u16)dport;
-	hash[3] = net_secret[15];
-
-	md5_transform(hash, net_secret);
-
-	seq = hash[0] | (((u64)hash[1]) << 32);
+	seq = siphash_3u32(saddr, daddr, (u32)sport << 16 | dport, net_secret);
 	seq += ktime_get_real_ns();
 	seq &= (1ull << 48) - 1;
-
 	return seq;
 }
 EXPORT_SYMBOL(secure_dccp_sequence_number);
@@ -149,26 +126,22 @@ EXPORT_SYMBOL(secure_dccp_sequence_number);
 u64 secure_dccpv6_sequence_number(__be32 *saddr, __be32 *daddr,
 				  __be16 sport, __be16 dport)
 {
-	u32 secret[MD5_MESSAGE_BYTES / 4];
-	u32 hash[MD5_DIGEST_WORDS];
+	const struct {
+		struct in6_addr saddr;
+		struct in6_addr daddr;
+		__be16 sport;
+		__be16 dport;
+	} __aligned(SIPHASH_ALIGNMENT) combined = {
+		.saddr = *(struct in6_addr *)saddr,
+		.daddr = *(struct in6_addr *)daddr,
+		.sport = sport,
+		.dport = dport
+	};
 	u64 seq;
-	u32 i;
-
 	net_secret_init();
-	memcpy(hash, saddr, 16);
-	for (i = 0; i < 4; i++)
-		secret[i] = net_secret[i] + (__force u32)daddr[i];
-	secret[4] = net_secret[4] +
-		(((__force u16)sport << 16) + (__force u16)dport);
-	for (i = 5; i < MD5_MESSAGE_BYTES / 4; i++)
-		secret[i] = net_secret[i];
-
-	md5_transform(hash, secret);
-
-	seq = hash[0] | (((u64)hash[1]) << 32);
+	seq = siphash(&combined, offsetofend(typeof(combined), dport), net_secret);
 	seq += ktime_get_real_ns();
 	seq &= (1ull << 48) - 1;
-
 	return seq;
 }
 EXPORT_SYMBOL(secure_dccpv6_sequence_number);
-- 
2.11.0


* [PATCH net-next 4/4] syncookies: use SipHash in place of SHA1
  2017-01-06 18:37 [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
  2017-01-06 18:37 ` [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables Jason A. Donenfeld
  2017-01-06 18:37 ` [PATCH net-next 3/4] secure_seq: use SipHash in place of MD5 Jason A. Donenfeld
@ 2017-01-06 18:37 ` Jason A. Donenfeld
  2017-01-06 19:57 ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF David Miller
  3 siblings, 0 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 18:37 UTC (permalink / raw)
  To: davem, netdev, linux-kernel; +Cc: Jason A. Donenfeld, Eric Dumazet

SHA1 is slower and less secure than SipHash, and so replacing syncookie
generation with SipHash makes natural sense. Some BSDs have been doing
this for several years in fact.

The speedup should be similar to -- if not more impressive than -- the
speedup from the sequence number fix in this series.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>
---
 net/ipv4/syncookies.c | 20 ++++----------------
 net/ipv6/syncookies.c | 37 ++++++++++++++++---------------------
 2 files changed, 20 insertions(+), 37 deletions(-)

diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index 3e88467d70ee..03bb068f8888 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -13,13 +13,13 @@
 #include <linux/tcp.h>
 #include <linux/slab.h>
 #include <linux/random.h>
-#include <linux/cryptohash.h>
+#include <linux/siphash.h>
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <net/tcp.h>
 #include <net/route.h>
 
-static u32 syncookie_secret[2][16-4+SHA_DIGEST_WORDS] __read_mostly;
+static siphash_key_t syncookie_secret[2] __read_mostly;
 
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
@@ -48,24 +48,12 @@ static u32 syncookie_secret[2][16-4+SHA_DIGEST_WORDS] __read_mostly;
 #define TSBITS	6
 #define TSMASK	(((__u32)1 << TSBITS) - 1)
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], ipv4_cookie_scratch);
-
 static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
 		       u32 count, int c)
 {
-	__u32 *tmp;
-
 	net_get_random_once(syncookie_secret, sizeof(syncookie_secret));
-
-	tmp  = this_cpu_ptr(ipv4_cookie_scratch);
-	memcpy(tmp + 4, syncookie_secret[c], sizeof(syncookie_secret[c]));
-	tmp[0] = (__force u32)saddr;
-	tmp[1] = (__force u32)daddr;
-	tmp[2] = ((__force u32)sport << 16) + (__force u32)dport;
-	tmp[3] = count;
-	sha_transform(tmp + 16, (__u8 *)tmp, tmp + 16 + 5);
-
-	return tmp[17];
+	return siphash_4u32(saddr, daddr, (u32)sport << 16 | dport, count,
+			    syncookie_secret[c]);
 }
 
 
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index a4d49760bf43..be51fc0d99ad 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -16,7 +16,7 @@
 
 #include <linux/tcp.h>
 #include <linux/random.h>
-#include <linux/cryptohash.h>
+#include <linux/siphash.h>
 #include <linux/kernel.h>
 #include <net/ipv6.h>
 #include <net/tcp.h>
@@ -24,7 +24,7 @@
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static u32 syncookie6_secret[2][16-4+SHA_DIGEST_WORDS] __read_mostly;
+static siphash_key_t syncookie6_secret[2] __read_mostly;
 
 /* RFC 2460, Section 8.3:
  * [ipv6 tcp] MSS must be computed as the maximum packet size minus 60 [..]
@@ -41,30 +41,25 @@ static __u16 const msstab[] = {
 	9000 - 60,
 };
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], ipv6_cookie_scratch);
-
 static u32 cookie_hash(const struct in6_addr *saddr, const struct in6_addr *daddr,
 		       __be16 sport, __be16 dport, u32 count, int c)
 {
-	__u32 *tmp;
+	const struct {
+		struct in6_addr saddr;
+		struct in6_addr daddr;
+		u32 count;
+		u16 sport;
+		u16 dport;
+	} __aligned(SIPHASH_ALIGNMENT) combined = {
+		.saddr = *saddr,
+		.daddr = *daddr,
+		.count = count,
+		.sport = sport,
+		.dport = dport
+	};
 
 	net_get_random_once(syncookie6_secret, sizeof(syncookie6_secret));
-
-	tmp  = this_cpu_ptr(ipv6_cookie_scratch);
-
-	/*
-	 * we have 320 bits of information to hash, copy in the remaining
-	 * 192 bits required for sha_transform, from the syncookie6_secret
-	 * and overwrite the digest with the secret
-	 */
-	memcpy(tmp + 10, syncookie6_secret[c], 44);
-	memcpy(tmp, saddr, 16);
-	memcpy(tmp + 4, daddr, 16);
-	tmp[8] = ((__force u32)sport << 16) + (__force u32)dport;
-	tmp[9] = count;
-	sha_transform(tmp + 16, (__u8 *)tmp, tmp + 16 + 5);
-
-	return tmp[17];
+	return siphash(&combined, offsetofend(typeof(combined), dport), syncookie6_secret[c]);
 }
 
 static __u32 secure_tcp_syn_cookie(const struct in6_addr *saddr,
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-06 18:37 [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
                   ` (2 preceding siblings ...)
  2017-01-06 18:37 ` [PATCH net-next 4/4] syncookies: use SipHash in place of SHA1 Jason A. Donenfeld
@ 2017-01-06 19:57 ` David Miller
  2017-01-06 20:03   ` Jason A. Donenfeld
  2017-01-06 20:10   ` [PATCH net-next 0/4] Introduce The SipHash PRF Jason A. Donenfeld
  3 siblings, 2 replies; 18+ messages in thread
From: David Miller @ 2017-01-06 19:57 UTC (permalink / raw)
  To: Jason
  Cc: netdev, linux-kernel, jeanphilippe.aumasson, torvalds, ebiggers3,
	David.Laight, eric.dumazet


Proper patch series submissions require a header "[PATCH 0/N] ..." posting
explaining, at a high level, what the series is doing, how it is doing it,
and why it is doing it that way.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-06 19:57 ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF David Miller
@ 2017-01-06 20:03   ` Jason A. Donenfeld
  2017-01-06 20:10   ` [PATCH net-next 0/4] Introduce The SipHash PRF Jason A. Donenfeld
  1 sibling, 0 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 20:03 UTC (permalink / raw)
  To: David Miller
  Cc: Netdev, LKML, Jean-Philippe Aumasson, Linus Torvalds,
	Eric Biggers, David Laight, Eric Dumazet

Will resubmit. Sorry -- I had this in an earlier series and dropped it in
this one. Give me 30 minutes and you'll have a beautiful and conformant
patch series.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH net-next 0/4] Introduce The SipHash PRF
  2017-01-06 19:57 ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF David Miller
  2017-01-06 20:03   ` Jason A. Donenfeld
@ 2017-01-06 20:10   ` Jason A. Donenfeld
  2017-01-06 20:10     ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
                       ` (3 more replies)
  1 sibling, 4 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 20:10 UTC (permalink / raw)
  To: davem, netdev, linux-kernel; +Cc: Jason A. Donenfeld

This patch series introduces SipHash into the kernel. SipHash is a
cryptographically secure PRF, which serves a variety of functions, and is
introduced in patch #1. The following patch #2 introduces HalfSipHash,
an optimization suitable for hash tables only. Finally, the last two patches
in this series show two usages of the introduced siphash function family.
It is expected that after this initial introduction, other usages will follow.

Please read the extensive descriptions in patch #1 and patch #2 of what these
functions do and the various levels of assurance they provide. They're products of intense
cryptographic research, and I believe they're suitable for the uses outlined
herein.

The use of SipHash is not limited to the networking subsystem -- indeed I
would like to use it elsewhere in the kernel too. But after discussing
with a few on this list and at Linus' suggestion, the initial import of these
functions is coming through the networking tree. After these are merged, it
will then be easier to expand use elsewhere.

Jason A. Donenfeld (4):
  siphash: add cryptographically secure PRF
  siphash: implement HalfSipHash1-3 for hash tables
  secure_seq: use SipHash in place of MD5
  syncookies: use SipHash in place of SHA1

 Documentation/siphash.txt | 154 +++++++++++++
 MAINTAINERS               |   7 +
 include/linux/siphash.h   | 133 +++++++++++
 lib/Kconfig.debug         |   6 +-
 lib/Makefile              |   5 +-
 lib/siphash.c             | 548 ++++++++++++++++++++++++++++++++++++++++++++++
 lib/test_siphash.c        | 208 ++++++++++++++++++
 net/core/secure_seq.c     | 135 +++++-------
 net/ipv4/syncookies.c     |  20 +-
 net/ipv6/syncookies.c     |  37 ++--
 10 files changed, 1130 insertions(+), 123 deletions(-)
 create mode 100644 Documentation/siphash.txt
 create mode 100644 include/linux/siphash.h
 create mode 100644 lib/siphash.c
 create mode 100644 lib/test_siphash.c

-- 
2.11.0

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-06 20:10   ` [PATCH net-next 0/4] Introduce The SipHash PRF Jason A. Donenfeld
@ 2017-01-06 20:10     ` Jason A. Donenfeld
  2017-01-06 20:32       ` Jean-Philippe Aumasson
  2017-01-07  4:04       ` Eric Biggers
  2017-01-06 20:10     ` [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables Jason A. Donenfeld
                       ` (2 subsequent siblings)
  3 siblings, 2 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 20:10 UTC (permalink / raw)
  To: davem, netdev, linux-kernel
  Cc: Jason A. Donenfeld, Jean-Philippe Aumasson, Linus Torvalds,
	Eric Biggers, David Laight, Eric Dumazet

SipHash is a 64-bit keyed hash function that is actually a
cryptographically secure PRF, like HMAC. Except SipHash is super fast,
and is meant to be used as a hashtable keyed lookup function, or as a
general PRF for short input use cases, such as sequence numbers or RNG
chaining.

For the first usage:

There are a variety of attacks known as "hashtable poisoning" in which an
attacker forms some data such that the hash of that data will be the
same, and then proceeds to fill up all entries of a hashbucket. This is
a realistic and well-known denial-of-service vector. Currently
hashtables use jhash, which is fast but not secure, and some kind of
rotating key scheme (or none at all, which isn't good). SipHash is meant
as a replacement for jhash in these cases.
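
To illustrate the difference (a sketch only; TABLE_BITS and the fixed
initval are placeholders, not code from this patch): with an unkeyed hash,
an attacker can precompute colliding inputs offline and pin every insertion
to a single bucket, degrading O(1) lookups to O(n), while with a keyed hash
the collisions cannot be predicted without the secret key:

	/* Vulnerable: the hash is fully determined by public inputs. */
	u32 bucket = jhash(input, len, 0xdeadbeef) & ((1 << TABLE_BITS) - 1);

	/* Resistant: key is random, per-boot or per-table, and secret. */
	u32 bucket = siphash(input, len, key) & ((1 << TABLE_BITS) - 1);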

There are a number of places in the kernel that are vulnerable to
hashtable poisoning attacks, either via userspace vectors or network
vectors, and there's not a reliable mechanism inside the kernel at the
moment to fix it. The first step toward fixing these issues is actually
getting a secure primitive into the kernel for developers to use. Then
we can, bit by bit, port things over to it as deemed appropriate.

While SipHash is extremely fast for a cryptographically secure function,
it is likely a bit slower than the insecure jhash, and so replacements
will be evaluated on a case-by-case basis based on whether or not the
difference in speed is negligible and whether or not the current jhash usage
poses a real security risk.

For the second usage:

A few places in the kernel are using MD5 or SHA1 for creating secure
sequence numbers, syn cookies, port numbers, or fast random numbers.
SipHash is a faster, more fitting, and more secure replacement for MD5 and
SHA1 in those situations. Replacing them with SipHash for these uses is
obvious and straightforward, and so is submitted along with this patch
series. There shouldn't be much of a debate over its efficacy.

Dozens of languages are already using this internally for their hash
tables and PRFs. Some of the BSDs already use this in their kernels.
SipHash is a widely known high-speed solution to a widely known set of
problems, and it's time we caught up.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
---
 Documentation/siphash.txt |  79 ++++++++++++++++
 MAINTAINERS               |   7 ++
 include/linux/siphash.h   |  79 ++++++++++++++++
 lib/Kconfig.debug         |   6 +-
 lib/Makefile              |   5 +-
 lib/siphash.c             | 232 ++++++++++++++++++++++++++++++++++++++++++++++
 lib/test_siphash.c        | 119 ++++++++++++++++++++++++
 7 files changed, 522 insertions(+), 5 deletions(-)
 create mode 100644 Documentation/siphash.txt
 create mode 100644 include/linux/siphash.h
 create mode 100644 lib/siphash.c
 create mode 100644 lib/test_siphash.c

diff --git a/Documentation/siphash.txt b/Documentation/siphash.txt
new file mode 100644
index 000000000000..39ff7f0438e7
--- /dev/null
+++ b/Documentation/siphash.txt
@@ -0,0 +1,79 @@
+         SipHash - a short input PRF
+-----------------------------------------------
+Written by Jason A. Donenfeld <jason@zx2c4.com>
+
+SipHash is a cryptographically secure PRF -- a keyed hash function -- that
+performs very well for short inputs, hence the name. It was designed by
+cryptographers Daniel J. Bernstein and Jean-Philippe Aumasson. It is intended
+as a replacement for some uses of: `jhash`, `md5_transform`, `sha_transform`,
+and so forth.
+
+SipHash takes a secret key filled with randomly generated numbers and either
+an input buffer or several input integers. It spits out an integer that is
+indistinguishable from random. You may then use that integer as part of secure
+sequence numbers, secure cookies, or mask it off for use in a hash table.
+
+1. Generating a key
+
+Keys should always be generated from a cryptographically secure source of
+random numbers, either using get_random_bytes or get_random_once:
+
+siphash_key_t key;
+get_random_bytes(key, sizeof(key));
+
+If you're not deriving your key from here, you're doing it wrong.
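+
+For keys that should be initialized lazily, on first use, the get_random_once
+helper fits the same pattern. A minimal sketch (hash_something is an
+illustrative caller, not a kernel function):
+
+static siphash_key_t key __read_mostly;
+
+u64 hash_something(const void *data, size_t len)
+{
+	get_random_once(key, sizeof(key));
+	return siphash(data, len, key);
+}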
+
+2. Using the functions
+
+There are two variants of the function, one that takes a list of integers, and
+one that takes a buffer:
+
+u64 siphash(const void *data, size_t len, siphash_key_t key);
+
+And:
+
+u64 siphash_1u64(u64, siphash_key_t key);
+u64 siphash_2u64(u64, u64, siphash_key_t key);
+u64 siphash_3u64(u64, u64, u64, siphash_key_t key);
+u64 siphash_4u64(u64, u64, u64, u64, siphash_key_t key);
+u64 siphash_1u32(u32, siphash_key_t key);
+u64 siphash_2u32(u32, u32, siphash_key_t key);
+u64 siphash_3u32(u32, u32, u32, siphash_key_t key);
+u64 siphash_4u32(u32, u32, u32, u32, siphash_key_t key);
+
+If you pass the generic siphash function something of a constant length, it
+will constant fold at compile-time and automatically choose one of the
+optimized functions.
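+
+For example, given two u64 values x and y, hashing them as a 16-byte object
+reduces at compile time to a direct call to the 2u64 variant, with no
+run-time length dispatch (a sketch):
+
+struct {
+	u64 a;
+	u64 b;
+} combined = { .a = x, .b = y };
+u64 hash = siphash(&combined, sizeof(combined), key);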
+
+3. Hashtable key function usage:
+
+struct some_hashtable {
+	DECLARE_HASHTABLE(hashtable, 8);
+	siphash_key_t key;
+};
+
+void init_hashtable(struct some_hashtable *table)
+{
+	get_random_bytes(table->key, sizeof(table->key));
+}
+
+static inline struct hlist_head *some_hashtable_bucket(struct some_hashtable *table, struct interesting_input *input)
+{
+	return &table->hashtable[siphash(input, sizeof(*input), table->key) & (HASH_SIZE(table->hashtable) - 1)];
+}
+
+You may then iterate like usual over the returned hash bucket.
+
+4. Security
+
+SipHash has a very high security margin, with its 128-bit key. So long as the
+key is kept secret, it is computationally infeasible for an attacker to guess
+the outputs of the function, even after observing many of them, since the
+2^128 possible keys cannot be brute-forced.
+
+Linux implements the "2-4" variant of SipHash: 2 compression rounds per
+64-bit input word and 4 finalization rounds.
+
+5. Resources
+
+Read the SipHash paper if you're interested in learning more:
+https://131002.net/siphash/siphash.pdf
diff --git a/MAINTAINERS b/MAINTAINERS
index cfff2c9e3d94..e1384ae37344 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11291,6 +11291,13 @@ F:	arch/arm/mach-s3c24xx/mach-bast.c
 F:	arch/arm/mach-s3c24xx/bast-ide.c
 F:	arch/arm/mach-s3c24xx/bast-irq.c
 
+SIPHASH PRF ROUTINES
+M:	Jason A. Donenfeld <Jason@zx2c4.com>
+S:	Maintained
+F:	lib/siphash.c
+F:	lib/test_siphash.c
+F:	include/linux/siphash.h
+
 TI DAVINCI MACHINE SUPPORT
 M:	Sekhar Nori <nsekhar@ti.com>
 M:	Kevin Hilman <khilman@kernel.org>
diff --git a/include/linux/siphash.h b/include/linux/siphash.h
new file mode 100644
index 000000000000..7aa666eb00d9
--- /dev/null
+++ b/include/linux/siphash.h
@@ -0,0 +1,79 @@
+/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ *
+ * This file is provided under a dual BSD/GPLv2 license.
+ *
+ * SipHash: a fast short-input PRF
+ * https://131002.net/siphash/
+ *
+ * This implementation is specifically for SipHash2-4.
+ */
+
+#ifndef _LINUX_SIPHASH_H
+#define _LINUX_SIPHASH_H
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#define SIPHASH_ALIGNMENT __alignof__(u64)
+typedef u64 siphash_key_t[2];
+
+u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t key);
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t key);
+#endif
+
+u64 siphash_1u64(const u64 a, const siphash_key_t key);
+u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t key);
+u64 siphash_3u64(const u64 a, const u64 b, const u64 c,
+		 const siphash_key_t key);
+u64 siphash_4u64(const u64 a, const u64 b, const u64 c, const u64 d,
+		 const siphash_key_t key);
+u64 siphash_1u32(const u32 a, const siphash_key_t key);
+u64 siphash_3u32(const u32 a, const u32 b, const u32 c, const siphash_key_t key);
+
+static inline u64 siphash_2u32(const u32 a, const u32 b, const siphash_key_t key)
+{
+	return siphash_1u64((u64)b << 32 | a, key);
+}
+static inline u64 siphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
+			       const siphash_key_t key)
+{
+	return siphash_2u64((u64)b << 32 | a, (u64)d << 32 | c, key);
+}
+
+static inline u64 ___siphash_aligned(const __le64 *data, size_t len, const siphash_key_t key)
+{
+	if (__builtin_constant_p(len) && len == 4)
+		return siphash_1u32(le32_to_cpu(data[0]), key);
+	if (__builtin_constant_p(len) && len == 8)
+		return siphash_1u64(le64_to_cpu(data[0]), key);
+	if (__builtin_constant_p(len) && len == 16)
+		return siphash_2u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
+				    key);
+	if (__builtin_constant_p(len) && len == 24)
+		return siphash_3u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
+				    le64_to_cpu(data[2]), key);
+	if (__builtin_constant_p(len) && len == 32)
+		return siphash_4u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
+				    le64_to_cpu(data[2]), le64_to_cpu(data[3]),
+				    key);
+	return __siphash_aligned(data, len, key);
+}
+
+/**
+ * siphash - compute 64-bit siphash PRF value
+ * @data: buffer to hash
+ * @len: size of @data
+ * @key: the siphash key
+ */
+static inline u64 siphash(const void *data, size_t len, const siphash_key_t key)
+{
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
+		return __siphash_unaligned(data, len, key);
+#endif
+	return ___siphash_aligned(data, len, key);
+}
+
+#endif /* _LINUX_SIPHASH_H */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b06848a104e6..3d2515a770c3 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1819,9 +1819,9 @@ config TEST_HASH
 	tristate "Perform selftest on hash functions"
 	default n
 	help
-	  Enable this option to test the kernel's integer (<linux/hash,h>)
-	  and string (<linux/stringhash.h>) hash functions on boot
-	  (or module load).
+	  Enable this option to test the kernel's integer (<linux/hash.h>),
+	  string (<linux/stringhash.h>), and siphash (<linux/siphash.h>)
+	  hash functions on boot (or module load).
 
 	  This is intended to help people writing architecture-specific
 	  optimized versions.  If unsure, say N.
diff --git a/lib/Makefile b/lib/Makefile
index bc4073a8cd08..7b3008d58600 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -22,7 +22,8 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 sha1.o chacha20.o md5.o irq_regs.o argv_split.o \
 	 flex_proportions.o ratelimit.o show_mem.o \
 	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
-	 earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o win_minmax.o
+	 earlycpio.o seq_buf.o siphash.o \
+	 nmi_backtrace.o nodemask.o win_minmax.o
 
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
@@ -44,7 +45,7 @@ obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
 obj-y += kstrtox.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
-obj-$(CONFIG_TEST_HASH) += test_hash.o
+obj-$(CONFIG_TEST_HASH) += test_hash.o test_siphash.o
 obj-$(CONFIG_TEST_KASAN) += test_kasan.o
 obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
 obj-$(CONFIG_TEST_LKM) += test_module.o
diff --git a/lib/siphash.c b/lib/siphash.c
new file mode 100644
index 000000000000..ff2151313667
--- /dev/null
+++ b/lib/siphash.c
@@ -0,0 +1,232 @@
+/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ *
+ * This file is provided under a dual BSD/GPLv2 license.
+ *
+ * SipHash: a fast short-input PRF
+ * https://131002.net/siphash/
+ *
+ * This implementation is specifically for SipHash2-4.
+ */
+
+#include <linux/siphash.h>
+#include <asm/unaligned.h>
+
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+#include <linux/dcache.h>
+#include <asm/word-at-a-time.h>
+#endif
+
+#define SIPROUND \
+	do { \
+	v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
+	v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
+	v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
+	v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
+	} while(0)
+
+#define PREAMBLE(len) \
+	u64 v0 = 0x736f6d6570736575ULL; \
+	u64 v1 = 0x646f72616e646f6dULL; \
+	u64 v2 = 0x6c7967656e657261ULL; \
+	u64 v3 = 0x7465646279746573ULL; \
+	u64 b = ((u64)len) << 56; \
+	v3 ^= key[1]; \
+	v2 ^= key[0]; \
+	v1 ^= key[1]; \
+	v0 ^= key[0];
+
+#define POSTAMBLE \
+	v3 ^= b; \
+	SIPROUND; \
+	SIPROUND; \
+	v0 ^= b; \
+	v2 ^= 0xff; \
+	SIPROUND; \
+	SIPROUND; \
+	SIPROUND; \
+	SIPROUND; \
+	return (v0 ^ v1) ^ (v2 ^ v3);
+
+u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	PREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = le64_to_cpup(data);
+		v3 ^= m;
+		SIPROUND;
+		SIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= le32_to_cpup(data); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= le16_to_cpup(data); break;
+	case 1: b |= end[0];
+	}
+#endif
+	POSTAMBLE
+}
+EXPORT_SYMBOL(__siphash_aligned);
+
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	PREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = get_unaligned_le64(data);
+		v3 ^= m;
+		SIPROUND;
+		SIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= get_unaligned_le32(end); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= get_unaligned_le16(end); break;
+	case 1: b |= end[0];
+	}
+#endif
+	POSTAMBLE
+}
+EXPORT_SYMBOL(__siphash_unaligned);
+#endif
+
+/**
+ * siphash_1u64 - compute 64-bit siphash PRF value of a u64
+ * @first: first u64
+ * @key: the siphash key
+ */
+u64 siphash_1u64(const u64 first, const siphash_key_t key)
+{
+	PREAMBLE(8)
+	v3 ^= first;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= first;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_1u64);
+
+/**
+ * siphash_2u64 - compute 64-bit siphash PRF value of 2 u64
+ * @first: first u64
+ * @second: second u64
+ * @key: the siphash key
+ */
+u64 siphash_2u64(const u64 first, const u64 second, const siphash_key_t key)
+{
+	PREAMBLE(16)
+	v3 ^= first;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= second;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_2u64);
+
+/**
+ * siphash_3u64 - compute 64-bit siphash PRF value of 3 u64
+ * @first: first u64
+ * @second: second u64
+ * @third: third u64
+ * @key: the siphash key
+ */
+u64 siphash_3u64(const u64 first, const u64 second, const u64 third,
+		 const siphash_key_t key)
+{
+	PREAMBLE(24)
+	v3 ^= first;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= third;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_3u64);
+
+/**
+ * siphash_4u64 - compute 64-bit siphash PRF value of 4 u64
+ * @first: first u64
+ * @second: second u64
+ * @third: third u64
+ * @forth: fourth u64
+ * @key: the siphash key
+ */
+u64 siphash_4u64(const u64 first, const u64 second, const u64 third,
+		 const u64 forth, const siphash_key_t key)
+{
+	PREAMBLE(32)
+	v3 ^= first;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= third;
+	v3 ^= forth;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= forth;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_4u64);
+
+u64 siphash_1u32(const u32 first, const siphash_key_t key)
+{
+	PREAMBLE(4)
+	b |= first;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_1u32);
+
+u64 siphash_3u32(const u32 first, const u32 second, const u32 third,
+		 const siphash_key_t key)
+{
+	u64 combined = (u64)second << 32 | first;
+	PREAMBLE(12)
+	v3 ^= combined;
+	SIPROUND;
+	SIPROUND;
+	v0 ^= combined;
+	b |= third;
+	POSTAMBLE
+}
+EXPORT_SYMBOL(siphash_3u32);
diff --git a/lib/test_siphash.c b/lib/test_siphash.c
new file mode 100644
index 000000000000..e0ba2cf8dc67
--- /dev/null
+++ b/lib/test_siphash.c
@@ -0,0 +1,119 @@
+/* Test cases for siphash.c
+ *
+ * Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ *
+ * This file is provided under a dual BSD/GPLv2 license.
+ *
+ * SipHash: a fast short-input PRF
+ * https://131002.net/siphash/
+ *
+ * This implementation is specifically for SipHash2-4.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/siphash.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+
+/* Test vectors taken from official reference source available at:
+ *     https://131002.net/siphash/siphash24.c
+ */
+static const u64 test_vectors[64] = {
+	0x726fdb47dd0e0e31ULL, 0x74f839c593dc67fdULL, 0x0d6c8009d9a94f5aULL,
+	0x85676696d7fb7e2dULL, 0xcf2794e0277187b7ULL, 0x18765564cd99a68dULL,
+	0xcbc9466e58fee3ceULL, 0xab0200f58b01d137ULL, 0x93f5f5799a932462ULL,
+	0x9e0082df0ba9e4b0ULL, 0x7a5dbbc594ddb9f3ULL, 0xf4b32f46226bada7ULL,
+	0x751e8fbc860ee5fbULL, 0x14ea5627c0843d90ULL, 0xf723ca908e7af2eeULL,
+	0xa129ca6149be45e5ULL, 0x3f2acc7f57c29bdbULL, 0x699ae9f52cbe4794ULL,
+	0x4bc1b3f0968dd39cULL, 0xbb6dc91da77961bdULL, 0xbed65cf21aa2ee98ULL,
+	0xd0f2cbb02e3b67c7ULL, 0x93536795e3a33e88ULL, 0xa80c038ccd5ccec8ULL,
+	0xb8ad50c6f649af94ULL, 0xbce192de8a85b8eaULL, 0x17d835b85bbb15f3ULL,
+	0x2f2e6163076bcfadULL, 0xde4daaaca71dc9a5ULL, 0xa6a2506687956571ULL,
+	0xad87a3535c49ef28ULL, 0x32d892fad841c342ULL, 0x7127512f72f27cceULL,
+	0xa7f32346f95978e3ULL, 0x12e0b01abb051238ULL, 0x15e034d40fa197aeULL,
+	0x314dffbe0815a3b4ULL, 0x027990f029623981ULL, 0xcadcd4e59ef40c4dULL,
+	0x9abfd8766a33735cULL, 0x0e3ea96b5304a7d0ULL, 0xad0c42d6fc585992ULL,
+	0x187306c89bc215a9ULL, 0xd4a60abcf3792b95ULL, 0xf935451de4f21df2ULL,
+	0xa9538f0419755787ULL, 0xdb9acddff56ca510ULL, 0xd06c98cd5c0975ebULL,
+	0xe612a3cb9ecba951ULL, 0xc766e62cfcadaf96ULL, 0xee64435a9752fe72ULL,
+	0xa192d576b245165aULL, 0x0a8787bf8ecb74b2ULL, 0x81b3e73d20b49b6fULL,
+	0x7fa8220ba3b2eceaULL, 0x245731c13ca42499ULL, 0xb78dbfaf3a8d83bdULL,
+	0xea1ad565322a1a0bULL, 0x60e61c23a3795013ULL, 0x6606d7e446282b93ULL,
+	0x6ca4ecb15c5f91e1ULL, 0x9f626da15c9625f3ULL, 0xe51b38608ef25f57ULL,
+	0x958a324ceb064572ULL
+};
+static const siphash_key_t test_key =
+	{ 0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL };
+
+static int __init siphash_test_init(void)
+{
+	u8 in[64] __aligned(SIPHASH_ALIGNMENT);
+	u8 in_unaligned[65];
+	u8 i;
+	int ret = 0;
+
+	for (i = 0; i < 64; ++i) {
+		in[i] = i;
+		in_unaligned[i + 1] = i;
+		if (siphash(in, i, test_key) != test_vectors[i]) {
+			pr_info("self-test aligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+		if (siphash(in_unaligned + 1, i, test_key) != test_vectors[i]) {
+			pr_info("self-test unaligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+	}
+	if (siphash_1u64(0x0706050403020100ULL, test_key) != test_vectors[8]) {
+		pr_info("self-test 1u64: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key) != test_vectors[16]) {
+		pr_info("self-test 2u64: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
+			 0x1716151413121110ULL, test_key) != test_vectors[24]) {
+		pr_info("self-test 3u64: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
+			 0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key) != test_vectors[32]) {
+		pr_info("self-test 4u64: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_1u32(0x03020100U, test_key) != test_vectors[4]) {
+		pr_info("self-test 1u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_2u32(0x03020100U, 0x07060504U, test_key) != test_vectors[8]) {
+		pr_info("self-test 2u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_3u32(0x03020100U, 0x07060504U,
+			 0x0b0a0908U, test_key) != test_vectors[12]) {
+		pr_info("self-test 3u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (siphash_4u32(0x03020100U, 0x07060504U,
+			 0x0b0a0908U, 0x0f0e0d0cU, test_key) != test_vectors[16]) {
+		pr_info("self-test 4u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (!ret)
+		pr_info("self-tests: pass\n");
+	return ret;
+}
+
+static void __exit siphash_test_exit(void)
+{
+}
+
+module_init(siphash_test_init);
+module_exit(siphash_test_exit);
+
+MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
+MODULE_LICENSE("Dual BSD/GPL");
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables
  2017-01-06 20:10   ` [PATCH net-next 0/4] Introduce The SipHash PRF Jason A. Donenfeld
  2017-01-06 20:10     ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
@ 2017-01-06 20:10     ` Jason A. Donenfeld
  2017-01-06 20:33       ` Jean-Philippe Aumasson
  2017-01-06 20:10     ` [PATCH net-next 3/4] secure_seq: use SipHash in place of MD5 Jason A. Donenfeld
  2017-01-06 20:10     ` [PATCH net-next 4/4] syncookies: use SipHash in place of SHA1 Jason A. Donenfeld
  3 siblings, 1 reply; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 20:10 UTC (permalink / raw)
  To: davem, netdev, linux-kernel; +Cc: Jason A. Donenfeld, Jean-Philippe Aumasson

HalfSipHash, or hsiphash, is a shortened version of SipHash, which
generates 32-bit outputs using a weaker 64-bit key. It has *much* lower
security margins, and shouldn't be used for anything too sensitive, but
it could be used as a hashtable key function replacement, provided the output
is never exposed and the security requirement is not too high.

The goal is to make this something that performance-critical jhash users
would be willing to use.

On 64-bit machines, HalfSipHash1-3 is actually slower than SipHash1-3, so on
those systems we alias hsiphash() to SipHash1-3.

64-bit x86_64:
[    0.509409] test_siphash:     SipHash2-4 cycles: 4049181
[    0.510650] test_siphash:     SipHash1-3 cycles: 2512884
[    0.512205] test_siphash: HalfSipHash1-3 cycles: 3429920
[    0.512904] test_siphash:    JenkinsHash cycles:  978267
So, we map hsiphash() -> SipHash1-3

32-bit x86:
[    0.509868] test_siphash:     SipHash2-4 cycles: 14812892
[    0.513601] test_siphash:     SipHash1-3 cycles:  9510710
[    0.515263] test_siphash: HalfSipHash1-3 cycles:  3856157
[    0.515952] test_siphash:    JenkinsHash cycles:  1148567
So, we map hsiphash() -> HalfSipHash1-3

hsiphash() is roughly 3 times slower than jhash(), but comes with a
considerable security improvement.
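
The conversion of a typical jhash user is mechanical; sketched below (the
`flow' hashing site and `hashrnd' are illustrative, not from this patch):

	-static u32 hashrnd __read_mostly;
	+static hsiphash_key_t hashrnd __read_mostly;
	 ...
	-	hash = jhash(&flow, sizeof(flow), hashrnd);
	+	hash = hsiphash(&flow, sizeof(flow), hashrnd);

with hashrnd filled at first use by get_random_once(&hashrnd,
sizeof(hashrnd)), just as the u32 seed would have been.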

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
---
 Documentation/siphash.txt |  75 +++++++++++
 include/linux/siphash.h   |  56 +++++++-
 lib/siphash.c             | 318 +++++++++++++++++++++++++++++++++++++++++++++-
 lib/test_siphash.c        | 139 ++++++++++++++++----
 4 files changed, 561 insertions(+), 27 deletions(-)

diff --git a/Documentation/siphash.txt b/Documentation/siphash.txt
index 39ff7f0438e7..f93c1d7104c4 100644
--- a/Documentation/siphash.txt
+++ b/Documentation/siphash.txt
@@ -77,3 +77,78 @@ Linux implements the "2-4" variant of SipHash.
 
 Read the SipHash paper if you're interested in learning more:
 https://131002.net/siphash/siphash.pdf
+
+
+~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
+
+HalfSipHash - SipHash's insecure younger cousin
+-----------------------------------------------
+Written by Jason A. Donenfeld <jason@zx2c4.com>
+
+On the off-chance that SipHash is not fast enough for your needs, you might be
+able to justify using HalfSipHash, a terrifying but potentially useful
+possibility. HalfSipHash cuts SipHash's rounds down from "2-4" to "1-3" and,
+even scarier, uses an easily brute-forceable 64-bit key (with a 32-bit output)
+instead of SipHash's 128-bit key. However, this may appeal to some
+high-performance `jhash` users.
+
+Danger!
+
+Do not ever use HalfSipHash except as a hashtable key function, and only
+then when you can be absolutely certain that the outputs will never be
+transmitted out of the kernel. Its only advantage over `jhash` is the
+mitigation of hashtable flooding denial-of-service attacks.
+
+1. Generating a key
+
+Keys should always be generated from a cryptographically secure source of
+random numbers, either using get_random_bytes or get_random_once:
+
+hsiphash_key_t key;
+get_random_bytes(key, sizeof(key));
+
+If you're not deriving your key from here, you're doing it wrong.
+
+2. Using the functions
+
+There are two variants of the function, one that takes a list of integers, and
+one that takes a buffer:
+
+u32 hsiphash(const void *data, size_t len, hsiphash_key_t key);
+
+And:
+
+u32 hsiphash_1u32(u32, hsiphash_key_t key);
+u32 hsiphash_2u32(u32, u32, hsiphash_key_t key);
+u32 hsiphash_3u32(u32, u32, u32, hsiphash_key_t key);
+u32 hsiphash_4u32(u32, u32, u32, u32, hsiphash_key_t key);
+
+If you pass the generic hsiphash function something of a constant length, it
+will constant fold at compile-time and automatically choose one of the
+optimized functions.
+
+3. Hashtable key function usage:
+
+struct some_hashtable {
+	DECLARE_HASHTABLE(hashtable, 8);
+	hsiphash_key_t key;
+};
+
+void init_hashtable(struct some_hashtable *table)
+{
+	get_random_bytes(table->key, sizeof(table->key));
+}
+
+static inline struct hlist_head *some_hashtable_bucket(struct some_hashtable *table, struct interesting_input *input)
+{
+	return &table->hashtable[hsiphash(input, sizeof(*input), table->key) & (HASH_SIZE(table->hashtable) - 1)];
+}
+
+You may then iterate like usual over the returned hash bucket.
+
+4. Performance
+
+HalfSipHash is roughly 3 times slower than JenkinsHash. For many replacements
+this will not be a problem, as the hashtable lookup usually isn't the
+bottleneck, and in general it is a worthwhile trade for the security and DoS
+resistance that HalfSipHash provides.
diff --git a/include/linux/siphash.h b/include/linux/siphash.h
index 7aa666eb00d9..efab44c654f3 100644
--- a/include/linux/siphash.h
+++ b/include/linux/siphash.h
@@ -5,7 +5,9 @@
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #ifndef _LINUX_SIPHASH_H
@@ -76,4 +78,56 @@ static inline u64 siphash(const void *data, size_t len, const siphash_key_t key)
 	return ___siphash_aligned(data, len, key);
 }
 
+#if BITS_PER_LONG == 64
+typedef siphash_key_t hsiphash_key_t;
+#define HSIPHASH_ALIGNMENT SIPHASH_ALIGNMENT
+#else
+typedef u32 hsiphash_key_t[2];
+#define HSIPHASH_ALIGNMENT __alignof__(u32)
+#endif
+
+u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key);
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key);
+#endif
+
+u32 hsiphash_1u32(const u32 a, const hsiphash_key_t key);
+u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t key);
+u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c,
+		  const hsiphash_key_t key);
+u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
+		  const hsiphash_key_t key);
+
+static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, const hsiphash_key_t key)
+{
+	if (__builtin_constant_p(len) && len == 4)
+		return hsiphash_1u32(le32_to_cpu(data[0]), key);
+	if (__builtin_constant_p(len) && len == 8)
+		return hsiphash_2u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     key);
+	if (__builtin_constant_p(len) && len == 12)
+		return hsiphash_3u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     le32_to_cpu(data[2]), key);
+	if (__builtin_constant_p(len) && len == 16)
+		return hsiphash_4u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
+				     le32_to_cpu(data[2]), le32_to_cpu(data[3]),
+				     key);
+	return __hsiphash_aligned(data, len, key);
+}
+
+/**
+ * hsiphash - compute 32-bit hsiphash PRF value
+ * @data: buffer to hash
+ * @len: size of @data
+ * @key: the hsiphash key
+ */
+static inline u32 hsiphash(const void *data, size_t len, const hsiphash_key_t key)
+{
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
+		return __hsiphash_unaligned(data, len, key);
+#endif
+	return ___hsiphash_aligned(data, len, key);
+}
+
 #endif /* _LINUX_SIPHASH_H */
diff --git a/lib/siphash.c b/lib/siphash.c
index ff2151313667..e2481226d96c 100644
--- a/lib/siphash.c
+++ b/lib/siphash.c
@@ -5,7 +5,9 @@
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #include <linux/siphash.h>
@@ -230,3 +232,317 @@ u64 siphash_3u32(const u32 first, const u32 second, const u32 third,
 	POSTAMBLE
 }
 EXPORT_SYMBOL(siphash_3u32);
+
+#if BITS_PER_LONG == 64
+/* Note that this HalfSipHash1-3 implementation on 64-bit
+ * isn't actually HalfSipHash1-3 but rather SipHash1-3. */
+
+#define HSIPROUND SIPROUND
+#define HPREAMBLE(len) PREAMBLE(len)
+#define HPOSTAMBLE \
+	v3 ^= b; \
+	HSIPROUND; \
+	v0 ^= b; \
+	v2 ^= 0xff; \
+	HSIPROUND; \
+	HSIPROUND; \
+	HSIPROUND; \
+	return (v0 ^ v1) ^ (v2 ^ v3);
+
+u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = le64_to_cpup(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= le32_to_cpup(data); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= le16_to_cpup(data); break;
+	case 1: b |= end[0];
+	}
+#endif
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_aligned);
+
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u64));
+	const u8 left = len & (sizeof(u64) - 1);
+	u64 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u64)) {
+		m = get_unaligned_le64(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
+	if (left)
+		b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
+						  bytemask_from_count(left)));
+#else
+	switch (left) {
+	case 7: b |= ((u64)end[6]) << 48;
+	case 6: b |= ((u64)end[5]) << 40;
+	case 5: b |= ((u64)end[4]) << 32;
+	case 4: b |= get_unaligned_le32(end); break;
+	case 3: b |= ((u64)end[2]) << 16;
+	case 2: b |= get_unaligned_le16(end); break;
+	case 1: b |= end[0];
+	}
+#endif
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_unaligned);
+#endif
+
+/**
+ * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+ * @first: first u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_1u32(const u32 first, const hsiphash_key_t key)
+{
+	HPREAMBLE(4)
+	b |= first;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_1u32);
+
+/**
+ * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
+ * @first: first u32
+ * @second: second u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(8)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_2u32);
+
+/**
+ * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
+		  const hsiphash_key_t key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(12)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	b |= third;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_3u32);
+
+/**
+ * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @forth: fourth u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
+		  const u32 forth, const hsiphash_key_t key)
+{
+	u64 combined = (u64)second << 32 | first;
+	HPREAMBLE(16)
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	combined = (u64)forth << 32 | third;
+	v3 ^= combined;
+	HSIPROUND;
+	v0 ^= combined;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_4u32);
+#else
+#define HSIPROUND \
+	do { \
+	v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
+	v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
+	v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
+	v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
+	} while(0)
+
+#define HPREAMBLE(len) \
+	u32 v0 = 0; \
+	u32 v1 = 0; \
+	u32 v2 = 0x6c796765U; \
+	u32 v3 = 0x74656462U; \
+	u32 b = ((u32)len) << 24; \
+	v3 ^= key[1]; \
+	v2 ^= key[0]; \
+	v1 ^= key[1]; \
+	v0 ^= key[0];
+
+#define HPOSTAMBLE \
+	v3 ^= b; \
+	HSIPROUND; \
+	v0 ^= b; \
+	v2 ^= 0xff; \
+	HSIPROUND; \
+	HSIPROUND; \
+	HSIPROUND; \
+	return v1 ^ v3;
+
+u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u32));
+	const u8 left = len & (sizeof(u32) - 1);
+	u32 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u32)) {
+		m = le32_to_cpup(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+	switch (left) {
+	case 3: b |= ((u32)end[2]) << 16;
+	case 2: b |= le16_to_cpup(data); break;
+	case 1: b |= end[0];
+	}
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_aligned);
+
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key)
+{
+	const u8 *end = data + len - (len % sizeof(u32));
+	const u8 left = len & (sizeof(u32) - 1);
+	u32 m;
+	HPREAMBLE(len)
+	for (; data != end; data += sizeof(u32)) {
+		m = get_unaligned_le32(data);
+		v3 ^= m;
+		HSIPROUND;
+		v0 ^= m;
+	}
+	switch (left) {
+	case 3: b |= ((u32)end[2]) << 16;
+	case 2: b |= get_unaligned_le16(end); break;
+	case 1: b |= end[0];
+	}
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(__hsiphash_unaligned);
+#endif
+
+/**
+ * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+ * @first: first u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_1u32(const u32 first, const hsiphash_key_t key)
+{
+	HPREAMBLE(4)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_1u32);
+
+/**
+ * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
+ * @first: first u32
+ * @second: second u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t key)
+{
+	HPREAMBLE(8)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_2u32);
+
+/**
+ * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
+		  const hsiphash_key_t key)
+{
+	HPREAMBLE(12)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	HSIPROUND;
+	v0 ^= third;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_3u32);
+
+/**
+ * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
+ * @first: first u32
+ * @second: second u32
+ * @third: third u32
+ * @forth: fourth u32
+ * @key: the hsiphash key
+ */
+u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
+		  const u32 forth, const hsiphash_key_t key)
+{
+	HPREAMBLE(16)
+	v3 ^= first;
+	HSIPROUND;
+	v0 ^= first;
+	v3 ^= second;
+	HSIPROUND;
+	v0 ^= second;
+	v3 ^= third;
+	HSIPROUND;
+	v0 ^= third;
+	v3 ^= forth;
+	HSIPROUND;
+	v0 ^= forth;
+	HPOSTAMBLE
+}
+EXPORT_SYMBOL(hsiphash_4u32);
+#endif
diff --git a/lib/test_siphash.c b/lib/test_siphash.c
index e0ba2cf8dc67..ac291ec27fb6 100644
--- a/lib/test_siphash.c
+++ b/lib/test_siphash.c
@@ -7,7 +7,9 @@
  * SipHash: a fast short-input PRF
  * https://131002.net/siphash/
  *
- * This implementation is specifically for SipHash2-4.
+ * This implementation is specifically for SipHash2-4 for a secure PRF
+ * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
+ * hashtables.
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -18,10 +20,16 @@
 #include <linux/errno.h>
 #include <linux/module.h>
 
-/* Test vectors taken from official reference source available at:
- *     https://131002.net/siphash/siphash24.c
+/* Test vectors taken from reference source available at:
+ *     https://github.com/veorq/SipHash
  */
-static const u64 test_vectors[64] = {
+
+static const siphash_key_t test_key_siphash =
+	{ 0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL };
+
+static const u64 test_vectors_siphash[64] = {
 	0x726fdb47dd0e0e31ULL, 0x74f839c593dc67fdULL, 0x0d6c8009d9a94f5aULL,
 	0x85676696d7fb7e2dULL, 0xcf2794e0277187b7ULL, 0x18765564cd99a68dULL,
 	0xcbc9466e58fee3ceULL, 0xab0200f58b01d137ULL, 0x93f5f5799a932462ULL,
@@ -45,9 +53,64 @@ static const u64 test_vectors[64] = {
 	0x6ca4ecb15c5f91e1ULL, 0x9f626da15c9625f3ULL, 0xe51b38608ef25f57ULL,
 	0x958a324ceb064572ULL
 };
-static const siphash_key_t test_key =
+#if BITS_PER_LONG == 64
+static const hsiphash_key_t test_key_hsiphash =
 	{ 0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL };
 
+static const u32 test_vectors_hsiphash[64] = {
+	0x050fc4dcU, 0x7d57ca93U, 0x4dc7d44dU,
+	0xe7ddf7fbU, 0x88d38328U, 0x49533b67U,
+	0xc59f22a7U, 0x9bb11140U, 0x8d299a8eU,
+	0x6c063de4U, 0x92ff097fU, 0xf94dc352U,
+	0x57b4d9a2U, 0x1229ffa7U, 0xc0f95d34U,
+	0x2a519956U, 0x7d908b66U, 0x63dbd80cU,
+	0xb473e63eU, 0x8d297d1cU, 0xa6cce040U,
+	0x2b45f844U, 0xa320872eU, 0xdae6c123U,
+	0x67349c8cU, 0x705b0979U, 0xca9913a5U,
+	0x4ade3b35U, 0xef6cd00dU, 0x4ab1e1f4U,
+	0x43c5e663U, 0x8c21d1bcU, 0x16a7b60dU,
+	0x7a8ff9bfU, 0x1f2a753eU, 0xbf186b91U,
+	0xada26206U, 0xa3c33057U, 0xae3a36a1U,
+	0x7b108392U, 0x99e41531U, 0x3f1ad944U,
+	0xc8138825U, 0xc28949a6U, 0xfaf8876bU,
+	0x9f042196U, 0x68b1d623U, 0x8b5114fdU,
+	0xdf074c46U, 0x12cc86b3U, 0x0a52098fU,
+	0x9d292f9aU, 0xa2f41f12U, 0x43a71ed0U,
+	0x73f0bce6U, 0x70a7e980U, 0x243c6d75U,
+	0xfdb71513U, 0xa67d8a08U, 0xb7e8f148U,
+	0xf7a644eeU, 0x0f1837f2U, 0x4b6694e0U,
+	0xb7bbb3a8U
+};
+#else
+static const hsiphash_key_t test_key_hsiphash =
+	{ 0x03020100U, 0x07060504U };
+
+static const u32 test_vectors_hsiphash[64] = {
+	0x5814c896U, 0xe7e864caU, 0xbc4b0e30U,
+	0x01539939U, 0x7e059ea6U, 0x88e3d89bU,
+	0xa0080b65U, 0x9d38d9d6U, 0x577999b1U,
+	0xc839caedU, 0xe4fa32cfU, 0x959246eeU,
+	0x6b28096cU, 0x66dd9cd6U, 0x16658a7cU,
+	0xd0257b04U, 0x8b31d501U, 0x2b1cd04bU,
+	0x06712339U, 0x522aca67U, 0x911bb605U,
+	0x90a65f0eU, 0xf826ef7bU, 0x62512debU,
+	0x57150ad7U, 0x5d473507U, 0x1ec47442U,
+	0xab64afd3U, 0x0a4100d0U, 0x6d2ce652U,
+	0x2331b6a3U, 0x08d8791aU, 0xbc6dda8dU,
+	0xe0f6c934U, 0xb0652033U, 0x9b9851ccU,
+	0x7c46fb7fU, 0x732ba8cbU, 0xf142997aU,
+	0xfcc9aa1bU, 0x05327eb2U, 0xe110131cU,
+	0xf9e5e7c0U, 0xa7d708a6U, 0x11795ab1U,
+	0x65671619U, 0x9f5fff91U, 0xd89c5267U,
+	0x007783ebU, 0x95766243U, 0xab639262U,
+	0x9c7e1390U, 0xc368dda6U, 0x38ddc455U,
+	0xfa13d379U, 0x979ea4e8U, 0x53ecd77eU,
+	0x2ee80657U, 0x33dbb66aU, 0xae3f0577U,
+	0x88b4c4ccU, 0x3e7f480bU, 0x74c1ebf8U,
+	0x87178304U
+};
+#endif
+
 static int __init siphash_test_init(void)
 {
 	u8 in[64] __aligned(SIPHASH_ALIGNMENT);
@@ -58,49 +121,75 @@ static int __init siphash_test_init(void)
 	for (i = 0; i < 64; ++i) {
 		in[i] = i;
 		in_unaligned[i + 1] = i;
-		if (siphash(in, i, test_key) != test_vectors[i]) {
-			pr_info("self-test aligned %u: FAIL\n", i + 1);
+		if (siphash(in, i, test_key_siphash) != test_vectors_siphash[i]) {
+			pr_info("siphash self-test aligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+		if (siphash(in_unaligned + 1, i, test_key_siphash) != test_vectors_siphash[i]) {
+			pr_info("siphash self-test unaligned %u: FAIL\n", i + 1);
 			ret = -EINVAL;
 		}
-		if (siphash(in_unaligned + 1, i, test_key) != test_vectors[i]) {
-			pr_info("self-test unaligned %u: FAIL\n", i + 1);
+		if (hsiphash(in, i, test_key_hsiphash) != test_vectors_hsiphash[i]) {
+			pr_info("hsiphash self-test aligned %u: FAIL\n", i + 1);
+			ret = -EINVAL;
+		}
+		if (hsiphash(in_unaligned + 1, i, test_key_hsiphash) != test_vectors_hsiphash[i]) {
+			pr_info("hsiphash self-test unaligned %u: FAIL\n", i + 1);
 			ret = -EINVAL;
 		}
 	}
-	if (siphash_1u64(0x0706050403020100ULL, test_key) != test_vectors[8]) {
-		pr_info("self-test 1u64: FAIL\n");
+	if (siphash_1u64(0x0706050403020100ULL, test_key_siphash) != test_vectors_siphash[8]) {
+		pr_info("siphash self-test 1u64: FAIL\n");
 		ret = -EINVAL;
 	}
-	if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key) != test_vectors[16]) {
-		pr_info("self-test 2u64: FAIL\n");
+	if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key_siphash) != test_vectors_siphash[16]) {
+		pr_info("siphash self-test 2u64: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
-			 0x1716151413121110ULL, test_key) != test_vectors[24]) {
-		pr_info("self-test 3u64: FAIL\n");
+			 0x1716151413121110ULL, test_key_siphash) != test_vectors_siphash[24]) {
+		pr_info("siphash self-test 3u64: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
-			 0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key) != test_vectors[32]) {
-		pr_info("self-test 4u64: FAIL\n");
+			 0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key_siphash) != test_vectors_siphash[32]) {
+		pr_info("siphash self-test 4u64: FAIL\n");
 		ret = -EINVAL;
 	}
-	if (siphash_1u32(0x03020100U, test_key) != test_vectors[4]) {
-		pr_info("self-test 1u32: FAIL\n");
+	if (siphash_1u32(0x03020100U, test_key_siphash) != test_vectors_siphash[4]) {
+		pr_info("siphash self-test 1u32: FAIL\n");
 		ret = -EINVAL;
 	}
-	if (siphash_2u32(0x03020100U, 0x07060504U, test_key) != test_vectors[8]) {
-		pr_info("self-test 2u32: FAIL\n");
+	if (siphash_2u32(0x03020100U, 0x07060504U, test_key_siphash) != test_vectors_siphash[8]) {
+		pr_info("siphash self-test 2u32: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (siphash_3u32(0x03020100U, 0x07060504U,
-			 0x0b0a0908U, test_key) != test_vectors[12]) {
-		pr_info("self-test 3u32: FAIL\n");
+			 0x0b0a0908U, test_key_siphash) != test_vectors_siphash[12]) {
+		pr_info("siphash self-test 3u32: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (siphash_4u32(0x03020100U, 0x07060504U,
-			 0x0b0a0908U, 0x0f0e0d0cU, test_key) != test_vectors[16]) {
-		pr_info("self-test 4u32: FAIL\n");
+			 0x0b0a0908U, 0x0f0e0d0cU, test_key_siphash) != test_vectors_siphash[16]) {
+		pr_info("siphash self-test 4u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_1u32(0x03020100U, test_key_hsiphash) != test_vectors_hsiphash[4]) {
+		pr_info("hsiphash self-test 1u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_2u32(0x03020100U, 0x07060504U, test_key_hsiphash) != test_vectors_hsiphash[8]) {
+		pr_info("hsiphash self-test 2u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_3u32(0x03020100U, 0x07060504U,
+			  0x0b0a0908U, test_key_hsiphash) != test_vectors_hsiphash[12]) {
+		pr_info("hsiphash self-test 3u32: FAIL\n");
+		ret = -EINVAL;
+	}
+	if (hsiphash_4u32(0x03020100U, 0x07060504U,
+			  0x0b0a0908U, 0x0f0e0d0cU, test_key_hsiphash) != test_vectors_hsiphash[16]) {
+		pr_info("hsiphash self-test 4u32: FAIL\n");
 		ret = -EINVAL;
 	}
 	if (!ret)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next 3/4] secure_seq: use SipHash in place of MD5
  2017-01-06 20:10   ` [PATCH net-next 0/4] Introduce The SipHash PRF Jason A. Donenfeld
  2017-01-06 20:10     ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
  2017-01-06 20:10     ` [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables Jason A. Donenfeld
@ 2017-01-06 20:10     ` Jason A. Donenfeld
  2017-01-06 20:10     ` [PATCH net-next 4/4] syncookies: use SipHash in place of SHA1 Jason A. Donenfeld
  3 siblings, 0 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 20:10 UTC (permalink / raw)
  To: davem, netdev, linux-kernel
  Cc: Jason A. Donenfeld, Andi Kleen, David Laight, Tom Herbert,
	Hannes Frederic Sowa, Eric Dumazet

This gives a clear speed and security improvement. SipHash is both faster
and more solid cryptographically than the aging MD5.

Rather than manually filling MD5 buffers, for IPv6 we simply lay out the
input with an anonymous struct, for which gcc generates rather efficient
code. For IPv4, we pass the values directly to the short-input convenience
functions.
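
One detail worth noting: the structs are hashed through
offsetofend(typeof(combined), dport) rather than sizeof(combined), because
__aligned(SIPHASH_ALIGNMENT) pads the struct out to a multiple of 8 bytes
and the trailing padding is not guaranteed to be initialized. A sketch of
the IPv6 layout:

	const struct {
		struct in6_addr saddr;	/* bytes  0..15 */
		struct in6_addr daddr;	/* bytes 16..31 */
		__be16 sport;		/* bytes 32..33 */
		__be16 dport;		/* bytes 34..35 */
	} __aligned(SIPHASH_ALIGNMENT) combined;
	/* sizeof(combined) == 40 due to the alignment attribute;
	 * offsetofend(typeof(combined), dport) == 36 stops at the last
	 * real field, keeping 4 possibly-uninitialized bytes out of
	 * the hash. */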

64-bit x86_64:
[    1.683628] secure_tcpv6_sequence_number_md5# cycles: 99563527
[    1.717350] secure_tcp_sequence_number_md5# cycles: 92890502
[    1.741968] secure_tcpv6_sequence_number_siphash# cycles: 67825362
[    1.762048] secure_tcp_sequence_number_siphash# cycles: 67485526

32-bit x86:
[    1.600012] secure_tcpv6_sequence_number_md5# cycles: 103227892
[    1.634219] secure_tcp_sequence_number_md5# cycles: 94732544
[    1.669102] secure_tcpv6_sequence_number_siphash# cycles: 96299384
[    1.700165] secure_tcp_sequence_number_siphash# cycles: 86015473

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Miller <davem@davemloft.net>
Cc: David Laight <David.Laight@aculab.com>
Cc: Tom Herbert <tom@herbertland.com>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
---
 net/core/secure_seq.c | 135 ++++++++++++++++++++------------------------------
 1 file changed, 54 insertions(+), 81 deletions(-)

diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c
index 88a8e429fc3e..3dc2689bcc64 100644
--- a/net/core/secure_seq.c
+++ b/net/core/secure_seq.c
@@ -1,3 +1,5 @@
+/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. */
+
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/cryptohash.h>
@@ -8,14 +10,14 @@
 #include <linux/ktime.h>
 #include <linux/string.h>
 #include <linux/net.h>
-
+#include <linux/siphash.h>
 #include <net/secure_seq.h>
 
 #if IS_ENABLED(CONFIG_IPV6) || IS_ENABLED(CONFIG_INET)
+#include <linux/in6.h>
 #include <net/tcp.h>
-#define NET_SECRET_SIZE (MD5_MESSAGE_BYTES / 4)
 
-static u32 net_secret[NET_SECRET_SIZE] ____cacheline_aligned;
+static siphash_key_t net_secret;
 
 static __always_inline void net_secret_init(void)
 {
@@ -44,80 +46,65 @@ static u32 seq_scale(u32 seq)
 u32 secure_tcpv6_sequence_number(const __be32 *saddr, const __be32 *daddr,
 				 __be16 sport, __be16 dport, u32 *tsoff)
 {
-	u32 secret[MD5_MESSAGE_BYTES / 4];
-	u32 hash[MD5_DIGEST_WORDS];
-	u32 i;
-
+	const struct {
+		struct in6_addr saddr;
+		struct in6_addr daddr;
+		__be16 sport;
+		__be16 dport;
+	} __aligned(SIPHASH_ALIGNMENT) combined = {
+		.saddr = *(struct in6_addr *)saddr,
+		.daddr = *(struct in6_addr *)daddr,
+		.sport = sport,
+		.dport = dport
+	};
+	u64 hash;
 	net_secret_init();
-	memcpy(hash, saddr, 16);
-	for (i = 0; i < 4; i++)
-		secret[i] = net_secret[i] + (__force u32)daddr[i];
-	secret[4] = net_secret[4] +
-		(((__force u16)sport << 16) + (__force u16)dport);
-	for (i = 5; i < MD5_MESSAGE_BYTES / 4; i++)
-		secret[i] = net_secret[i];
-
-	md5_transform(hash, secret);
-
-	*tsoff = sysctl_tcp_timestamps == 1 ? hash[1] : 0;
-	return seq_scale(hash[0]);
+	hash = siphash(&combined, offsetofend(typeof(combined), dport), net_secret);
+	*tsoff = sysctl_tcp_timestamps == 1 ? (hash >> 32) : 0;
+	return seq_scale(hash);
 }
 EXPORT_SYMBOL(secure_tcpv6_sequence_number);
 
 u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
 			       __be16 dport)
 {
-	u32 secret[MD5_MESSAGE_BYTES / 4];
-	u32 hash[MD5_DIGEST_WORDS];
-	u32 i;
-
+	const struct {
+		struct in6_addr saddr;
+		struct in6_addr daddr;
+		__be16 dport;
+	} __aligned(SIPHASH_ALIGNMENT) combined = {
+		.saddr = *(struct in6_addr *)saddr,
+		.daddr = *(struct in6_addr *)daddr,
+		.dport = dport
+	};
 	net_secret_init();
-	memcpy(hash, saddr, 16);
-	for (i = 0; i < 4; i++)
-		secret[i] = net_secret[i] + (__force u32) daddr[i];
-	secret[4] = net_secret[4] + (__force u32)dport;
-	for (i = 5; i < MD5_MESSAGE_BYTES / 4; i++)
-		secret[i] = net_secret[i];
-
-	md5_transform(hash, secret);
-
-	return hash[0];
+	return siphash(&combined, offsetofend(typeof(combined), dport), net_secret);
 }
 EXPORT_SYMBOL(secure_ipv6_port_ephemeral);
 #endif
 
 #ifdef CONFIG_INET
 
+/* secure_tcp_sequence_number(a, b, 0, d) == secure_ipv4_port_ephemeral(a, b, d),
+ * but fortunately, `sport' cannot be 0 under any circumstances. If this changes,
+ * it would be easy enough to have the former function use siphash_4u32, passing
+ * the arguments as separate u32.
+ */
+
 u32 secure_tcp_sequence_number(__be32 saddr, __be32 daddr,
 			       __be16 sport, __be16 dport, u32 *tsoff)
 {
-	u32 hash[MD5_DIGEST_WORDS];
-
+	u64 hash;
 	net_secret_init();
-	hash[0] = (__force u32)saddr;
-	hash[1] = (__force u32)daddr;
-	hash[2] = ((__force u16)sport << 16) + (__force u16)dport;
-	hash[3] = net_secret[15];
-
-	md5_transform(hash, net_secret);
-
-	*tsoff = sysctl_tcp_timestamps == 1 ? hash[1] : 0;
-	return seq_scale(hash[0]);
+	hash = siphash_3u32(saddr, daddr, (u32)sport << 16 | dport, net_secret);
+	*tsoff = sysctl_tcp_timestamps == 1 ? (hash >> 32) : 0;
+	return seq_scale(hash);
 }
 
 u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
 {
-	u32 hash[MD5_DIGEST_WORDS];
-
 	net_secret_init();
-	hash[0] = (__force u32)saddr;
-	hash[1] = (__force u32)daddr;
-	hash[2] = (__force u32)dport ^ net_secret[14];
-	hash[3] = net_secret[15];
-
-	md5_transform(hash, net_secret);
-
-	return hash[0];
+	return siphash_3u32(saddr, daddr, dport, net_secret);
 }
 EXPORT_SYMBOL_GPL(secure_ipv4_port_ephemeral);
 #endif
@@ -126,21 +113,11 @@ EXPORT_SYMBOL_GPL(secure_ipv4_port_ephemeral);
 u64 secure_dccp_sequence_number(__be32 saddr, __be32 daddr,
 				__be16 sport, __be16 dport)
 {
-	u32 hash[MD5_DIGEST_WORDS];
 	u64 seq;
-
 	net_secret_init();
-	hash[0] = (__force u32)saddr;
-	hash[1] = (__force u32)daddr;
-	hash[2] = ((__force u16)sport << 16) + (__force u16)dport;
-	hash[3] = net_secret[15];
-
-	md5_transform(hash, net_secret);
-
-	seq = hash[0] | (((u64)hash[1]) << 32);
+	seq = siphash_3u32(saddr, daddr, (u32)sport << 16 | dport, net_secret);
 	seq += ktime_get_real_ns();
 	seq &= (1ull << 48) - 1;
-
 	return seq;
 }
 EXPORT_SYMBOL(secure_dccp_sequence_number);
@@ -149,26 +126,22 @@ EXPORT_SYMBOL(secure_dccp_sequence_number);
 u64 secure_dccpv6_sequence_number(__be32 *saddr, __be32 *daddr,
 				  __be16 sport, __be16 dport)
 {
-	u32 secret[MD5_MESSAGE_BYTES / 4];
-	u32 hash[MD5_DIGEST_WORDS];
+	const struct {
+		struct in6_addr saddr;
+		struct in6_addr daddr;
+		__be16 sport;
+		__be16 dport;
+	} __aligned(SIPHASH_ALIGNMENT) combined = {
+		.saddr = *(struct in6_addr *)saddr,
+		.daddr = *(struct in6_addr *)daddr,
+		.sport = sport,
+		.dport = dport
+	};
 	u64 seq;
-	u32 i;
-
 	net_secret_init();
-	memcpy(hash, saddr, 16);
-	for (i = 0; i < 4; i++)
-		secret[i] = net_secret[i] + (__force u32)daddr[i];
-	secret[4] = net_secret[4] +
-		(((__force u16)sport << 16) + (__force u16)dport);
-	for (i = 5; i < MD5_MESSAGE_BYTES / 4; i++)
-		secret[i] = net_secret[i];
-
-	md5_transform(hash, secret);
-
-	seq = hash[0] | (((u64)hash[1]) << 32);
+	seq = siphash(&combined, offsetofend(typeof(combined), dport), net_secret);
 	seq += ktime_get_real_ns();
 	seq &= (1ull << 48) - 1;
-
 	return seq;
 }
 EXPORT_SYMBOL(secure_dccpv6_sequence_number);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH net-next 4/4] syncookies: use SipHash in place of SHA1
  2017-01-06 20:10   ` [PATCH net-next 0/4] Introduce The SipHash PRF Jason A. Donenfeld
                       ` (2 preceding siblings ...)
  2017-01-06 20:10     ` [PATCH net-next 3/4] secure_seq: use SipHash in place of MD5 Jason A. Donenfeld
@ 2017-01-06 20:10     ` Jason A. Donenfeld
  3 siblings, 0 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-06 20:10 UTC (permalink / raw)
  To: davem, netdev, linux-kernel; +Cc: Jason A. Donenfeld, Eric Dumazet

SHA1 is slower and less secure than SipHash, so replacing syncookie
generation with SipHash makes natural sense. In fact, some BSDs have been
doing this for several years.

The speedup should be comparable to -- if not greater than -- the speedup
from the sequence number conversion earlier in this series.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>
---
 net/ipv4/syncookies.c | 20 ++++----------------
 net/ipv6/syncookies.c | 37 ++++++++++++++++---------------------
 2 files changed, 20 insertions(+), 37 deletions(-)

diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index 3e88467d70ee..03bb068f8888 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -13,13 +13,13 @@
 #include <linux/tcp.h>
 #include <linux/slab.h>
 #include <linux/random.h>
-#include <linux/cryptohash.h>
+#include <linux/siphash.h>
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <net/tcp.h>
 #include <net/route.h>
 
-static u32 syncookie_secret[2][16-4+SHA_DIGEST_WORDS] __read_mostly;
+static siphash_key_t syncookie_secret[2] __read_mostly;
 
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
@@ -48,24 +48,12 @@ static u32 syncookie_secret[2][16-4+SHA_DIGEST_WORDS] __read_mostly;
 #define TSBITS	6
 #define TSMASK	(((__u32)1 << TSBITS) - 1)
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], ipv4_cookie_scratch);
-
 static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
 		       u32 count, int c)
 {
-	__u32 *tmp;
-
 	net_get_random_once(syncookie_secret, sizeof(syncookie_secret));
-
-	tmp  = this_cpu_ptr(ipv4_cookie_scratch);
-	memcpy(tmp + 4, syncookie_secret[c], sizeof(syncookie_secret[c]));
-	tmp[0] = (__force u32)saddr;
-	tmp[1] = (__force u32)daddr;
-	tmp[2] = ((__force u32)sport << 16) + (__force u32)dport;
-	tmp[3] = count;
-	sha_transform(tmp + 16, (__u8 *)tmp, tmp + 16 + 5);
-
-	return tmp[17];
+	return siphash_4u32(saddr, daddr, (u32)sport << 16 | dport, count,
+			    syncookie_secret[c]);
 }
 
 
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index a4d49760bf43..be51fc0d99ad 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -16,7 +16,7 @@
 
 #include <linux/tcp.h>
 #include <linux/random.h>
-#include <linux/cryptohash.h>
+#include <linux/siphash.h>
 #include <linux/kernel.h>
 #include <net/ipv6.h>
 #include <net/tcp.h>
@@ -24,7 +24,7 @@
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static u32 syncookie6_secret[2][16-4+SHA_DIGEST_WORDS] __read_mostly;
+static siphash_key_t syncookie6_secret[2] __read_mostly;
 
 /* RFC 2460, Section 8.3:
  * [ipv6 tcp] MSS must be computed as the maximum packet size minus 60 [..]
@@ -41,30 +41,25 @@ static __u16 const msstab[] = {
 	9000 - 60,
 };
 
-static DEFINE_PER_CPU(__u32 [16 + 5 + SHA_WORKSPACE_WORDS], ipv6_cookie_scratch);
-
 static u32 cookie_hash(const struct in6_addr *saddr, const struct in6_addr *daddr,
 		       __be16 sport, __be16 dport, u32 count, int c)
 {
-	__u32 *tmp;
+	const struct {
+		struct in6_addr saddr;
+		struct in6_addr daddr;
+		u32 count;
+		u16 sport;
+		u16 dport;
+	} __aligned(SIPHASH_ALIGNMENT) combined = {
+		.saddr = *saddr,
+		.daddr = *daddr,
+		.count = count,
+		.sport = sport,
+		.dport = dport
+	};
 
 	net_get_random_once(syncookie6_secret, sizeof(syncookie6_secret));
-
-	tmp  = this_cpu_ptr(ipv6_cookie_scratch);
-
-	/*
-	 * we have 320 bits of information to hash, copy in the remaining
-	 * 192 bits required for sha_transform, from the syncookie6_secret
-	 * and overwrite the digest with the secret
-	 */
-	memcpy(tmp + 10, syncookie6_secret[c], 44);
-	memcpy(tmp, saddr, 16);
-	memcpy(tmp + 4, daddr, 16);
-	tmp[8] = ((__force u32)sport << 16) + (__force u32)dport;
-	tmp[9] = count;
-	sha_transform(tmp + 16, (__u8 *)tmp, tmp + 16 + 5);
-
-	return tmp[17];
+	return siphash(&combined, offsetofend(typeof(combined), dport), syncookie6_secret[c]);
 }
 
 static __u32 secure_tcp_syn_cookie(const struct in6_addr *saddr,
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-06 20:10     ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
@ 2017-01-06 20:32       ` Jean-Philippe Aumasson
  2017-01-06 20:41         ` David Miller
  2017-01-07  4:04       ` Eric Biggers
  1 sibling, 1 reply; 18+ messages in thread
From: Jean-Philippe Aumasson @ 2017-01-06 20:32 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: David Miller, Netdev, LKML, Linus Torvalds, Eric Biggers,
	David Laight, Eric Dumazet

> On Fri, Jan 6, 2017 at 9:11 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>
> SipHash is a 64-bit keyed hash function that is actually a
> cryptographically secure PRF, like HMAC. Except SipHash is super fast,
> and is meant to be used as a hashtable keyed lookup function, or as a
> general PRF for short input use cases, such as sequence numbers or RNG
> chaining.
>
> For the first usage:
>
> There are a variety of attacks known as "hashtable poisoning" in which an
> attacker forms some data such that the hash of that data will be the
> same, and then preceeds to fill up all entries of a hashbucket. This is
> a realistic and well-known denial-of-service vector. Currently
> hashtables use jhash, which is fast but not secure, and some kind of
> rotating key scheme (or none at all, which isn't good). SipHash is meant
> as a replacement for jhash in these cases.
>
> There are a modicum of places in the kernel that are vulnerable to
> hashtable poisoning attacks, either via userspace vectors or network
> vectors, and there's not a reliable mechanism inside the kernel at the
> moment to fix it. The first step toward fixing these issues is actually
> getting a secure primitive into the kernel for developers to use. Then
> we can, bit by bit, port things over to it as deemed appropriate.
>
> While SipHash is extremely fast for a cryptographically secure function,
> it is likely a bit slower than the insecure jhash, and so replacements
> will be evaluated on a case-by-case basis based on whether or not the
> difference in speed is negligible and whether or not the current jhash usage
> poses a real security risk.
>
> For the second usage:
>
> A few places in the kernel are using MD5 or SHA1 for creating secure
> sequence numbers, syn cookies, port numbers, or fast random numbers.
> SipHash is a faster and more fitting, and more secure replacement for MD5
> in those situations. Replacing MD5 and SHA1 with SipHash for these uses is
> obvious and straight-forward, and so is submitted along with this patch
> series. There shouldn't be much of a debate over its efficacy.
>
> Dozens of languages are already using this internally for their hash
> tables and PRFs. Some of the BSDs already use this in their kernels.
> SipHash is a widely known high-speed solution to a widely known set of
> problems, and it's time we catch-up.
>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Eric Biggers <ebiggers3@gmail.com>
> Cc: David Laight <David.Laight@aculab.com>
> Cc: Eric Dumazet <eric.dumazet@gmail.com>
> ---
> Documentation/siphash.txt | 79 ++++++++++++++++
> MAINTAINERS | 7 ++
> include/linux/siphash.h | 79 ++++++++++++++++
> lib/Kconfig.debug | 6 +-
> lib/Makefile | 5 +-
> lib/siphash.c | 232 ++++++++++++++++++++++++++++++++++++++++++++++
> lib/test_siphash.c | 119 ++++++++++++++++++++++++
> 7 files changed, 522 insertions(+), 5 deletions(-)
> create mode 100644 Documentation/siphash.txt
> create mode 100644 include/linux/siphash.h
> create mode 100644 lib/siphash.c
> create mode 100644 lib/test_siphash.c
>
> diff --git a/Documentation/siphash.txt b/Documentation/siphash.txt
> new file mode 100644
> index 000000000000..39ff7f0438e7
> --- /dev/null
> +++ b/Documentation/siphash.txt
> @@ -0,0 +1,79 @@
> + SipHash - a short input PRF
> +-----------------------------------------------
> +Written by Jason A. Donenfeld <jason@zx2c4.com>
> +
> +SipHash is a cryptographically secure PRF -- a keyed hash function -- that
> +performs very well for short inputs, hence the name. It was designed by
> +cryptographers Daniel J. Bernstein and Jean-Philippe Aumasson. It is intended
> +as a replacement for some uses of: `jhash`, `md5_transform`, `sha_transform`,
> +and so forth.
> +
> +SipHash takes a secret key filled with randomly generated numbers and either
> +an input buffer or several input integers. It spits out an integer that is
> +indistinguishable from random. You may then use that integer as part of secure
> +sequence numbers, secure cookies, or mask it off for use in a hash table.
> +
> +1. Generating a key
> +
> +Keys should always be generated from a cryptographically secure source of
> +random numbers, either using get_random_bytes or get_random_once:
> +
> +siphash_key_t key;
> +get_random_bytes(key, sizeof(key));
> +
> +If you're not deriving your key from here, you're doing it wrong.
> +
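A minimal sketch of the get_random_once flavor mentioned above (modeled on
the syncookie conversion later in this series; `my_key' and `buf' are
invented names for the example):

	static siphash_key_t my_key __read_mostly;

	/* keys itself lazily, on first use */
	get_random_once(my_key, sizeof(my_key));
	h = siphash(buf, len, my_key);
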
> +2. Using the functions
> +
> +There are two variants of the function, one that takes a list of integers, and
> +one that takes a buffer:
> +
> +u64 siphash(const void *data, size_t len, siphash_key_t key);
> +
> +And:
> +
> +u64 siphash_1u64(u64, siphash_key_t key);
> +u64 siphash_2u64(u64, u64, siphash_key_t key);
> +u64 siphash_3u64(u64, u64, u64, siphash_key_t key);
> +u64 siphash_4u64(u64, u64, u64, u64, siphash_key_t key);
> +u64 siphash_1u32(u32, siphash_key_t key);
> +u64 siphash_2u32(u32, u32, siphash_key_t key);
> +u64 siphash_3u32(u32, u32, u32, siphash_key_t key);
> +u64 siphash_4u32(u32, u32, u32, u32, siphash_key_t key);
> +
> +If you pass the generic siphash function something of a constant length, it
> +will constant fold at compile-time and automatically choose one of the
> +optimized functions.
> +
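To make the constant folding concrete (an untested sketch; the struct and
field names are invented for the example):

	struct flow_tuple {
		u64 src;
		u64 dst;
	} __aligned(SIPHASH_ALIGNMENT);

	static u64 hash_tuple(const struct flow_tuple *t, const siphash_key_t key)
	{
		/* sizeof(*t) == 16 is a compile-time constant, so this call
		 * resolves to siphash_2u64() with no length loop at all.
		 */
		return siphash(t, sizeof(*t), key);
	}
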
> +3. Hashtable key function usage:
> +
> +struct some_hashtable {
> + DECLARE_HASHTABLE(hashtable, 8);
> + siphash_key_t key;
> +};
> +
> +void init_hashtable(struct some_hashtable *table)
> +{
> + get_random_bytes(table->key, sizeof(table->key));
> +}
> +
> +static inline struct hlist_head *some_hashtable_bucket(struct some_hashtable *table, struct interesting_input *input)
> +{
> + return &table->hashtable[siphash(input, sizeof(*input), table->key) & (HASH_SIZE(table->hashtable) - 1)];
> +}
> +
> +You may then iterate like usual over the returned hash bucket.
> +
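For instance, inside a lookup helper, iterating that bucket might look like
this (sketch only; it assumes `struct interesting_input' embeds a
`struct hlist_node node' member and that an `input_equal()' helper exists --
neither is shown above):

	struct interesting_input *pos;

	hlist_for_each_entry(pos, some_hashtable_bucket(table, input), node) {
		if (input_equal(pos, input))
			return pos;
	}
	return NULL;
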
> +4. Security
> +
> +SipHash has a very high security margin, with its 128-bit key. So long as the
> +key is kept secret, an attacker cannot predict the outputs of the function,
> +even after observing many of them, since brute-forcing a 2^128 key space is
> +infeasible.
> +
> +Linux implements the "2-4" variant of SipHash.
> +
> +5. Resources
> +
> +Read the SipHash paper if you're interested in learning more:
> +https://131002.net/siphash/siphash.pdf
> diff --git a/MAINTAINERS b/MAINTAINERS
> index cfff2c9e3d94..e1384ae37344 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -11291,6 +11291,13 @@ F: arch/arm/mach-s3c24xx/mach-bast.c
> F: arch/arm/mach-s3c24xx/bast-ide.c
> F: arch/arm/mach-s3c24xx/bast-irq.c
>
> +SIPHASH PRF ROUTINES
> +M: Jason A. Donenfeld <Jason@zx2c4.com>
> +S: Maintained
> +F: lib/siphash.c
> +F: lib/test_siphash.c
> +F: include/linux/siphash.h
> +
> TI DAVINCI MACHINE SUPPORT
> M: Sekhar Nori <nsekhar@ti.com>
> M: Kevin Hilman <khilman@kernel.org>
> diff --git a/include/linux/siphash.h b/include/linux/siphash.h
> new file mode 100644
> index 000000000000..7aa666eb00d9
> --- /dev/null
> +++ b/include/linux/siphash.h
> @@ -0,0 +1,79 @@
> +/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + *
> + * This file is provided under a dual BSD/GPLv2 license.
> + *
> + * SipHash: a fast short-input PRF
> + * https://131002.net/siphash/
> + *
> + * This implementation is specifically for SipHash2-4.
> + */
> +
> +#ifndef _LINUX_SIPHASH_H
> +#define _LINUX_SIPHASH_H
> +
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +
> +#define SIPHASH_ALIGNMENT __alignof__(u64)
> +typedef u64 siphash_key_t[2];
> +
> +u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t key);
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t key);
> +#endif
> +
> +u64 siphash_1u64(const u64 a, const siphash_key_t key);
> +u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t key);
> +u64 siphash_3u64(const u64 a, const u64 b, const u64 c,
> + const siphash_key_t key);
> +u64 siphash_4u64(const u64 a, const u64 b, const u64 c, const u64 d,
> + const siphash_key_t key);
> +u64 siphash_1u32(const u32 a, const siphash_key_t key);
> +u64 siphash_3u32(const u32 a, const u32 b, const u32 c, const siphash_key_t key);
> +
> +static inline u64 siphash_2u32(const u32 a, const u32 b, const siphash_key_t key)
> +{
> + return siphash_1u64((u64)b << 32 | a, key);
> +}
> +static inline u64 siphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
> + const siphash_key_t key)
> +{
> + return siphash_2u64((u64)b << 32 | a, (u64)d << 32 | c, key);
> +}
> +
> +
> +static inline u64 ___siphash_aligned(const __le64 *data, size_t len, const siphash_key_t key)
> +{
> + if (__builtin_constant_p(len) && len == 4)
> + return siphash_1u32(le32_to_cpu(data[0]), key);
> + if (__builtin_constant_p(len) && len == 8)
> + return siphash_1u64(le64_to_cpu(data[0]), key);
> + if (__builtin_constant_p(len) && len == 16)
> + return siphash_2u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
> + key);
> + if (__builtin_constant_p(len) && len == 24)
> + return siphash_3u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
> + le64_to_cpu(data[2]), key);
> + if (__builtin_constant_p(len) && len == 32)
> + return siphash_4u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]),
> + le64_to_cpu(data[2]), le64_to_cpu(data[3]),
> + key);
> + return __siphash_aligned(data, len, key);
> +}
> +
> +/**
> + * siphash - compute 64-bit siphash PRF value
> + * @data: buffer to hash
> + * @len: length of @data, in bytes
> + * @key: the siphash key
> + */
> +static inline u64 siphash(const void *data, size_t len, const siphash_key_t key)
> +{
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> + if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
> + return __siphash_unaligned(data, len, key);
> +#endif
> + return ___siphash_aligned(data, len, key);
> +}
> +
> +#endif /* _LINUX_SIPHASH_H */
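A side note on the alignment dispatch above: on architectures without
efficient unaligned access, a caller hashing data at an arbitrary byte
offset transparently takes the __siphash_unaligned() path, e.g. (sketch;
`pkt' and `off' are invented names):

	/* pkt + off may sit at any byte offset; still safe and correct */
	u64 h = siphash(pkt + off, 13, key);

so the __aligned(SIPHASH_ALIGNMENT) annotation is only needed when a caller
wants to guarantee the faster aligned path.
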
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index b06848a104e6..3d2515a770c3 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1819,9 +1819,9 @@ config TEST_HASH
> tristate "Perform selftest on hash functions"
> default n
> help
> - Enable this option to test the kernel's integer (<linux/hash,h>)
> - and string (<linux/stringhash.h>) hash functions on boot
> - (or module load).
> + Enable this option to test the kernel's integer (<linux/hash.h>),
> + string (<linux/stringhash.h>), and siphash (<linux/siphash.h>)
> + hash functions on boot (or module load).
>
> This is intended to help people writing architecture-specific
> optimized versions. If unsure, say N.
> diff --git a/lib/Makefile b/lib/Makefile
> index bc4073a8cd08..7b3008d58600 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -22,7 +22,8 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
> sha1.o chacha20.o md5.o irq_regs.o argv_split.o \
> flex_proportions.o ratelimit.o show_mem.o \
> is_single_threaded.o plist.o decompress.o kobject_uevent.o \
> - earlycpio.o seq_buf.o nmi_backtrace.o nodemask.o win_minmax.o
> + earlycpio.o seq_buf.o siphash.o \
> + nmi_backtrace.o nodemask.o win_minmax.o
>
> lib-$(CONFIG_MMU) += ioremap.o
> lib-$(CONFIG_SMP) += cpumask.o
> @@ -44,7 +45,7 @@ obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
> obj-y += kstrtox.o
> obj-$(CONFIG_TEST_BPF) += test_bpf.o
> obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
> -obj-$(CONFIG_TEST_HASH) += test_hash.o
> +obj-$(CONFIG_TEST_HASH) += test_hash.o test_siphash.o
> obj-$(CONFIG_TEST_KASAN) += test_kasan.o
> obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
> obj-$(CONFIG_TEST_LKM) += test_module.o
> diff --git a/lib/siphash.c b/lib/siphash.c
> new file mode 100644
> index 000000000000..ff2151313667
> --- /dev/null
> +++ b/lib/siphash.c
> @@ -0,0 +1,232 @@
> +/* Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + *
> + * This file is provided under a dual BSD/GPLv2 license.
> + *
> + * SipHash: a fast short-input PRF
> + * https://131002.net/siphash/
> + *
> + * This implementation is specifically for SipHash2-4.
> + */
> +
> +#include <linux/siphash.h>
> +#include <asm/unaligned.h>
> +
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> +#include <linux/dcache.h>
> +#include <asm/word-at-a-time.h>
> +#endif
> +
> +#define SIPROUND \
> + do { \
> + v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
> + v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
> + v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
> + v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
> + } while(0)
> +
> +#define PREAMBLE(len) \
> + u64 v0 = 0x736f6d6570736575ULL; \
> + u64 v1 = 0x646f72616e646f6dULL; \
> + u64 v2 = 0x6c7967656e657261ULL; \
> + u64 v3 = 0x7465646279746573ULL; \
> + u64 b = ((u64)len) << 56; \
> + v3 ^= key[1]; \
> + v2 ^= key[0]; \
> + v1 ^= key[1]; \
> + v0 ^= key[0];
> +
> +#define POSTAMBLE \
> + v3 ^= b; \
> + SIPROUND; \
> + SIPROUND; \
> + v0 ^= b; \
> + v2 ^= 0xff; \
> + SIPROUND; \
> + SIPROUND; \
> + SIPROUND; \
> + SIPROUND; \
> + return (v0 ^ v1) ^ (v2 ^ v3);
> +
> +u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t key)
> +{
> + const u8 *end = data + len - (len % sizeof(u64));
> + const u8 left = len & (sizeof(u64) - 1);
> + u64 m;
> + PREAMBLE(len)
> + for (; data != end; data += sizeof(u64)) {
> + m = le64_to_cpup(data);
> + v3 ^= m;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= m;
> + }
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> + if (left)
> + b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
> + bytemask_from_count(left)));
> +#else
> + switch (left) {
> + case 7: b |= ((u64)end[6]) << 48;
> + case 6: b |= ((u64)end[5]) << 40;
> + case 5: b |= ((u64)end[4]) << 32;
> + case 4: b |= le32_to_cpup(data); break;
> + case 3: b |= ((u64)end[2]) << 16;
> + case 2: b |= le16_to_cpup(data); break;
> + case 1: b |= end[0];
> + }
> +#endif
> + POSTAMBLE
> +}
> +EXPORT_SYMBOL(__siphash_aligned);
> +
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t key)
> +{
> + const u8 *end = data + len - (len % sizeof(u64));
> + const u8 left = len & (sizeof(u64) - 1);
> + u64 m;
> + PREAMBLE(len)
> + for (; data != end; data += sizeof(u64)) {
> + m = get_unaligned_le64(data);
> + v3 ^= m;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= m;
> + }
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> + if (left)
> + b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
> + bytemask_from_count(left)));
> +#else
> + switch (left) {
> + case 7: b |= ((u64)end[6]) << 48;
> + case 6: b |= ((u64)end[5]) << 40;
> + case 5: b |= ((u64)end[4]) << 32;
> + case 4: b |= get_unaligned_le32(end); break;
> + case 3: b |= ((u64)end[2]) << 16;
> + case 2: b |= get_unaligned_le16(end); break;
> + case 1: b |= end[0];
> + }
> +#endif
> + POSTAMBLE
> +}
> +EXPORT_SYMBOL(__siphash_unaligned);
> +#endif
> +
> +/**
> + * siphash_1u64 - compute 64-bit siphash PRF value of a u64
> + * @first: first u64
> + * @key: the siphash key
> + */
> +u64 siphash_1u64(const u64 first, const siphash_key_t key)
> +{
> + PREAMBLE(8)
> + v3 ^= first;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= first;
> + POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_1u64);
> +
> +/**
> + * siphash_2u64 - compute 64-bit siphash PRF value of 2 u64
> + * @first: first u64
> + * @second: second u64
> + * @key: the siphash key
> + */
> +u64 siphash_2u64(const u64 first, const u64 second, const siphash_key_t key)
> +{
> + PREAMBLE(16)
> + v3 ^= first;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= first;
> + v3 ^= second;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= second;
> + POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_2u64);
> +
> +/**
> + * siphash_3u64 - compute 64-bit siphash PRF value of 3 u64
> + * @first: first u64
> + * @second: second u64
> + * @third: third u64
> + * @key: the siphash key
> + */
> +u64 siphash_3u64(const u64 first, const u64 second, const u64 third,
> + const siphash_key_t key)
> +{
> + PREAMBLE(24)
> + v3 ^= first;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= first;
> + v3 ^= second;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= second;
> + v3 ^= third;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= third;
> + POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_3u64);
> +
> +/**
> + * siphash_4u64 - compute 64-bit siphash PRF value of 4 u64
> + * @first: first u64
> + * @second: second u64
> + * @third: third u64
> + * @forth: fourth u64
> + * @key: the siphash key
> + */
> +u64 siphash_4u64(const u64 first, const u64 second, const u64 third,
> + const u64 forth, const siphash_key_t key)
> +{
> + PREAMBLE(32)
> + v3 ^= first;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= first;
> + v3 ^= second;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= second;
> + v3 ^= third;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= third;
> + v3 ^= forth;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= forth;
> + POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_4u64);
> +
> +u64 siphash_1u32(const u32 first, const siphash_key_t key)
> +{
> + PREAMBLE(4)
> + b |= first;
> + POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_1u32);
> +
> +u64 siphash_3u32(const u32 first, const u32 second, const u32 third,
> + const siphash_key_t key)
> +{
> + u64 combined = (u64)second << 32 | first;
> + PREAMBLE(12)
> + v3 ^= combined;
> + SIPROUND;
> + SIPROUND;
> + v0 ^= combined;
> + b |= third;
> + POSTAMBLE
> +}
> +EXPORT_SYMBOL(siphash_3u32);
> diff --git a/lib/test_siphash.c b/lib/test_siphash.c
> new file mode 100644
> index 000000000000..e0ba2cf8dc67
> --- /dev/null
> +++ b/lib/test_siphash.c
> @@ -0,0 +1,119 @@
> +/* Test cases for siphash.c
> + *
> + * Copyright (C) 2016 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> + *
> + * This file is provided under a dual BSD/GPLv2 license.
> + *
> + * SipHash: a fast short-input PRF
> + * https://131002.net/siphash/
> + *
> + * This implementation is specifically for SipHash2-4.
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/siphash.h>
> +#include <linux/kernel.h>
> +#include <linux/string.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +
> +/* Test vectors taken from official reference source available at:
> + * https://131002.net/siphash/siphash24.c
> + */
> +static const u64 test_vectors[64] = {
> + 0x726fdb47dd0e0e31ULL, 0x74f839c593dc67fdULL, 0x0d6c8009d9a94f5aULL,
> + 0x85676696d7fb7e2dULL, 0xcf2794e0277187b7ULL, 0x18765564cd99a68dULL,
> + 0xcbc9466e58fee3ceULL, 0xab0200f58b01d137ULL, 0x93f5f5799a932462ULL,
> + 0x9e0082df0ba9e4b0ULL, 0x7a5dbbc594ddb9f3ULL, 0xf4b32f46226bada7ULL,
> + 0x751e8fbc860ee5fbULL, 0x14ea5627c0843d90ULL, 0xf723ca908e7af2eeULL,
> + 0xa129ca6149be45e5ULL, 0x3f2acc7f57c29bdbULL, 0x699ae9f52cbe4794ULL,
> + 0x4bc1b3f0968dd39cULL, 0xbb6dc91da77961bdULL, 0xbed65cf21aa2ee98ULL,
> + 0xd0f2cbb02e3b67c7ULL, 0x93536795e3a33e88ULL, 0xa80c038ccd5ccec8ULL,
> + 0xb8ad50c6f649af94ULL, 0xbce192de8a85b8eaULL, 0x17d835b85bbb15f3ULL,
> + 0x2f2e6163076bcfadULL, 0xde4daaaca71dc9a5ULL, 0xa6a2506687956571ULL,
> + 0xad87a3535c49ef28ULL, 0x32d892fad841c342ULL, 0x7127512f72f27cceULL,
> + 0xa7f32346f95978e3ULL, 0x12e0b01abb051238ULL, 0x15e034d40fa197aeULL,
> + 0x314dffbe0815a3b4ULL, 0x027990f029623981ULL, 0xcadcd4e59ef40c4dULL,
> + 0x9abfd8766a33735cULL, 0x0e3ea96b5304a7d0ULL, 0xad0c42d6fc585992ULL,
> + 0x187306c89bc215a9ULL, 0xd4a60abcf3792b95ULL, 0xf935451de4f21df2ULL,
> + 0xa9538f0419755787ULL, 0xdb9acddff56ca510ULL, 0xd06c98cd5c0975ebULL,
> + 0xe612a3cb9ecba951ULL, 0xc766e62cfcadaf96ULL, 0xee64435a9752fe72ULL,
> + 0xa192d576b245165aULL, 0x0a8787bf8ecb74b2ULL, 0x81b3e73d20b49b6fULL,
> + 0x7fa8220ba3b2eceaULL, 0x245731c13ca42499ULL, 0xb78dbfaf3a8d83bdULL,
> + 0xea1ad565322a1a0bULL, 0x60e61c23a3795013ULL, 0x6606d7e446282b93ULL,
> + 0x6ca4ecb15c5f91e1ULL, 0x9f626da15c9625f3ULL, 0xe51b38608ef25f57ULL,
> + 0x958a324ceb064572ULL
> +};
> +static const siphash_key_t test_key =
> + { 0x0706050403020100ULL , 0x0f0e0d0c0b0a0908ULL };
> +
> +static int __init siphash_test_init(void)
> +{
> + u8 in[64] __aligned(SIPHASH_ALIGNMENT);
> + u8 in_unaligned[65];
> + u8 i;
> + int ret = 0;
> +
> + for (i = 0; i < 64; ++i) {
> + in[i] = i;
> + in_unaligned[i + 1] = i;
> + if (siphash(in, i, test_key) != test_vectors[i]) {
> + pr_info("self-test aligned %u: FAIL\n", i + 1);
> + ret = -EINVAL;
> + }
> + if (siphash(in_unaligned + 1, i, test_key) != test_vectors[i]) {
> + pr_info("self-test unaligned %u: FAIL\n", i + 1);
> + ret = -EINVAL;
> + }
> + }
> + if (siphash_1u64(0x0706050403020100ULL, test_key) != test_vectors[8]) {
> + pr_info("self-test 1u64: FAIL\n");
> + ret = -EINVAL;
> + }
> + if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key) != test_vectors[16]) {
> + pr_info("self-test 2u64: FAIL\n");
> + ret = -EINVAL;
> + }
> + if (siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
> + 0x1716151413121110ULL, test_key) != test_vectors[24]) {
> + pr_info("self-test 3u64: FAIL\n");
> + ret = -EINVAL;
> + }
> + if (siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
> + 0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key) != test_vectors[32]) {
> + pr_info("self-test 4u64: FAIL\n");
> + ret = -EINVAL;
> + }
> + if (siphash_1u32(0x03020100U, test_key) != test_vectors[4]) {
> + pr_info("self-test 1u32: FAIL\n");
> + ret = -EINVAL;
> + }
> + if (siphash_2u32(0x03020100U, 0x07060504U, test_key) != test_vectors[8]) {
> + pr_info("self-test 2u32: FAIL\n");
> + ret = -EINVAL;
> + }
> + if (siphash_3u32(0x03020100U, 0x07060504U,
> + 0x0b0a0908U, test_key) != test_vectors[12]) {
> + pr_info("self-test 3u32: FAIL\n");
> + ret = -EINVAL;
> + }
> + if (siphash_4u32(0x03020100U, 0x07060504U,
> + 0x0b0a0908U, 0x0f0e0d0cU, test_key) != test_vectors[16]) {
> + pr_info("self-test 4u32: FAIL\n");
> + ret = -EINVAL;
> + }
> + if (!ret)
> + pr_info("self-tests: pass\n");
> + return ret;
> +}
> +
> +static void __exit siphash_test_exit(void)
> +{
> +}
> +
> +module_init(siphash_test_init);
> +module_exit(siphash_test_exit);
> +
> +MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
> +MODULE_LICENSE("Dual BSD/GPL");
> --
> 2.11.0


Looks good to me (co-designer of SipHash).

Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables
  2017-01-06 20:10     ` [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables Jason A. Donenfeld
@ 2017-01-06 20:33       ` Jean-Philippe Aumasson
  0 siblings, 0 replies; 18+ messages in thread
From: Jean-Philippe Aumasson @ 2017-01-06 20:33 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: David Miller, Netdev, LKML

On Fri, Jan 6, 2017 at 9:10 PM, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> HalfSipHash, or hsiphash, is a shortened version of SipHash, which
> generates 32-bit outputs using a weaker 64-bit key. It has *much* lower
> security margins, and shouldn't be used for anything too sensitive, but
> it could be used as a hashtable key function replacement, if the output
> is never exposed, and if the security requirement is not too high.
>
> The goal is to make this something that performance-critical jhash users
> would be willing to use.
>
> On 64-bit machines, HalfSipHash1-3 is slower than SipHash1-3, so on those
> systems we alias hsiphash() to SipHash1-3.
>
> 64-bit x86_64:
> [    0.509409] test_siphash:     SipHash2-4 cycles: 4049181
> [    0.510650] test_siphash:     SipHash1-3 cycles: 2512884
> [    0.512205] test_siphash: HalfSipHash1-3 cycles: 3429920
> [    0.512904] test_siphash:    JenkinsHash cycles:  978267
> So, we map hsiphash() -> SipHash1-3
>
> 32-bit x86:
> [    0.509868] test_siphash:     SipHash2-4 cycles: 14812892
> [    0.513601] test_siphash:     SipHash1-3 cycles:  9510710
> [    0.515263] test_siphash: HalfSipHash1-3 cycles:  3856157
> [    0.515952] test_siphash:    JenkinsHash cycles:  1148567
> So, we map hsiphash() -> HalfSipHash1-3
>
> hsiphash() is roughly 3 times slower than jhash(), but comes with a
> considerable security improvement.
>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> Cc: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
> ---
>  Documentation/siphash.txt |  75 +++++++++++
>  include/linux/siphash.h   |  56 +++++++-
>  lib/siphash.c             | 318 +++++++++++++++++++++++++++++++++++++++++++++-
>  lib/test_siphash.c        | 139 ++++++++++++++++----
>  4 files changed, 561 insertions(+), 27 deletions(-)
>
> diff --git a/Documentation/siphash.txt b/Documentation/siphash.txt
> index 39ff7f0438e7..f93c1d7104c4 100644
> --- a/Documentation/siphash.txt
> +++ b/Documentation/siphash.txt
> @@ -77,3 +77,78 @@ Linux implements the "2-4" variant of SipHash.
>
>  Read the SipHash paper if you're interested in learning more:
>  https://131002.net/siphash/siphash.pdf
> +
> +
> +~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~=~
> +
> +HalfSipHash - SipHash's insecure younger cousin
> +-----------------------------------------------
> +Written by Jason A. Donenfeld <jason@zx2c4.com>
> +
> +On the off-chance that SipHash is not fast enough for your needs, you might be
> +able to justify using HalfSipHash, a terrifying but potentially useful
> +possibility. HalfSipHash cuts SipHash's rounds down from "2-4" to "1-3" and,
> +even scarier, uses an easily brute-forceable 64-bit key (with a 32-bit output)
> +instead of SipHash's 128-bit key. However, this may appeal to some
> +high-performance `jhash` users.
> +
> +Danger!
> +
> +Do not ever use HalfSipHash except as a hashtable key function, and only
> +then when you can be absolutely certain that the outputs will never be
> +transmitted out of the kernel. Its only advantage over `jhash` is as a
> +means of mitigating hashtable flooding denial-of-service attacks.
> +
> +1. Generating a key
> +
> +Keys should always be generated from a cryptographically secure source of
> +random numbers, either using get_random_bytes or get_random_once:
> +
> +hsiphash_key_t key;
> +get_random_bytes(key, sizeof(key));
> +
> +If you're not deriving your key from here, you're doing it wrong.
> +
> +2. Using the functions
> +
> +There are two variants of the function, one that takes a list of integers, and
> +one that takes a buffer:
> +
> +u32 hsiphash(const void *data, size_t len, hsiphash_key_t key);
> +
> +And:
> +
> +u32 hsiphash_1u32(u32, hsiphash_key_t key);
> +u32 hsiphash_2u32(u32, u32, hsiphash_key_t key);
> +u32 hsiphash_3u32(u32, u32, u32, hsiphash_key_t key);
> +u32 hsiphash_4u32(u32, u32, u32, u32, hsiphash_key_t key);
> +
> +If you pass the generic hsiphash function something of a constant length, it
> +will constant fold at compile-time and automatically choose one of the
> +optimized functions.
> +
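As with siphash(), a constant length folds down to the fixed-width helpers
(sketch; the struct and the values x, y, z, w are invented for the example):

	struct { u32 a, b, c, d; } __aligned(HSIPHASH_ALIGNMENT) v = { x, y, z, w };

	/* sizeof(v) == 16, so this compiles down to hsiphash_4u32() */
	u32 h = hsiphash(&v, sizeof(v), key);
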
> +3. Hashtable key function usage:
> +
> +struct some_hashtable {
> +       DECLARE_HASHTABLE(hashtable, 8);
> +       hsiphash_key_t key;
> +};
> +
> +void init_hashtable(struct some_hashtable *table)
> +{
> +       get_random_bytes(table->key, sizeof(table->key));
> +}
> +
> +static inline struct hlist_head *some_hashtable_bucket(struct some_hashtable *table, struct interesting_input *input)
> +{
> +       return &table->hashtable[hsiphash(input, sizeof(*input), table->key) & (HASH_SIZE(table->hashtable) - 1)];
> +}
> +
> +You may then iterate like usual over the returned hash bucket.
> +
> +4. Performance
> +
> +HalfSipHash is roughly 3 times slower than JenkinsHash. For many replacements,
> +this will not be a problem, as the hashtable lookup isn't the bottleneck. In
> +general, this is a worthwhile trade-off for the security and DoS resistance
> +that HalfSipHash provides.
> diff --git a/include/linux/siphash.h b/include/linux/siphash.h
> index 7aa666eb00d9..efab44c654f3 100644
> --- a/include/linux/siphash.h
> +++ b/include/linux/siphash.h
> @@ -5,7 +5,9 @@
>   * SipHash: a fast short-input PRF
>   * https://131002.net/siphash/
>   *
> - * This implementation is specifically for SipHash2-4.
> + * This implementation is specifically for SipHash2-4 for a secure PRF
> + * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
> + * hashtables.
>   */
>
>  #ifndef _LINUX_SIPHASH_H
> @@ -76,4 +78,56 @@ static inline u64 siphash(const void *data, size_t len, const siphash_key_t key)
>         return ___siphash_aligned(data, len, key);
>  }
>
> +#if BITS_PER_LONG == 64
> +typedef siphash_key_t hsiphash_key_t;
> +#define HSIPHASH_ALIGNMENT SIPHASH_ALIGNMENT
> +#else
> +typedef u32 hsiphash_key_t[2];
> +#define HSIPHASH_ALIGNMENT __alignof__(u32)
> +#endif
> +
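One consequence of this aliasing worth flagging to users: the key is 16
bytes on 64-bit kernels but only 8 bytes on 32-bit ones, so key generation
has to use sizeof rather than a hardcoded size. A minimal sketch:

	hsiphash_key_t key;

	/* 16 bytes on 64-bit, 8 bytes on 32-bit -- never hardcode this */
	get_random_bytes(key, sizeof(key));
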
> +u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key);
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key);
> +#endif
> +
> +u32 hsiphash_1u32(const u32 a, const hsiphash_key_t key);
> +u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t key);
> +u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c,
> +                 const hsiphash_key_t key);
> +u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d,
> +                 const hsiphash_key_t key);
> +
> +static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, const hsiphash_key_t key)
> +{
> +       if (__builtin_constant_p(len) && len == 4)
> +               return hsiphash_1u32(le32_to_cpu(data[0]), key);
> +       if (__builtin_constant_p(len) && len == 8)
> +               return hsiphash_2u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
> +                                    key);
> +       if (__builtin_constant_p(len) && len == 12)
> +               return hsiphash_3u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
> +                                    le32_to_cpu(data[2]), key);
> +       if (__builtin_constant_p(len) && len == 16)
> +               return hsiphash_4u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]),
> +                                    le32_to_cpu(data[2]), le32_to_cpu(data[3]),
> +                                    key);
> +       return __hsiphash_aligned(data, len, key);
> +}
> +
> +/**
> + * hsiphash - compute 32-bit hsiphash PRF value
> + * @data: buffer to hash
> + * @len: length of @data, in bytes
> + * @key: the hsiphash key
> + */
> +static inline u32 hsiphash(const void *data, size_t len, const hsiphash_key_t key)
> +{
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +       if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
> +               return __hsiphash_unaligned(data, len, key);
> +#endif
> +       return ___hsiphash_aligned(data, len, key);
> +}
> +
>  #endif /* _LINUX_SIPHASH_H */
> diff --git a/lib/siphash.c b/lib/siphash.c
> index ff2151313667..e2481226d96c 100644
> --- a/lib/siphash.c
> +++ b/lib/siphash.c
> @@ -5,7 +5,9 @@
>   * SipHash: a fast short-input PRF
>   * https://131002.net/siphash/
>   *
> - * This implementation is specifically for SipHash2-4.
> + * This implementation is specifically for SipHash2-4 for a secure PRF
> + * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
> + * hashtables.
>   */
>
>  #include <linux/siphash.h>
> @@ -230,3 +232,317 @@ u64 siphash_3u32(const u32 first, const u32 second, const u32 third,
>         POSTAMBLE
>  }
>  EXPORT_SYMBOL(siphash_3u32);
> +
> +#if BITS_PER_LONG == 64
> +/* Note that this HalfSipHash1-3 implementation on 64-bit
> + * isn't actually HalfSipHash1-3 but rather SipHash1-3. */
> +
> +#define HSIPROUND SIPROUND
> +#define HPREAMBLE(len) PREAMBLE(len)
> +#define HPOSTAMBLE \
> +       v3 ^= b; \
> +       HSIPROUND; \
> +       v0 ^= b; \
> +       v2 ^= 0xff; \
> +       HSIPROUND; \
> +       HSIPROUND; \
> +       HSIPROUND; \
> +       return (v0 ^ v1) ^ (v2 ^ v3);
> +
> +u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key)
> +{
> +       const u8 *end = data + len - (len % sizeof(u64));
> +       const u8 left = len & (sizeof(u64) - 1);
> +       u64 m;
> +       HPREAMBLE(len)
> +       for (; data != end; data += sizeof(u64)) {
> +               m = le64_to_cpup(data);
> +               v3 ^= m;
> +               HSIPROUND;
> +               v0 ^= m;
> +       }
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> +       if (left)
> +               b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
> +                                                 bytemask_from_count(left)));
> +#else
> +       switch (left) {
> +       case 7: b |= ((u64)end[6]) << 48;
> +       case 6: b |= ((u64)end[5]) << 40;
> +       case 5: b |= ((u64)end[4]) << 32;
> +       case 4: b |= le32_to_cpup(data); break;
> +       case 3: b |= ((u64)end[2]) << 16;
> +       case 2: b |= le16_to_cpup(data); break;
> +       case 1: b |= end[0];
> +       }
> +#endif
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(__hsiphash_aligned);
> +
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key)
> +{
> +       const u8 *end = data + len - (len % sizeof(u64));
> +       const u8 left = len & (sizeof(u64) - 1);
> +       u64 m;
> +       HPREAMBLE(len)
> +       for (; data != end; data += sizeof(u64)) {
> +               m = get_unaligned_le64(data);
> +               v3 ^= m;
> +               HSIPROUND;
> +               v0 ^= m;
> +       }
> +#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64
> +       if (left)
> +               b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) &
> +                                                 bytemask_from_count(left)));
> +#else
> +       switch (left) {
> +       case 7: b |= ((u64)end[6]) << 48;
> +       case 6: b |= ((u64)end[5]) << 40;
> +       case 5: b |= ((u64)end[4]) << 32;
> +       case 4: b |= get_unaligned_le32(end); break;
> +       case 3: b |= ((u64)end[2]) << 16;
> +       case 2: b |= get_unaligned_le16(end); break;
> +       case 1: b |= end[0];
> +       }
> +#endif
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(__hsiphash_unaligned);
> +#endif
> +
> +/**
> + * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
> + * @first: first u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_1u32(const u32 first, const hsiphash_key_t key)
> +{
> +       HPREAMBLE(4)
> +       b |= first;
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_1u32);
> +
> +/**
> + * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
> + * @first: first u32
> + * @second: second u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t key)
> +{
> +       u64 combined = (u64)second << 32 | first;
> +       HPREAMBLE(8)
> +       v3 ^= combined;
> +       HSIPROUND;
> +       v0 ^= combined;
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_2u32);
> +
> +/**
> + * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
> + * @first: first u32
> + * @second: second u32
> + * @third: third u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
> +                 const hsiphash_key_t key)
> +{
> +       u64 combined = (u64)second << 32 | first;
> +       HPREAMBLE(12)
> +       v3 ^= combined;
> +       HSIPROUND;
> +       v0 ^= combined;
> +       b |= third;
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_3u32);
> +
> +/**
> + * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
> + * @first: first u32
> + * @second: second u32
> + * @third: third u32
> + * @forth: fourth u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
> +                 const u32 forth, const hsiphash_key_t key)
> +{
> +       u64 combined = (u64)second << 32 | first;
> +       HPREAMBLE(16)
> +       v3 ^= combined;
> +       HSIPROUND;
> +       v0 ^= combined;
> +       combined = (u64)forth << 32 | third;
> +       v3 ^= combined;
> +       HSIPROUND;
> +       v0 ^= combined;
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_4u32);
> +#else
> +#define HSIPROUND \
> +       do { \
> +       v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
> +       v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
> +       v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
> +       v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
> +       } while(0)
> +
> +#define HPREAMBLE(len) \
> +       u32 v0 = 0; \
> +       u32 v1 = 0; \
> +       u32 v2 = 0x6c796765U; \
> +       u32 v3 = 0x74656462U; \
> +       u32 b = ((u32)len) << 24; \
> +       v3 ^= key[1]; \
> +       v2 ^= key[0]; \
> +       v1 ^= key[1]; \
> +       v0 ^= key[0];
> +
> +#define HPOSTAMBLE \
> +       v3 ^= b; \
> +       HSIPROUND; \
> +       v0 ^= b; \
> +       v2 ^= 0xff; \
> +       HSIPROUND; \
> +       HSIPROUND; \
> +       HSIPROUND; \
> +       return v1 ^ v3;
> +
> +u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t key)
> +{
> +       const u8 *end = data + len - (len % sizeof(u32));
> +       const u8 left = len & (sizeof(u32) - 1);
> +       u32 m;
> +       HPREAMBLE(len)
> +       for (; data != end; data += sizeof(u32)) {
> +               m = le32_to_cpup(data);
> +               v3 ^= m;
> +               HSIPROUND;
> +               v0 ^= m;
> +       }
> +       switch (left) {
> +       case 3: b |= ((u32)end[2]) << 16;
> +       case 2: b |= le16_to_cpup(data); break;
> +       case 1: b |= end[0];
> +       }
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(__hsiphash_aligned);
> +
> +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +u32 __hsiphash_unaligned(const void *data, size_t len, const hsiphash_key_t key)
> +{
> +       const u8 *end = data + len - (len % sizeof(u32));
> +       const u8 left = len & (sizeof(u32) - 1);
> +       u32 m;
> +       HPREAMBLE(len)
> +       for (; data != end; data += sizeof(u32)) {
> +               m = get_unaligned_le32(data);
> +               v3 ^= m;
> +               HSIPROUND;
> +               v0 ^= m;
> +       }
> +       switch (left) {
> +       case 3: b |= ((u32)end[2]) << 16;
> +       case 2: b |= get_unaligned_le16(end); break;
> +       case 1: b |= end[0];
> +       }
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(__hsiphash_unaligned);
> +#endif
> +
> +/**
> + * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
> + * @first: first u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_1u32(const u32 first, const hsiphash_key_t key)
> +{
> +       HPREAMBLE(4)
> +       v3 ^= first;
> +       HSIPROUND;
> +       v0 ^= first;
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_1u32);
> +
> +/**
> + * hsiphash_2u32 - compute 32-bit hsiphash PRF value of 2 u32
> + * @first: first u32
> + * @second: second u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_2u32(const u32 first, const u32 second, const hsiphash_key_t key)
> +{
> +       HPREAMBLE(8)
> +       v3 ^= first;
> +       HSIPROUND;
> +       v0 ^= first;
> +       v3 ^= second;
> +       HSIPROUND;
> +       v0 ^= second;
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_2u32);
> +
> +/**
> + * hsiphash_3u32 - compute 32-bit hsiphash PRF value of 3 u32
> + * @first: first u32
> + * @second: second u32
> + * @third: third u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_3u32(const u32 first, const u32 second, const u32 third,
> +                 const hsiphash_key_t key)
> +{
> +       HPREAMBLE(12)
> +       v3 ^= first;
> +       HSIPROUND;
> +       v0 ^= first;
> +       v3 ^= second;
> +       HSIPROUND;
> +       v0 ^= second;
> +       v3 ^= third;
> +       HSIPROUND;
> +       v0 ^= third;
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_3u32);
> +
> +/**
> + * hsiphash_4u32 - compute 32-bit hsiphash PRF value of 4 u32
> + * @first: first u32
> + * @second: second u32
> + * @third: third u32
> + * @forth: fourth u32
> + * @key: the hsiphash key
> + */
> +u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
> +                 const u32 forth, const hsiphash_key_t key)
> +{
> +       HPREAMBLE(16)
> +       v3 ^= first;
> +       HSIPROUND;
> +       v0 ^= first;
> +       v3 ^= second;
> +       HSIPROUND;
> +       v0 ^= second;
> +       v3 ^= third;
> +       HSIPROUND;
> +       v0 ^= third;
> +       v3 ^= forth;
> +       HSIPROUND;
> +       v0 ^= forth;
> +       HPOSTAMBLE
> +}
> +EXPORT_SYMBOL(hsiphash_4u32);
> +#endif
> diff --git a/lib/test_siphash.c b/lib/test_siphash.c
> index e0ba2cf8dc67..ac291ec27fb6 100644
> --- a/lib/test_siphash.c
> +++ b/lib/test_siphash.c
> @@ -7,7 +7,9 @@
>   * SipHash: a fast short-input PRF
>   * https://131002.net/siphash/
>   *
> - * This implementation is specifically for SipHash2-4.
> + * This implementation is specifically for SipHash2-4 for a secure PRF
> + * and HalfSipHash1-3/SipHash1-3 for an insecure PRF only suitable for
> + * hashtables.
>   */
>
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> @@ -18,10 +20,16 @@
>  #include <linux/errno.h>
>  #include <linux/module.h>
>
> -/* Test vectors taken from official reference source available at:
> - *     https://131002.net/siphash/siphash24.c
> +/* Test vectors taken from reference source available at:
> + *     https://github.com/veorq/SipHash
>   */
> -static const u64 test_vectors[64] = {
> +
> +
> +
> +static const siphash_key_t test_key_siphash =
> +       { 0x0706050403020100ULL , 0x0f0e0d0c0b0a0908ULL };
> +
> +static const u64 test_vectors_siphash[64] = {
>         0x726fdb47dd0e0e31ULL, 0x74f839c593dc67fdULL, 0x0d6c8009d9a94f5aULL,
>         0x85676696d7fb7e2dULL, 0xcf2794e0277187b7ULL, 0x18765564cd99a68dULL,
>         0xcbc9466e58fee3ceULL, 0xab0200f58b01d137ULL, 0x93f5f5799a932462ULL,
> @@ -45,9 +53,64 @@ static const u64 test_vectors[64] = {
>         0x6ca4ecb15c5f91e1ULL, 0x9f626da15c9625f3ULL, 0xe51b38608ef25f57ULL,
>         0x958a324ceb064572ULL
>  };
> -static const siphash_key_t test_key =
> +#if BITS_PER_LONG == 64
> +static const hsiphash_key_t test_key_hsiphash =
>         { 0x0706050403020100ULL , 0x0f0e0d0c0b0a0908ULL };
>
> +static const u32 test_vectors_hsiphash[64] = {
> +       0x050fc4dcU, 0x7d57ca93U, 0x4dc7d44dU,
> +       0xe7ddf7fbU, 0x88d38328U, 0x49533b67U,
> +       0xc59f22a7U, 0x9bb11140U, 0x8d299a8eU,
> +       0x6c063de4U, 0x92ff097fU, 0xf94dc352U,
> +       0x57b4d9a2U, 0x1229ffa7U, 0xc0f95d34U,
> +       0x2a519956U, 0x7d908b66U, 0x63dbd80cU,
> +       0xb473e63eU, 0x8d297d1cU, 0xa6cce040U,
> +       0x2b45f844U, 0xa320872eU, 0xdae6c123U,
> +       0x67349c8cU, 0x705b0979U, 0xca9913a5U,
> +       0x4ade3b35U, 0xef6cd00dU, 0x4ab1e1f4U,
> +       0x43c5e663U, 0x8c21d1bcU, 0x16a7b60dU,
> +       0x7a8ff9bfU, 0x1f2a753eU, 0xbf186b91U,
> +       0xada26206U, 0xa3c33057U, 0xae3a36a1U,
> +       0x7b108392U, 0x99e41531U, 0x3f1ad944U,
> +       0xc8138825U, 0xc28949a6U, 0xfaf8876bU,
> +       0x9f042196U, 0x68b1d623U, 0x8b5114fdU,
> +       0xdf074c46U, 0x12cc86b3U, 0x0a52098fU,
> +       0x9d292f9aU, 0xa2f41f12U, 0x43a71ed0U,
> +       0x73f0bce6U, 0x70a7e980U, 0x243c6d75U,
> +       0xfdb71513U, 0xa67d8a08U, 0xb7e8f148U,
> +       0xf7a644eeU, 0x0f1837f2U, 0x4b6694e0U,
> +       0xb7bbb3a8U
> +};
> +#else
> +static const hsiphash_key_t test_key_hsiphash =
> +       { 0x03020100U, 0x07060504U };
> +
> +static const u32 test_vectors_hsiphash[64] = {
> +       0x5814c896U, 0xe7e864caU, 0xbc4b0e30U,
> +       0x01539939U, 0x7e059ea6U, 0x88e3d89bU,
> +       0xa0080b65U, 0x9d38d9d6U, 0x577999b1U,
> +       0xc839caedU, 0xe4fa32cfU, 0x959246eeU,
> +       0x6b28096cU, 0x66dd9cd6U, 0x16658a7cU,
> +       0xd0257b04U, 0x8b31d501U, 0x2b1cd04bU,
> +       0x06712339U, 0x522aca67U, 0x911bb605U,
> +       0x90a65f0eU, 0xf826ef7bU, 0x62512debU,
> +       0x57150ad7U, 0x5d473507U, 0x1ec47442U,
> +       0xab64afd3U, 0x0a4100d0U, 0x6d2ce652U,
> +       0x2331b6a3U, 0x08d8791aU, 0xbc6dda8dU,
> +       0xe0f6c934U, 0xb0652033U, 0x9b9851ccU,
> +       0x7c46fb7fU, 0x732ba8cbU, 0xf142997aU,
> +       0xfcc9aa1bU, 0x05327eb2U, 0xe110131cU,
> +       0xf9e5e7c0U, 0xa7d708a6U, 0x11795ab1U,
> +       0x65671619U, 0x9f5fff91U, 0xd89c5267U,
> +       0x007783ebU, 0x95766243U, 0xab639262U,
> +       0x9c7e1390U, 0xc368dda6U, 0x38ddc455U,
> +       0xfa13d379U, 0x979ea4e8U, 0x53ecd77eU,
> +       0x2ee80657U, 0x33dbb66aU, 0xae3f0577U,
> +       0x88b4c4ccU, 0x3e7f480bU, 0x74c1ebf8U,
> +       0x87178304U
> +};
> +#endif
> +
>  static int __init siphash_test_init(void)
>  {
>         u8 in[64] __aligned(SIPHASH_ALIGNMENT);
> @@ -58,49 +121,75 @@ static int __init siphash_test_init(void)
>         for (i = 0; i < 64; ++i) {
>                 in[i] = i;
>                 in_unaligned[i + 1] = i;
> -               if (siphash(in, i, test_key) != test_vectors[i]) {
> -                       pr_info("self-test aligned %u: FAIL\n", i + 1);
> +               if (siphash(in, i, test_key_siphash) != test_vectors_siphash[i]) {
> +                       pr_info("siphash self-test aligned %u: FAIL\n", i + 1);
> +                       ret = -EINVAL;
> +               }
> +               if (siphash(in_unaligned + 1, i, test_key_siphash) != test_vectors_siphash[i]) {
> +                       pr_info("siphash self-test unaligned %u: FAIL\n", i + 1);
>                         ret = -EINVAL;
>                 }
> -               if (siphash(in_unaligned + 1, i, test_key) != test_vectors[i]) {
> -                       pr_info("self-test unaligned %u: FAIL\n", i + 1);
> +               if (hsiphash(in, i, test_key_hsiphash) != test_vectors_hsiphash[i]) {
> +                       pr_info("hsiphash self-test aligned %u: FAIL\n", i + 1);
> +                       ret = -EINVAL;
> +               }
> +               if (hsiphash(in_unaligned + 1, i, test_key_hsiphash) != test_vectors_hsiphash[i]) {
> +                       pr_info("hsiphash self-test unaligned %u: FAIL\n", i + 1);
>                         ret = -EINVAL;
>                 }
>         }
> -       if (siphash_1u64(0x0706050403020100ULL, test_key) != test_vectors[8]) {
> -               pr_info("self-test 1u64: FAIL\n");
> +       if (siphash_1u64(0x0706050403020100ULL, test_key_siphash) != test_vectors_siphash[8]) {
> +               pr_info("siphash self-test 1u64: FAIL\n");
>                 ret = -EINVAL;
>         }
> -       if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key) != test_vectors[16]) {
> -               pr_info("self-test 2u64: FAIL\n");
> +       if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, test_key_siphash) != test_vectors_siphash[16]) {
> +               pr_info("siphash self-test 2u64: FAIL\n");
>                 ret = -EINVAL;
>         }
>         if (siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
> -                        0x1716151413121110ULL, test_key) != test_vectors[24]) {
> -               pr_info("self-test 3u64: FAIL\n");
> +                        0x1716151413121110ULL, test_key_siphash) != test_vectors_siphash[24]) {
> +               pr_info("siphash self-test 3u64: FAIL\n");
>                 ret = -EINVAL;
>         }
>         if (siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
> -                        0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key) != test_vectors[32]) {
> -               pr_info("self-test 4u64: FAIL\n");
> +                        0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL, test_key_siphash) != test_vectors_siphash[32]) {
> +               pr_info("siphash self-test 4u64: FAIL\n");
>                 ret = -EINVAL;
>         }
> -       if (siphash_1u32(0x03020100U, test_key) != test_vectors[4]) {
> -               pr_info("self-test 1u32: FAIL\n");
> +       if (siphash_1u32(0x03020100U, test_key_siphash) != test_vectors_siphash[4]) {
> +               pr_info("siphash self-test 1u32: FAIL\n");
>                 ret = -EINVAL;
>         }
> -       if (siphash_2u32(0x03020100U, 0x07060504U, test_key) != test_vectors[8]) {
> -               pr_info("self-test 2u32: FAIL\n");
> +       if (siphash_2u32(0x03020100U, 0x07060504U, test_key_siphash) != test_vectors_siphash[8]) {
> +               pr_info("siphash self-test 2u32: FAIL\n");
>                 ret = -EINVAL;
>         }
>         if (siphash_3u32(0x03020100U, 0x07060504U,
> -                        0x0b0a0908U, test_key) != test_vectors[12]) {
> -               pr_info("self-test 3u32: FAIL\n");
> +                        0x0b0a0908U, test_key_siphash) != test_vectors_siphash[12]) {
> +               pr_info("siphash self-test 3u32: FAIL\n");
>                 ret = -EINVAL;
>         }
>         if (siphash_4u32(0x03020100U, 0x07060504U,
> -                        0x0b0a0908U, 0x0f0e0d0cU, test_key) != test_vectors[16]) {
> -               pr_info("self-test 4u32: FAIL\n");
> +                        0x0b0a0908U, 0x0f0e0d0cU, test_key_siphash) != test_vectors_siphash[16]) {
> +               pr_info("siphash self-test 4u32: FAIL\n");
> +               ret = -EINVAL;
> +       }
> +       if (hsiphash_1u32(0x03020100U, test_key_hsiphash) != test_vectors_hsiphash[4]) {
> +               pr_info("hsiphash self-test 1u32: FAIL\n");
> +               ret = -EINVAL;
> +       }
> +       if (hsiphash_2u32(0x03020100U, 0x07060504U, test_key_hsiphash) != test_vectors_hsiphash[8]) {
> +               pr_info("hsiphash self-test 2u32: FAIL\n");
> +               ret = -EINVAL;
> +       }
> +       if (hsiphash_3u32(0x03020100U, 0x07060504U,
> +                         0x0b0a0908U, test_key_hsiphash) != test_vectors_hsiphash[12]) {
> +               pr_info("hsiphash self-test 3u32: FAIL\n");
> +               ret = -EINVAL;
> +       }
> +       if (hsiphash_4u32(0x03020100U, 0x07060504U,
> +                         0x0b0a0908U, 0x0f0e0d0cU, test_key_hsiphash) != test_vectors_hsiphash[16]) {
> +               pr_info("hsiphash self-test 4u32: FAIL\n");
>                 ret = -EINVAL;
>         }
>         if (!ret)
> --
> 2.11.0
>

Looks good to me (co-designer of SipHash).

Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-06 20:32       ` Jean-Philippe Aumasson
@ 2017-01-06 20:41         ` David Miller
  0 siblings, 0 replies; 18+ messages in thread
From: David Miller @ 2017-01-06 20:41 UTC (permalink / raw)
  To: jeanphilippe.aumasson
  Cc: Jason, netdev, linux-kernel, torvalds, ebiggers3, David.Laight,
	eric.dumazet


Please do not quote an entire large patch just to make a small comment or
annotation.  Doing so forces every reader of your posting to scroll through
a large amount of quoted material just to reach a small amount of new
content.

Simply trim the quoted material down to the context that is actually
required, and then add the content you wish.

I do realize that a lot of email clients make this difficult, but that is
no excuse for not maintaining this basic level of consideration for the
other people on this mailing list.

Thanks.


* Re: [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-06 20:10     ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
  2017-01-06 20:32       ` Jean-Philippe Aumasson
@ 2017-01-07  4:04       ` Eric Biggers
  2017-01-07 13:11         ` Jason A. Donenfeld
  2017-01-12 15:04         ` Herbert Xu
  1 sibling, 2 replies; 18+ messages in thread
From: Eric Biggers @ 2017-01-07  4:04 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: davem, netdev, linux-kernel, Jean-Philippe Aumasson,
	Linus Torvalds, David Laight, Eric Dumazet

Hi Jason, just a few comments:

On Fri, Jan 06, 2017 at 09:10:52PM +0100, Jason A. Donenfeld wrote:
> +#define SIPHASH_ALIGNMENT __alignof__(u64)
> +typedef u64 siphash_key_t[2];

I was confused by all the functions passing siphash_key_t "by value" until I saw
that it's actually typedefed to u64[2].  Have you considered making it a struct
instead, something like this?

typedef struct {
	u64 v[2];
} siphash_key_t;

Then it would be strongly typed and thus harder to misuse, and all the functions
would take 'const siphash_key_t *' instead of 'const siphash_key_t', which would
make it clear that the key is passed by pointer, not by value.
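
To illustrate at the API level, a hypothetical before/after for one of the
helpers (a sketch; the exact prototypes in the patch may differ, and the
member name 'v' is just from the suggestion above):

	/* current: array typedef, silently decays to a pointer */
	u64 siphash_1u32(const u32 a, const siphash_key_t key);

	/* suggested: struct typedef, pass-by-pointer is explicit */
	u64 siphash_1u32(const u32 a, const siphash_key_t *key);

Inside the implementation the key words would then be accessed as
key->v[0] and key->v[1] rather than key[0] and key[1].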

> +static inline u64 ___siphash_aligned(const __le64 *data, size_t len, const siphash_key_t key)
> +{
> +	if (__builtin_constant_p(len) && len == 4)
> +		return siphash_1u32(le32_to_cpu(data[0]), key);

Small bug here: data[0] is not valid if len is 4.  This can be fixed by casting
to a le32 pointer:

		return siphash_1u32(le32_to_cpup((const __le32 *)data), key);
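
To spell out why the cast matters, a standalone sketch (not taken from
the patch):

	/* data has type 'const __le64 *', but the buffer holds only 4 bytes */
	u64 wide = le64_to_cpu(data[0]);                 /* dereference reads 8 bytes: out of bounds */
	u32 narrow = le32_to_cpup((const __le32 *)data); /* reads exactly 4 bytes */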
	
> +static int __init siphash_test_init(void)
> +{
> +	u8 in[64] __aligned(SIPHASH_ALIGNMENT);
> +	u8 in_unaligned[65];

It seems that in_unaligned+1 is meant to be misaligned, but that's not
guaranteed: in_unaligned itself has no alignment restriction, so it could
theoretically land at an address that makes in_unaligned+1 aligned.  It
should therefore be 'in_unaligned[65] __aligned(SIPHASH_ALIGNMENT)'.
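
Concretely, the declarations would then read (a sketch):

	u8 in[64] __aligned(SIPHASH_ALIGNMENT);
	u8 in_unaligned[65] __aligned(SIPHASH_ALIGNMENT);

	/* in_unaligned now starts on an aligned boundary, so in_unaligned + 1
	 * is guaranteed to be misaligned with respect to SIPHASH_ALIGNMENT. */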

There are also a lot of checkpatch warnings produced by this patch.  It looks
like many of them can be ignored, but there may be some that should be
addressed.

- Eric


* Re: [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-07  4:04       ` Eric Biggers
@ 2017-01-07 13:11         ` Jason A. Donenfeld
  2017-01-12 15:04         ` Herbert Xu
  1 sibling, 0 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-07 13:11 UTC (permalink / raw)
  To: Eric Biggers
  Cc: David Miller, Netdev, LKML, Jean-Philippe Aumasson,
	Linus Torvalds, David Laight, Eric Dumazet

Hi Eric,

Thanks for the review. I wish this had come much earlier, before the
merge, while the series was still going through revisions and
refinements; better late than never, though, and I'm pleased to have
your feedback for polishing this patchset. Comments are inline below.

On Sat, Jan 7, 2017 at 5:04 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
> I was confused by all the functions passing siphash_key_t "by value" until I saw
> that it's actually typedefed to u64[2].  Have you considered making it a struct
> instead, something like this?
>
> typedef struct {
>         u64 v[2];
> } siphash_key_t;

That's a good idea. I'll make this change.

>
>> +static inline u64 ___siphash_aligned(const __le64 *data, size_t len, const siphash_key_t key)
>> +{
>> +     if (__builtin_constant_p(len) && len == 4)
>> +             return siphash_1u32(le32_to_cpu(data[0]), key);
>
> Small bug here: data[0] is not valid if len is 4.  This can be fixed by casting
> to a le32 pointer:
>
>                 return siphash_1u32(le32_to_cpup((const __le32 *)data), key);

Since data[0] is then passed to a function that takes a u32, gcc
actually does the right thing here and doesn't generate an out-of-bounds
read. But of course that isn't behavior we should rely on. I'll fix this
up. Thanks for catching it.

>
>> +static int __init siphash_test_init(void)
>> +{
>> +     u8 in[64] __aligned(SIPHASH_ALIGNMENT);
>> +     u8 in_unaligned[65];
>
> It seems that in_unaligned+1 is meant to be misaligned, but that's not
> guaranteed: in_unaligned itself has no alignment restriction, so it could
> theoretically land at an address that makes in_unaligned+1 aligned.  It
> should therefore be 'in_unaligned[65] __aligned(SIPHASH_ALIGNMENT)'.

Thanks, will do.

>
> There are also a lot of checkpatch warnings produced by this patch.  It looks
> like many of them can be ignored, but there may be some that should be
> addressed.

Will have a look and address for v2.

Thanks again for your review!

Regards,
Jason


* Re: [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-07  4:04       ` Eric Biggers
  2017-01-07 13:11         ` Jason A. Donenfeld
@ 2017-01-12 15:04         ` Herbert Xu
  2017-01-12 18:30           ` Jason A. Donenfeld
  1 sibling, 1 reply; 18+ messages in thread
From: Herbert Xu @ 2017-01-12 15:04 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Jason, davem, netdev, linux-kernel, jeanphilippe.aumasson,
	torvalds, David.Laight, eric.dumazet

Eric Biggers <ebiggers3@gmail.com> wrote:
> Hi Jason, just a few comments:
> 
> On Fri, Jan 06, 2017 at 09:10:52PM +0100, Jason A. Donenfeld wrote:
>> +#define SIPHASH_ALIGNMENT __alignof__(u64)
>> +typedef u64 siphash_key_t[2];
> 
> I was confused by all the functions passing siphash_key_t "by value" until I saw
> that it's actually typedefed to u64[2].  Have you considered making it a struct
> instead, something like this?
> 
> typedef struct {
>        u64 v[2];
> } siphash_key_t;

If it's just a 128-bit value then we have u128 in crypto/b128ops.h
that could be generalised for this.
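
For reference, the definition there looks roughly like this (quoting from
memory, so double-check the header):

	typedef struct {
		u64 a, b;
	} u128;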

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH net-next 1/4] siphash: add cryptographically secure PRF
  2017-01-12 15:04         ` Herbert Xu
@ 2017-01-12 18:30           ` Jason A. Donenfeld
  0 siblings, 0 replies; 18+ messages in thread
From: Jason A. Donenfeld @ 2017-01-12 18:30 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Eric Biggers, David Miller, Netdev, LKML, Jean-Philippe Aumasson,
	Linus Torvalds, David Laight, Eric Dumazet

On Thu, Jan 12, 2017 at 4:04 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>> typedef struct {
>>        u64 v[2];
>> } siphash_key_t;
>
> If it's just a 128-bit value then we have u128 in crypto/b128ops.h
> that could be generalised for this.

Nope, it's actually two 64-bit values. Yes, the user fills it in as
one blob via get_random_bytes, but internally the algorithm uses it as
two distinct variables (which conveniently fit into 64-bit registers).
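
For reference, this is how the two halves enter the state in the standard
SipHash construction (a sketch using the published initialization
constants, not a quote from the patch):

	u64 v0 = 0x736f6d6570736575ULL ^ key[0];
	u64 v1 = 0x646f72616e646f6dULL ^ key[1];
	u64 v2 = 0x6c7967656e657261ULL ^ key[0];
	u64 v3 = 0x7465646279746573ULL ^ key[1];

So a generic u128 wrapper would just be split back into its two halves
at this point anyway.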


end of thread (newest: 2017-01-12 18:30 UTC)

Thread overview: 18+ messages
2017-01-06 18:37 [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
2017-01-06 18:37 ` [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables Jason A. Donenfeld
2017-01-06 18:37 ` [PATCH net-next 3/4] secure_seq: use SipHash in place of MD5 Jason A. Donenfeld
2017-01-06 18:37 ` [PATCH net-next 4/4] syncookies: use SipHash in place of SHA1 Jason A. Donenfeld
2017-01-06 19:57 ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF David Miller
2017-01-06 20:03   ` Jason A. Donenfeld
2017-01-06 20:10   ` [PATCH net-next 0/4] Introduce The SipHash PRF Jason A. Donenfeld
2017-01-06 20:10     ` [PATCH net-next 1/4] siphash: add cryptographically secure PRF Jason A. Donenfeld
2017-01-06 20:32       ` Jean-Philippe Aumasson
2017-01-06 20:41         ` David Miller
2017-01-07  4:04       ` Eric Biggers
2017-01-07 13:11         ` Jason A. Donenfeld
2017-01-12 15:04         ` Herbert Xu
2017-01-12 18:30           ` Jason A. Donenfeld
2017-01-06 20:10     ` [PATCH net-next 2/4] siphash: implement HalfSipHash1-3 for hash tables Jason A. Donenfeld
2017-01-06 20:33       ` Jean-Philippe Aumasson
2017-01-06 20:10     ` [PATCH net-next 3/4] secure_seq: use SipHash in place of MD5 Jason A. Donenfeld
2017-01-06 20:10     ` [PATCH net-next 4/4] syncookies: use SipHash in place of SHA1 Jason A. Donenfeld
