linux-kernel.vger.kernel.org archive mirror
* Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism
       [not found] <CA+55aFxBWfAHQNAdBbdVr+z8ror4GVteyce3D3=vwDWxhu5KqQ@mail.gmail.com>
@ 2016-04-30 20:52 ` George Spelvin
  2016-05-01  8:35   ` Thomas Gleixner
  0 siblings, 1 reply; 21+ messages in thread
From: George Spelvin @ 2016-04-30 20:52 UTC (permalink / raw)
  To: tglx; +Cc: eric.dumazet, linux, linux-kernel, riel, torvalds

Thomas Gleixner wrote:
> I'll send a patch to replace hash_64 and hash_32.

Before you do that, could we look for a way to tweak the constants
in the existing hash?

It seems the basic "take the high bits of x * K" algorithm is actually
a decent hash function if K is chosen properly, and has a significant
speed advantage on machines with half-decent multipliers.
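
In code, the whole thing is just (a sketch to fix ideas; the function name is
only for illustration, and K is the 64-bit constant I recommend below):

	static inline u32 mult_hash_64(u64 x, unsigned int bits)
	{
		/* Multiply by a fixed odd constant, keep the high bits. */
		return (u32)((x * 0x61C8864680B583EBull) >> (64 - bits));
	}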

I'm researching how to do the multiply with fewer shifts and adds on
machines that need it.  (Or we could use a totally different function
in that case.)

You say that
> hash64 is slightly faster as the modulo prime as it does not have the
> multiplication.

Um... are you sure you benchmarked that right?  The hash_64 code you
used (Thomas Wang's 64->32-bit hash) has a critical path consisting of 6
shifts and 7 adds.  I can't believe that's faster than a single multiply.

For 1,000,000 iterations on an Ivy Bridge, the multiply is 4x
faster (5x if out of line) for me!

The constants I recommend are
#define GOLDEN_RATIO_64 0x61C8864680B583EBull
#define GOLDEN_RATIO_32 0x61C88647


rdtsc times for 1,000,000 iterations of each of the two.
(The sum of all hashes is printed to prevent dead code elimination.)

  hash_64  (sum)      * PHI   (sum)      hash_64  (sum)      * PHI   (sum)
  17552154 a52752df   3431821 2ce5398c   17485381 a52752df   3375535 2ce5398c
  17522273 a52752df   3487206 2ce5398c   17551217 a52752df   3374221 2ce5398c
  17546242 a52752df   3377355 2ce5398c   17494306 a52752df   3374202 2ce5398c
  17495702 a52752df   3409768 2ce5398c   17505839 a52752df   3398205 2ce5398c
  17501114 a52752df   3375435 2ce5398c   17539388 a52752df   3374202 2ce5398c
And with hash_64 forced inline:
  13596945 a52752df   3374931 2ce5398c   13585916 a52752df   3411107 2ce5398c
  13564616 a52752df   3374928 2ce5398c   13573465 a52752df   3425160 2ce5398c
  13569712 a52752df   3374915 2ce5398c   13580461 a52752df   3397773 2ce5398c
  13577481 a52752df   3374912 2ce5398c   13558708 a52752df   3417456 2ce5398c
  13569044 a52752df   3374909 2ce5398c   13557193 a52752df   3407912 2ce5398c

That's 3.5 cycles vs. 13.5.

(I actually have two copies of the inlined code, to show code alignment
issues.)

On a Phenom, it's worse, 4 cycles vs. 35.
  35083119 a52752df   4020754 2ce5398c   35068116 a52752df   4015659 2ce5398c
  35074377 a52752df   4000819 2ce5398c   35068735 a52752df   4016943 2ce5398c
  35067596 a52752df   4025397 2ce5398c   35074365 a52752df   4000108 2ce5398c
  35071050 a52752df   4016190 2ce5398c   35058775 a52752df   4017988 2ce5398c
  35055091 a52752df   4000066 2ce5398c   35201158 a52752df   4000094 2ce5398c




My simple test code appended for anyone who cares...

#include <stdint.h>
#include <stdio.h>

/*  Phi = 0x0.9E3779B97F4A7C15F... */
/* -Phi = 0x0.61C8864680B583EA1... */
#define K 0x61C8864680B583EBull
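
/*
 * hash1 is Thomas Wang's 64->32-bit integer hash (the "hash_64" being
 * benchmarked above); hash2 multiplies by K and keeps the high 32 bits.
 */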

static inline uint32_t hash1(uint64_t key)
{
       key  = ~key + (key << 18);
       key ^= key >> 31;
       key += (key << 2) + (key << 4);
       key ^= key >> 11;
       key += key << 6;
       key ^= key >> 22;
       return (uint32_t)key;
}

static inline uint32_t hash2(uint64_t key)
{
	return (uint32_t)(key * K >> 32);
}

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return (uint64_t)hi << 32 | lo;
}

int
main(void)
{
	int i, j;
	uint32_t sum, sums[20];
	uint64_t start, times[20];

	for (i = 0; i < 20; i += 4) {
		sum = 0;
		start = rdtsc();
		for (j = 0; j < 1000000; j++)
			sum += hash1(j+0xdeadbeef);
		times[i] = rdtsc() - start;
		sums[i] = sum;

		sum = 0;
		start = rdtsc();
		for (j = 0; j < 1000000; j++)
			sum += hash2(j+0xdeadbeef);
		times[i+1] = rdtsc() - start;
		sums[i+1] = sum;

		sum = 0;
		start = rdtsc();
		for (j = 0; j < 1000000; j++)
			sum += hash1(j+0xdeadbeef);
		times[i+2] = rdtsc() - start;
		sums[i+2] = sum;

		sum = 0;
		start = rdtsc();
		for (j = 0; j < 1000000; j++)
			sum += hash2(j+0xdeadbeef);
		times[i+3] = rdtsc() - start;
		sums[i+3] = sum;
	}
	for (i = 0; i < 20; i++)
		printf("  %llu %08x%c",
			times[i], sums[i], (~i & 3) ? ' ' : '\n');
	return 0;
}


* Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism
  2016-04-30 20:52 ` [patch 2/7] lib/hashmod: Add modulo based hash mechanism George Spelvin
@ 2016-05-01  8:35   ` Thomas Gleixner
  2016-05-01  9:43     ` George Spelvin
  0 siblings, 1 reply; 21+ messages in thread
From: Thomas Gleixner @ 2016-05-01  8:35 UTC (permalink / raw)
  To: George Spelvin; +Cc: eric.dumazet, linux-kernel, riel, torvalds

On Sat, 30 Apr 2016, George Spelvin wrote:
> Thomas Gleixner wrote:
> You say that
> > hash64 is slightly faster as the modulo prime as it does not have the
> > multiplication.
> 
> Um... are you sure you benchmarked that right?  The hash_64 code you
> used (Thomas Wang's 64->32-bit hash) has a critical path consisting of 6
> shifts and 7 adds.  I can't believe that's faster than a single multiply.

Sorry I did not express myself clearly enough.

hash64 (the single multiply with the adjusted golden ratio) is slightly faster
than the modulo one, which has two multiplications.
 
So here is the list:

hash_64(): (key * GOLDEN_RATIO) >> (64 - bits)		31Mio Ops/sec

modulo:	   	  		       	 		28Mio Ops/sec

Thomas Wangs 64 -> 32 bit				21Mio Ops/sec

Thanks,

	tglx


* Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism
  2016-05-01  8:35   ` Thomas Gleixner
@ 2016-05-01  9:43     ` George Spelvin
  2016-05-01 16:51       ` Linus Torvalds
  2016-05-02  7:11       ` Thomas Gleixner
  0 siblings, 2 replies; 21+ messages in thread
From: George Spelvin @ 2016-05-01  9:43 UTC (permalink / raw)
  To: linux, tglx; +Cc: eric.dumazet, linux-kernel, riel, torvalds

> Sorry I did not express myself clearly enough.

> hash64 (the single multiply with the adjusted golden ratio) is
> slightly faster than the modulo one, which has two multiplications.

Yes, I figured out that was probably what you were talking about,
and benchmarked it similarly.

But I noticed a much greater difference.

 Wang      * PHI    % 4093   Wang      * PHI    % 4093
 13599486  3494816  5238442  13584166  3460266  5239463
 13589552  3466764  5237327  13582381  3422304  5276253
 13569409  3407317  5236239  13566738  3393215  5267909
 13575257  3413736  5237708  13596428  3379811  5280118
 13583607  3540416  5325609  13650964  3380301  5265210

At 3.7 GHz, that's 

* PHI:     1059 M ops/second
* Modulo:   706 M ops/second
* Wang:     271 M ops/second
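
(That is just 3.7e9 Hz divided by the per-call cost in the table above:
roughly 3.5, 5.2 and 13.6 cycles per hash, respectively.)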

Of course, that's a tight loop hashing; I presume your test case
has more overhead.
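
(For clarity, the "% 4093" column is nothing exotic -- roughly this, as a
sketch rather than the actual lib/hashmod code:

	static inline u32 hash_mod(u64 key)
	{
		/*
		 * gcc turns the % into a multiply-high by the fixed-point
		 * reciprocal plus a multiply-and-subtract, which is where
		 * the "two multiplications" come from.
		 */
		return (u32)(key % 4093);
	}
)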


Anyway, my recommendation (I'll write the patches if you like) is:

* Modify the multiplicative constants to be
  #define GOLDEN_RATIO_32 0x61C88647
  #define GOLDEN_RATIO_64 0x61C8864680B583EB

* Change the prototype of hash_64 to return u32.

* Create a separate 32-bit implementation of hash_64() for the
  BITS_PER_LONG < 64 case.  This should not be Wang's or anything
  similar because 64-bit shifts are slow and awkward on 32-bit
  machines.
  One option is something like __jhash_final(), but I think
  it will suffice to do:

  static __always_inline u32 hash_64(u64 val, unsigned int bits)
  {
	u32 hash = (u32)(val >> 32) * GOLDEN_RATIO_32 + (u32)val;
	hash *= GOLDEN_RATIO_32;
        return hash >> (32 - bits);
  }
  (S-o-b on that if you want it, of course.)

  (If you want it out of line, make a helper function that
  computes the 32-bit hash and have an inline wrapper that
  does the shifting; a sketch follows this list.)

* Eliminate the !CONFIG_ARCH_HAS_FAST_MULTIPLIER code path.  Once we've
  got rid of the 32-bit processors, we're left with 64-bit processors,
  and they *all* have hardware multipliers.  Even the MIPS R4000 (first
  64-bit processor ever) and Alpha 21064 had one.

  (Admittedly, the R4000 was 20 cycles, which is slower than Wang's hash,
  but 18 of them could overlap other instructions.)

  The worst multiply support is SPARCv9, which has 64x64-bit
  multiply with a 64-bit result, but no widening multiply.

* If you feel ambitious, add a 32-bit CONFIG_ARCH_HAS_SLOW_MULTIPLIER
  exception path.
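
(The out-of-line variant mentioned in the hash_64() item above, as an
untested sketch; the helper name is just a placeholder:

	u32 __hash_64_32(u64 val);	/* out of line: does the two multiplies */

	static __always_inline u32 hash_64(u64 val, unsigned int bits)
	{
		return __hash_64_32(val) >> (32 - bits);
	}
)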


* Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism
  2016-05-01  9:43     ` George Spelvin
@ 2016-05-01 16:51       ` Linus Torvalds
  2016-05-14  3:54         ` George Spelvin
  2016-05-02  7:11       ` Thomas Gleixner
  1 sibling, 1 reply; 21+ messages in thread
From: Linus Torvalds @ 2016-05-01 16:51 UTC (permalink / raw)
  To: George Spelvin, David Woodhouse, Chris Mason
  Cc: Thomas Gleixner, Eric Dumazet, Linux Kernel Mailing List, Rik van Riel

On Sun, May 1, 2016 at 2:43 AM, George Spelvin <linux@horizon.com> wrote:
>
> Anyway, my recommendation (I'll write the patches if you like) is:
>
> * Modify the multiplicative constants to be
>   #define GOLDEN_RATIO_32 0x61C88647
>   #define GOLDEN_RATIO_64 0x61C8864680B583EB

Interestingly, looking at where hash_64() is used, I notice the btrfs
raid56 code. And the comment there in rbio_bucket():

         * we shift down quite a bit.  We're using byte
         * addressing, and most of the lower bits are zeros.
         * This tends to upset hash_64, and it consistently
         * returns just one or two different values.
         *
         * shifting off the lower bits fixes things.

so people had actually noticed the breakage before.

Adding DavidW and Chris Mason explicitly to the cc, to perhaps have
them verify that fixing the hash_64 constant would make that btrfs
hack be unnecessary too..

Chris? Do you have a test-case? Do things end up being ok if you
change that GOLDEN_RATIO_64 value and remove the ">>16"?
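
(For reference, the hack in question in fs/btrfs/raid56.c looks roughly like
this -- quoting from memory, so treat it as a sketch rather than the exact
current code:

	static int rbio_bucket(struct btrfs_raid_bio *rbio)
	{
		u64 num = rbio->bbio->raid_map[0];

		/* shift off the mostly-zero low bits before hashing */
		return hash_64(num >> 16, BTRFS_STRIPE_HASH_TABLE_BITS);
	}
)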

> * Change the prototype of hash_64 to return u32.

Fair enough. That simplifies things for 32-bit. Not that there are a
_lot_ of 32-bit cases, and most of the callers seem to just assign the
value directly to an int or just use it as an index, so it's probably
not really ever going to change anything, but it might make it easier
to write a simpler 32-bit code that uses two explicitly 32-bit fields
and mixes them.

> * Create a separate 32-bit implementation of hash_64() for the
>   BITS_PER_LONG < 64 case.  This should not be Wang's or anything
>   similar because 64-bit shifts are slow and awkward on 32-bit
>   machines.
>   One option is something like __jhash_final(), but I think
>   it will suffice to do:
>
>   static __always_inline u32 hash_64(u64 val, unsigned int bits)
>   {
>         u32 hash = (u32)(val >> 32) * GOLDEN_RATIO_32 + (u32)val;
>         hash *= GOLDEN_RATIO_32;
>         return hash >> (32 - bits);
>   }
>   (S-o-b on that if you want it, of course.)

Agreed. However, we need to make very certain that nobody wants a
value bigger than 32 bits. It all looks fine: the name hash folding
does want exactly 32 bits, but that code is only enabled for 64-bit
anyway, and it would be ok.

But it might be worth double-checking that nobody uses hash_64() for
anything else odd. My quick grep didn't show anything, but it was
quick.

> * Eliminate the !CONFIG_ARCH_HAS_FAST_MULTIPLIER code path.  Once we've
>   got rid of the 32-bit processors, we're left with 64-bit processors,
>   and they *all* have hardware multipliers.  Even the MIPS R4000 (first
>   64-bit processor ever) and Alpha 21064 had one.
>
>   (Admittedly, the R4000 was 20 cycles, which is slower than Wang's hash,
>   but 18 of them could overlap other instructions.)
>
>   The worst multiply support is SPARCv9, which has 64x64-bit
>   multiply with a 64-bit result, but no widening multiply.

Ack.

> * If you feel ambitious, add a 32-bit CONFIG_ARCH_HAS_SLOW_MULTIPLIER
>   exception path.

Let's make that a separate worry, and just fix hash_64() first.

In particular, that means "let's not touch GOLDEN_RATIO_32 yet". I
suspect that when we *do* change that value, we do want the
non-multiplying version you had.

If you wrote out the patch for the hash_64 case as you outlined above,
I will definitely apply it. The current hash_64 is clearly broken, and
we now have two real life examples (ie futex problems and the btrfs
code both).

                      Linus


* Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism
  2016-05-01  9:43     ` George Spelvin
  2016-05-01 16:51       ` Linus Torvalds
@ 2016-05-02  7:11       ` Thomas Gleixner
  2016-05-02 10:20         ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits George Spelvin
  1 sibling, 1 reply; 21+ messages in thread
From: Thomas Gleixner @ 2016-05-02  7:11 UTC (permalink / raw)
  To: George Spelvin; +Cc: eric.dumazet, linux-kernel, riel, torvalds

On Sun, 1 May 2016, George Spelvin wrote:
> But I noticed a much greater difference.
> 
>  Wang      * PHI    % 4093   Wang      * PHI    % 4093
>  13599486  3494816  5238442  13584166  3460266  5239463
>  13589552  3466764  5237327  13582381  3422304  5276253
>  13569409  3407317  5236239  13566738  3393215  5267909
>  13575257  3413736  5237708  13596428  3379811  5280118
>  13583607  3540416  5325609  13650964  3380301  5265210
> 
> At 3.7 GHz, that's 
> 
> * PHI:     1059 M ops/second
> * Modulo:   706 M ops/second
> * Wang:     271 M ops/second
> 
> Of course, that's a tight loop hashing; I presume your test case
> has more overhead.

Indeed.
 
> Anyway, my recommendation (I'll write the patches if you like) is:
> 
> * Modify the multiplicative constants to be
>   #define GOLDEN_RATIO_32 0x61C88647
>   #define GOLDEN_RATIO_64 0x61C8864680B583EB

Works for me. I ran them through my test case and they behaved reasonably
well.
 
> * Change the prototype of hash_64 to return u32.

Makes sense.
 
> * Create a separate 32-bit implementation of hash_64() for the
>   BITS_PER_LONG < 64 case.  This should not be Wang's or anything
>   similar because 64-bit shifts are slow and awkward on 32-bit
>   machines.
>   One option is something like __jhash_final(), but I think
>   it will suffice to do:
> 
>   static __always_inline u32 hash_64(u64 val, unsigned int bits)
>   {
> 	u32 hash = (u32)(val >> 32) * GOLDEN_RATIO_32 + (u32)val;
> 	hash *= GOLDEN_RATIO_32;
>         return hash >> (32 - bits);
>   }

Works. That's more or less the same overhead as the modulo one, which behaved
well on 32bit.
 
Thanks,

	tglx


* [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-02  7:11       ` Thomas Gleixner
@ 2016-05-02 10:20         ` George Spelvin
  2016-05-02 10:22           ` [PATCH 2/2] <linux/hash.h>: Fix hash_64()'s horrible collision problem George Spelvin
                             ` (4 more replies)
  0 siblings, 5 replies; 21+ messages in thread
From: George Spelvin @ 2016-05-02 10:20 UTC (permalink / raw)
  To: linux-kernel, tglx, torvalds
  Cc: bfields, eric.dumazet, jlayton, linux, linux-nfs, riel

This also affects hash_str() and hash_mem() in <linux/sunrpc/svcauth.h>.

After a careful scan through the kernel code, no caller asks any of
those four for  more than 32 bits of hash result, except that the
latter two need 64 bits from hash_long() if BITS_PER_LONG == 64.

This is in preparation for the following patch, which will create
a new implementation of hash_64 for the BITS_PER_LONG == 32 case
which is optimized for 32-bit machines.

Signed-off-by: George Spelvin <linux@horizon.com>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: linux-nfs@vger.kernel.org
---
Cc: to NFS folks because it touches the sunrpc directory.

Is that "TODO" comment too presumptuous of me?

 include/linux/hash.h           | 22 ++++++++++++++++------
 include/linux/sunrpc/svcauth.h | 15 +++++++--------
 2 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/include/linux/hash.h b/include/linux/hash.h
index 1afde47e..05003fdc 100644
--- a/include/linux/hash.h
+++ b/include/linux/hash.h
@@ -24,15 +24,17 @@
 
 #if BITS_PER_LONG == 32
 #define GOLDEN_RATIO_PRIME GOLDEN_RATIO_PRIME_32
+#define __hash_long(val) __hash_32(val)
 #define hash_long(val, bits) hash_32(val, bits)
 #elif BITS_PER_LONG == 64
+#define __hash_long(val) __hash_64(val)
 #define hash_long(val, bits) hash_64(val, bits)
 #define GOLDEN_RATIO_PRIME GOLDEN_RATIO_PRIME_64
 #else
 #error Wordsize not 32 or 64
 #endif
 
-static __always_inline u64 hash_64(u64 val, unsigned int bits)
+static __always_inline u64 __hash_64(u64 val)
 {
 	u64 hash = val;
 
@@ -55,20 +57,28 @@ static __always_inline u64 hash_64(u64 val, unsigned int bits)
 	hash += n;
 #endif
 
+	return hash;
+}
+
+static __always_inline u64 hash_64(u64 val, unsigned bits)
+{
 	/* High bits are more random, so use them. */
-	return hash >> (64 - bits);
+	return __hash_64(val) >> (64 - bits);
 }
 
-static inline u32 hash_32(u32 val, unsigned int bits)
+static inline u32 __hash_32(u32 val)
 {
 	/* On some cpus multiply is faster, on others gcc will do shifts */
-	u32 hash = val * GOLDEN_RATIO_PRIME_32;
+	return val * GOLDEN_RATIO_PRIME_32;
+}
 
+static inline u32 hash_32(u32 val, unsigned bits)
+{
 	/* High bits are more random, so use them. */
-	return hash >> (32 - bits);
+	return __hash_32(val) >> (32 - bits);
 }
 
-static inline unsigned long hash_ptr(const void *ptr, unsigned int bits)
+static inline u32 hash_ptr(const void *ptr, unsigned bits)
 {
 	return hash_long((unsigned long)ptr, bits);
 }
diff --git a/include/linux/sunrpc/svcauth.h b/include/linux/sunrpc/svcauth.h
index c00f53a4..eb1241b3 100644
--- a/include/linux/sunrpc/svcauth.h
+++ b/include/linux/sunrpc/svcauth.h
@@ -165,7 +165,8 @@ extern int svcauth_unix_set_client(struct svc_rqst *rqstp);
 extern int unix_gid_cache_create(struct net *net);
 extern void unix_gid_cache_destroy(struct net *net);
 
-static inline unsigned long hash_str(char *name, int bits)
+/* TODO: Update to <asm/word-at-a-time.h> when CONFIG_DCACHE_WORD_ACCESS */
+static inline u32 hash_str(const char *name, int bits)
 {
 	unsigned long hash = 0;
 	unsigned long l = 0;
@@ -176,14 +177,13 @@ static inline unsigned long hash_str(char *name, int bits)
 			c = (char)len; len = -1;
 		}
 		l = (l << 8) | c;
-		len++;
-		if ((len & (BITS_PER_LONG/8-1))==0)
-			hash = hash_long(hash^l, BITS_PER_LONG);
+		if (++len % sizeof(hash) == 0)
+			hash = __hash_long(hash^l);
 	} while (len);
 	return hash >> (BITS_PER_LONG - bits);
 }
 
-static inline unsigned long hash_mem(char *buf, int length, int bits)
+static inline u32 hash_mem(const char *buf, int length, int bits)
 {
 	unsigned long hash = 0;
 	unsigned long l = 0;
@@ -195,9 +195,8 @@ static inline unsigned long hash_mem(char *buf, int length, int bits)
 		} else
 			c = *buf++;
 		l = (l << 8) | c;
-		len++;
-		if ((len & (BITS_PER_LONG/8-1))==0)
-			hash = hash_long(hash^l, BITS_PER_LONG);
+		if (++len % sizeof(hash) == 0)
+			hash = __hash_long(hash^l);
 	} while (len);
 	return hash >> (BITS_PER_LONG - bits);
 }
-- 
2.8.1


* [PATCH 2/2] <linux/hash.h>: Fix hash_64()'s horrible collision problem
  2016-05-02 10:20         ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits George Spelvin
@ 2016-05-02 10:22           ` George Spelvin
  2016-05-02 20:08             ` Linus Torvalds
  2016-05-02 10:27           ` [RFC PATCH 3/2] (Rant) Fix various hash abuses George Spelvin
                             ` (3 subsequent siblings)
  4 siblings, 1 reply; 21+ messages in thread
From: George Spelvin @ 2016-05-02 10:22 UTC (permalink / raw)
  To: linux-kernel, tglx, torvalds
  Cc: bfields, eric.dumazet, jlayton, linux-nfs, linux, riel

hash_64() was using a low-bit-weight multiplier, which resulted in
very bad mixing of the high bits of the input.  In particular,
page-aligned pointers (low 12 bits not used) were a disaster.

Since all 64-bit processors (I checked back to the MIPS R4000 and
Alpha 21064) have hardware multipliers and don't benefit from this
"optimization", use the proper golden ratio value.

Avoid performance problems on 32-bit machines by providing a totally
separate implementation for them based on 32-bit arithmetic.

Keep the bad multiplier for hash_32() for now, at Linus' request.
"Sparse in 32 bits" is not quite as bad as "sparse in 64 bits".

Explicitly document that the algorithm is not stable.  I've tried to
check all callers for inadvertent dependence on the exact numerical value,
but some of them are so confusing (*cough* Lustre *cough*) that I can't
tell for sure.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: George Spelvin <linux@horizon.com>
---
 include/linux/hash.h | 151 ++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 107 insertions(+), 44 deletions(-)

diff --git a/include/linux/hash.h b/include/linux/hash.h
index 05003fdc..64c44e20 100644
--- a/include/linux/hash.h
+++ b/include/linux/hash.h
@@ -1,63 +1,56 @@
 #ifndef _LINUX_HASH_H
 #define _LINUX_HASH_H
-/* Fast hashing routine for ints,  longs and pointers.
-   (C) 2002 Nadia Yvette Chambers, IBM */
-
 /*
- * Knuth recommends primes in approximately golden ratio to the maximum
- * integer representable by a machine word for multiplicative hashing.
+ * Fast hashing routine for ints, longs and pointers.
+ * (C) 2002 Nadia Yvette Chambers, IBM
+ *
+ * These are used for small in-memory hash tables, where speed is a
+ * primary concern.  If you want something a little bit stronger, see
+ * <linux/jhash.h>, especially functions like jhash_3words().  If your
+ * hash table is subject to a hash collision denial of service attack,
+ * use something cryptographic.
+ *
+ * Note that the algorithms used are not guaranteed stable across kernel
+ * versions or architectures!  In particular, hash_64() is implemented
+ * differently on 32- and 64-bit machines.  Do not let external behavior
+ * depend on the hash values.
+ *
+ * The algorithm used is straight from Knuth: multiply a w-bit word by
+ * a suitable large constant, and take the high bits of the w-bit result.
+ *
  * Chuck Lever verified the effectiveness of this technique:
  * http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf
  *
- * These primes are chosen to be bit-sparse, that is operations on
- * them can use shifts and additions instead of multiplications for
- * machines where multiplications are slow.
+ * A good reference is Mikkel Thorup, "High Speed Hashing for
+ * Integers and Strings" at http://arxiv.org/abs/1504.06804 and
+ * https://www.youtube.com/watch?v=cB85UZKJQTc
+ *
+ * Because the current algorithm is linear (hash(a+b) = hash(a) + hash(b)),
+ * adding or subtracting hash values is just as likely to cause collisions
+ * as adding or subtracting the keys themselves.
  */
-
 #include <asm/types.h>
 #include <linux/compiler.h>
 
-/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */
-#define GOLDEN_RATIO_PRIME_32 0x9e370001UL
-/*  2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
-#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001UL
+/*
+ * Although a random odd number will do, it turns out that the golden ratio
+ * phi = (sqrt(5)-1)/2, or its negative, has particularly nice properties.
+ *
+ * These are actually the negative, (1 - phi) = (phi^2) = (3 - sqrt(5))/2.
+ * (See Knuth vol 3, section 6.4, exercise 9.)
+ */
+#define GOLDEN_RATIO_32 0x61C88647
+#define GOLDEN_RATIO_64 0x61C8864680B583EBull
 
-#if BITS_PER_LONG == 32
-#define GOLDEN_RATIO_PRIME GOLDEN_RATIO_PRIME_32
-#define __hash_long(val) __hash_32(val)
-#define hash_long(val, bits) hash_32(val, bits)
-#elif BITS_PER_LONG == 64
+#if BITS_PER_LONG == 64
+
+#define GOLDEN_RATIO_PRIME GOLDEN_RATIO_64	/* Used in fs/inode.c */
 #define __hash_long(val) __hash_64(val)
 #define hash_long(val, bits) hash_64(val, bits)
-#define GOLDEN_RATIO_PRIME GOLDEN_RATIO_PRIME_64
-#else
-#error Wordsize not 32 or 64
-#endif
 
 static __always_inline u64 __hash_64(u64 val)
 {
-	u64 hash = val;
-
-#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
-	hash = hash * GOLDEN_RATIO_PRIME_64;
-#else
-	/*  Sigh, gcc can't optimise this alone like it does for 32 bits. */
-	u64 n = hash;
-	n <<= 18;
-	hash -= n;
-	n <<= 33;
-	hash -= n;
-	n <<= 3;
-	hash += n;
-	n <<= 3;
-	hash -= n;
-	n <<= 4;
-	hash += n;
-	n <<= 2;
-	hash += n;
-#endif
-
-	return hash;
+	return val * GOLDEN_RATIO_64;
 }
 
 static __always_inline u64 hash_64(u64 val, unsigned bits)
@@ -66,6 +59,75 @@ static __always_inline u64 hash_64(u64 val, unsigned bits)
 	return __hash_64(val) >> (64 - bits);
 }
 
+#elif BITS_PER_LONG == 32
+
+#define GOLDEN_RATIO_PRIME GOLDEN_RATIO_32
+#define __hash_long(val) __hash_32(val)
+#define hash_long(val, bits) hash_32(val, bits)
+
+/*
+ * Because 64-bit multiplications are very expensive on 32-bit machines,
+ * provide a completely separate implementation for them.
+ *
+ * This is mostly used via the hash_long() and hash_ptr() wrappers,
+ * which use hash_32() on 32-bit platforms, but there are some direct
+ * users of hash_64() in 32-bit kernels.
+ *
+ * Note that there is no __hash_64 function at all; that exists
+ * only to implement __hash_long().
+ *
+ * The algorithm is somewhat ad-hoc, but achieves decent mixing.
+ */
+static __always_inline u32 hash_64(u64 val, unsigned bits)
+{
+	u32 hash = (u32)(val >> 32) * GOLDEN_RATIO_32;
+	hash += (u32)val;
+	hash *= GOLDEN_RATIO_32;
+	return hash >> (32 - bits);
+}
+
+#else /* BITS_PER_LONG is something else */
+#error Wordsize not 32 or 64
+#endif
+
+
+/*
+ * This is the old bastard constant: a low-bit-weight
+ * prime close to 2^32 * phi = 0x9E3779B9.
+ *
+ * The purpose of the low bit weight is to make the shift-and-add
+ * code faster on processors like ARMv2 without hardware multiply.
+ * The downside is that the high bits of the input are hashed very weakly.
+ * In particular, the high 16 bits of input are just shifted up and added,
+ * so if you ask for b < 16 bits of output, bits 16..31-b of the input
+ * barely affect the output.
+ *
+ * Annoyingly, GCC compiles this into 6 shifts and adds, which
+ * is enough to multiply by the full GOLDEN_RATIO_32 using a
+ * cleverer algorithm:
+ *
+ * unsigned hash_32(unsigned x)
+ * {
+ * 	unsigned y, z;
+ *
+ * 	y = (x << 19) + x;
+ * 	z = (x << 9) + y;
+ * 	x = (x << 23) + z;
+ * 	z = (z << 8) + y;
+ * 	x = (x << 6) - x;
+ * 	return (z << 3) + x;
+ * }
+ *
+ * (Found by Yevgen Voronenko's Hcub algorithm, from
+ * http://spiral.ece.cmu.edu/mcm/gen.html)
+ *
+ * Unfortunately, figuring out which version to compile requires
+ * replicating the compiler's logic in Kconfig or the preprocessor.
+ */
+
+/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */
+#define GOLDEN_RATIO_PRIME_32 0x9e370001UL
+
 static inline u32 __hash_32(u32 val)
 {
 	/* On some cpus multiply is faster, on others gcc will do shifts */
@@ -83,6 +145,7 @@ static inline u32 hash_ptr(const void *ptr, unsigned bits)
 	return hash_long((unsigned long)ptr, bits);
 }
 
+/* This really should be called "fold32_ptr"; it barely hashes at all. */
 static inline u32 hash32_ptr(const void *ptr)
 {
 	unsigned long val = (unsigned long)ptr;
-- 
2.8.1


* [RFC PATCH 3/2] (Rant) Fix various hash abuses
  2016-05-02 10:20         ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits George Spelvin
  2016-05-02 10:22           ` [PATCH 2/2] <linux/hash.h>: Fix hash_64()'s horrible collision problem George Spelvin
@ 2016-05-02 10:27           ` George Spelvin
  2016-05-02 10:31           ` [RFC PATCH 4/2] namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS George Spelvin
                             ` (2 subsequent siblings)
  4 siblings, 0 replies; 21+ messages in thread
From: George Spelvin @ 2016-05-02 10:27 UTC (permalink / raw)
  To: linux-kernel, tglx, torvalds; +Cc: eric.dumazet, linux, riel

From c7d6cf96d3b3695ef1a7a3da9e8be58add9c827d Mon Sep 17 00:00:00 2001
From: George Spelvin <linux@horizon.com>
Date: Mon, 2 May 2016 00:00:22 -0400
Subject: [RFC PATCH 3/2] (Rant) Fix various hash abuses

(This patch is not seriously meant to be applied as-is, but should
be divided up and sent to the various subsystems.  I haven't even
compiled all of the code I touched.)

While checking the call sites of hash functions in for the previous
patches, I found numerous stupidities or even bugs.  This patch
fixes them.

Particularly common was calling hash_long() on values declared as
32 bits, where hash_32 would be just as good and faster.  (Except
for the different rounding of the constant phi used, it would compute
the same hash value!)

The Mellanox mlx4 driver did this and a bit more: it XORed together IP
addresses into a 32-bit value and then apparently hoped that hash_long()
would fix the resultant collisions.  Migrated to jhash_3words(),

Lustre had several places where it did the opposite: used hash_long()
on 64-bit values.  That would ignore the high 32 bits on a 32-bit kernel.

It's not all Lustre's fault; the same bug was in include/linux/hashtable.h.

In several places, I replaced hash_long() plus a cast with hash_ptr().
It does the same thing, just with less clutter.

CIFS did some strange things with hash_64 that could be better done
with modular reduction.

ima_hash_key() from security/integrity/ima/ima.h was too hard to figure
out so I just added a rude comment to it, but it doesn't inspire feelings
of security or integrity.

Signed-off-by: George Spelvin <linux@horizon.com>
Cc: kvm-ppc@vger.kernel.org
Cc: Alexander Graf <agraf@suse.com>
Cc: linux-bcache@vger.kernel.org
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Eugenia Emantayev <eugenia@mellanox.com>
Cc: lustre-devel@lists.lustre.org
Cc: Oleg Drokin <oleg.drokin@intel.com>
Cc: Andreas Dilger <andreas.dilger@intel.com>
Cc: Steve French <sfrench@samba.org>
Cc: linux-cifs@vger.kernel.org
Cc: Tyler Hicks <tyhicks@canonical.com>
Cc: ecryptfs@vger.kernel.org
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: Sasha Levin <levinsasha928@gmail.com>
Cc: Peter Zijlstra <pzijlstr@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Mimi Zohar <zohar@linux.vnet.ibm.com>
Cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com>
Cc: linux-ima-devel@lists.sourceforge.net
Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
---
(Cc: list above is a reminder to self.  I haven't spammed everyone on it
yet while I think about this.)

 arch/powerpc/kvm/book3s_mmu_hpte.c                   |  6 +++---
 drivers/md/bcache/request.c                          |  2 +-
 drivers/net/ethernet/mellanox/mlx4/en_netdev.c       | 13 ++++---------
 drivers/staging/lustre/include/linux/lnet/lib-lnet.h |  2 +-
 drivers/staging/lustre/lnet/lnet/api-ni.c            |  2 +-
 drivers/staging/lustre/lnet/lnet/lib-ptl.c           |  4 ++--
 drivers/staging/lustre/lustre/include/lustre_fid.h   |  2 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_resource.c   |  4 ++--
 drivers/staging/lustre/lustre/obdclass/lu_object.c   |  4 ++--
 fs/cifs/cifsfs.h                                     | 16 ++++++++++------
 fs/ecryptfs/messaging.c                              |  2 +-
 fs/nfsd/nfs4idmap.c                                  |  4 ++--
 include/linux/hashtable.h                            |  2 +-
 kernel/locking/lockdep.c                             |  2 +-
 kernel/sched/wait.c                                  |  3 +--
 lib/debugobjects.c                                   |  2 +-
 net/sunrpc/auth.c                                    |  2 +-
 net/sunrpc/svcauth_unix.c                            |  2 +-
 security/integrity/ima/ima.h                         | 11 ++++++++++-
 security/tomoyo/memory.c                             |  2 +-
 tools/perf/builtin-lock.c                            |  4 ++--
 21 files changed, 49 insertions(+), 42 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_mmu_hpte.c b/arch/powerpc/kvm/book3s_mmu_hpte.c
index 5a1ab125..e0499ac9 100644
--- a/arch/powerpc/kvm/book3s_mmu_hpte.c
+++ b/arch/powerpc/kvm/book3s_mmu_hpte.c
@@ -41,7 +41,7 @@ static inline u64 kvmppc_mmu_hash_pte(u64 eaddr)
 
 static inline u64 kvmppc_mmu_hash_pte_long(u64 eaddr)
 {
-	return hash_64((eaddr & 0x0ffff000) >> PTE_SIZE,
+	return hash_32((eaddr & 0x0ffff000) >> PTE_SIZE,
 		       HPTEG_HASH_BITS_PTE_LONG);
 }
 
@@ -52,14 +52,14 @@ static inline u64 kvmppc_mmu_hash_vpte(u64 vpage)
 
 static inline u64 kvmppc_mmu_hash_vpte_long(u64 vpage)
 {
-	return hash_64((vpage & 0xffffff000ULL) >> 12,
+	return hash_32((vpage & 0xffffff000ULL) >> 12,
 		       HPTEG_HASH_BITS_VPTE_LONG);
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
 static inline u64 kvmppc_mmu_hash_vpte_64k(u64 vpage)
 {
-	return hash_64((vpage & 0xffffffff0ULL) >> 4,
+	return hash_32((vpage & 0xffffffff0ULL) >> 4,
 		       HPTEG_HASH_BITS_VPTE_64K);
 }
 #endif
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 25fa8445..5137ab31 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -664,7 +664,7 @@ static inline struct search *search_alloc(struct bio *bio,
 	s->iop.c		= d->c;
 	s->iop.bio		= NULL;
 	s->iop.inode		= d->id;
-	s->iop.write_point	= hash_long((unsigned long) current, 16);
+	s->iop.write_point	= hash_ptr(current, 16);
 	s->iop.write_prio	= 0;
 	s->iop.error		= 0;
 	s->iop.flags		= 0;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index b4b258c8..180d0b7d 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -36,7 +36,7 @@
 #include <linux/if_vlan.h>
 #include <linux/delay.h>
 #include <linux/slab.h>
-#include <linux/hash.h>
+#include <linux/jhash.h>
 #include <net/ip.h>
 #include <net/busy_poll.h>
 #include <net/vxlan.h>
@@ -194,14 +194,9 @@ static inline struct hlist_head *
 filter_hash_bucket(struct mlx4_en_priv *priv, __be32 src_ip, __be32 dst_ip,
 		   __be16 src_port, __be16 dst_port)
 {
-	unsigned long l;
-	int bucket_idx;
-
-	l = (__force unsigned long)src_port |
-	    ((__force unsigned long)dst_port << 2);
-	l ^= (__force unsigned long)(src_ip ^ dst_ip);
-
-	bucket_idx = hash_long(l, MLX4_EN_FILTER_HASH_SHIFT);
+	u32 ports = (__force u32)src_port << 16 | (__force u32)dst_port;
+	int bucket_idx = jhash_3words(ports, (__force u32)src_ip,
+		(__force u32)dst_ip, 0) >> (32 - MLX4_EN_FILTER_HASH_SHIFT);
 
 	return &priv->filter_hash[bucket_idx];
 }
diff --git a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
index dfc0208d..9b42fb55 100644
--- a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
+++ b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
@@ -428,7 +428,7 @@ lnet_ni_alloc(__u32 net, struct cfs_expr_list *el, struct list_head *nilist);
 static inline int
 lnet_nid2peerhash(lnet_nid_t nid)
 {
-	return hash_long(nid, LNET_PEER_HASH_BITS);
+	return hash_64(nid, LNET_PEER_HASH_BITS);
 }
 
 static inline struct list_head *
diff --git a/drivers/staging/lustre/lnet/lnet/api-ni.c b/drivers/staging/lustre/lnet/lnet/api-ni.c
index 87647555..4de382a2 100644
--- a/drivers/staging/lustre/lnet/lnet/api-ni.c
+++ b/drivers/staging/lustre/lnet/lnet/api-ni.c
@@ -700,7 +700,7 @@ lnet_nid_cpt_hash(lnet_nid_t nid, unsigned int number)
 	if (number == 1)
 		return 0;
 
-	val = hash_long(key, LNET_CPT_BITS);
+	val = hash_64(key, LNET_CPT_BITS);
 	/* NB: LNET_CP_NUMBER doesn't have to be PO2 */
 	if (val < number)
 		return val;
diff --git a/drivers/staging/lustre/lnet/lnet/lib-ptl.c b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
index 3947e8b7..dc600c3b 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-ptl.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
@@ -360,13 +360,13 @@ lnet_mt_match_head(struct lnet_match_table *mtable,
 		   lnet_process_id_t id, __u64 mbits)
 {
 	struct lnet_portal *ptl = the_lnet.ln_portals[mtable->mt_portal];
-	unsigned long hash = mbits;
+	u64 hash = mbits;
 
 	if (!lnet_ptl_is_wildcard(ptl)) {
 		hash += id.nid + id.pid;
 
 		LASSERT(lnet_ptl_is_unique(ptl));
-		hash = hash_long(hash, LNET_MT_HASH_BITS);
+		hash = hash_64(hash, LNET_MT_HASH_BITS);
 	}
 	return &mtable->mt_mhash[hash & LNET_MT_HASH_MASK];
 }
diff --git a/drivers/staging/lustre/lustre/include/lustre_fid.h b/drivers/staging/lustre/lustre/include/lustre_fid.h
index ab4a9239..5a19bd50 100644
--- a/drivers/staging/lustre/lustre/include/lustre_fid.h
+++ b/drivers/staging/lustre/lustre/include/lustre_fid.h
@@ -594,7 +594,7 @@ static inline __u32 fid_hash(const struct lu_fid *f, int bits)
 	/* all objects with same id and different versions will belong to same
 	 * collisions list.
 	 */
-	return hash_long(fid_flatten(f), bits);
+	return hash_64(fid_flatten(f), bits);
 }
 
 /**
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
index 9dede87a..d3b66496 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
@@ -475,9 +475,9 @@ static unsigned ldlm_res_hop_fid_hash(struct cfs_hash *hs,
 	} else {
 		val = fid_oid(&fid);
 	}
-	hash = hash_long(hash, hs->hs_bkt_bits);
+	hash = hash_32(hash, hs->hs_bkt_bits);
 	/* give me another random factor */
-	hash -= hash_long((unsigned long)hs, val % 11 + 3);
+	hash -= hash_ptr(hs, val % 11 + 3);
 
 	hash <<= hs->hs_cur_bits - hs->hs_bkt_bits;
 	hash |= ldlm_res_hop_hash(hs, key, CFS_HASH_NBKT(hs) - 1);
diff --git a/drivers/staging/lustre/lustre/obdclass/lu_object.c b/drivers/staging/lustre/lustre/obdclass/lu_object.c
index 978568ad..369a193b 100644
--- a/drivers/staging/lustre/lustre/obdclass/lu_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/lu_object.c
@@ -869,10 +869,10 @@ static unsigned lu_obj_hop_hash(struct cfs_hash *hs,
 
 	hash = fid_flatten32(fid);
 	hash += (hash >> 4) + (hash << 12); /* mixing oid and seq */
-	hash = hash_long(hash, hs->hs_bkt_bits);
+	hash = hash_32(hash, hs->hs_bkt_bits);
 
 	/* give me another random factor */
-	hash -= hash_long((unsigned long)hs, fid_oid(fid) % 11 + 3);
+	hash -= hash_ptr(hs, fid_oid(fid) % 11 + 3);
 
 	hash <<= hs->hs_cur_bits - hs->hs_bkt_bits;
 	hash |= (fid_seq(fid) + fid_oid(fid)) & (CFS_HASH_NBKT(hs) - 1);
diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
index 83aac8ba..b10522e3 100644
--- a/fs/cifs/cifsfs.h
+++ b/fs/cifs/cifsfs.h
@@ -22,20 +22,24 @@
 #ifndef _CIFSFS_H
 #define _CIFSFS_H
 
-#include <linux/hash.h>
-
 #define ROOT_I 2
 
 /*
  * ino_t is 32-bits on 32-bit arch. We have to squash the 64-bit value down
- * so that it will fit. We use hash_64 to convert the value to 31 bits, and
- * then add 1, to ensure that we don't end up with a 0 as the value.
+ * so that it will fit, and we have to ensure that only a zero fileid will
+ * map to the zero ino_t.
+ *
+ * This can be done very simply by reducing mod-2^32-1, similar to IP
+ * checksums.  After the first addition, the sum is at most 0x1fffffffe.
+ * The second sum cannot overflow 32 bits.
  */
 static inline ino_t
 cifs_uniqueid_to_ino_t(u64 fileid)
 {
-	if ((sizeof(ino_t)) < (sizeof(u64)))
-		return (ino_t)hash_64(fileid, (sizeof(ino_t) * 8) - 1) + 1;
+	if (sizeof(ino_t) < sizeof(u64)) {
+		fileid = (fileid & 0xffffffff) + (fileid >> 32);
+		fileid = (fileid & 0xffffffff) + (fileid >> 32);
+	}
 
 	return (ino_t)fileid;
 
diff --git a/fs/ecryptfs/messaging.c b/fs/ecryptfs/messaging.c
index 286f10b0..bd5ae06b 100644
--- a/fs/ecryptfs/messaging.c
+++ b/fs/ecryptfs/messaging.c
@@ -33,7 +33,7 @@ static struct hlist_head *ecryptfs_daemon_hash;
 struct mutex ecryptfs_daemon_hash_mux;
 static int ecryptfs_hash_bits;
 #define ecryptfs_current_euid_hash(uid) \
-	hash_long((unsigned long)from_kuid(&init_user_ns, current_euid()), ecryptfs_hash_bits)
+	hash_32(from_kuid(&init_user_ns, current_euid()), ecryptfs_hash_bits)
 
 static u32 ecryptfs_msg_counter;
 static struct ecryptfs_msg_ctx *ecryptfs_msg_ctx_arr;
diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
index 5b20577d..8e99a418 100644
--- a/fs/nfsd/nfs4idmap.c
+++ b/fs/nfsd/nfs4idmap.c
@@ -111,8 +111,8 @@ idtoname_hash(struct ent *ent)
 {
 	uint32_t hash;
 
-	hash = hash_str(ent->authname, ENT_HASHBITS);
-	hash = hash_long(hash ^ ent->id, ENT_HASHBITS);
+	hash = hash_str(ent->authname, 32);
+	hash = hash_32(hash ^ ent->id, ENT_HASHBITS);
 
 	/* Flip LSB for user/group */
 	if (ent->type == IDMAP_TYPE_GROUP)
diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h
index 661e5c2a..91e3fc5d 100644
--- a/include/linux/hashtable.h
+++ b/include/linux/hashtable.h
@@ -28,7 +28,7 @@
 
 /* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
 #define hash_min(val, bits)							\
-	(sizeof(val) <= 4 ? hash_32(val, bits) : hash_long(val, bits))
+	(sizeof(val) <= 4 ? hash_32(val, bits) : hash_64(val, bits))
 
 static inline void __hash_init(struct hlist_head *ht, unsigned int sz)
 {
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 78c1c0ee..11dca9fc 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -286,7 +286,7 @@ LIST_HEAD(all_lock_classes);
  */
 #define CLASSHASH_BITS		(MAX_LOCKDEP_KEYS_BITS - 1)
 #define CLASSHASH_SIZE		(1UL << CLASSHASH_BITS)
-#define __classhashfn(key)	hash_long((unsigned long)key, CLASSHASH_BITS)
+#define __classhashfn(key)	hash_ptr(key, CLASSHASH_BITS)
 #define classhashentry(key)	(classhash_table + __classhashfn((key)))
 
 static struct hlist_head classhash_table[CLASSHASH_SIZE];
diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index f15d6b6a..a23611d2 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -485,9 +485,8 @@ EXPORT_SYMBOL(wake_up_bit);
 
 wait_queue_head_t *bit_waitqueue(void *word, int bit)
 {
-	const int shift = BITS_PER_LONG == 32 ? 5 : 6;
 	const struct zone *zone = page_zone(virt_to_page(word));
-	unsigned long val = (unsigned long)word << shift | bit;
+	unsigned long val = (unsigned long)word * BITS_PER_LONG + bit;
 
 	return &zone->wait_table[hash_long(val, zone->wait_table_bits)];
 }
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 519b5a10..df65cfba 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -242,7 +242,7 @@ static void debug_objects_oom(void)
  */
 static struct debug_bucket *get_bucket(unsigned long addr)
 {
-	unsigned long hash;
+	unsigned int hash;
 
 	hash = hash_long((addr >> ODEBUG_CHUNK_SHIFT), ODEBUG_HASH_BITS);
 	return &obj_hash[hash];
diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c
index 02f53674..1524dd3b 100644
--- a/net/sunrpc/auth.c
+++ b/net/sunrpc/auth.c
@@ -551,7 +551,7 @@ rpcauth_lookup_credcache(struct rpc_auth *auth, struct auth_cred * acred,
 			*entry, *new;
 	unsigned int nr;
 
-	nr = hash_long(from_kuid(&init_user_ns, acred->uid), cache->hashbits);
+	nr = hash_32(from_kuid(&init_user_ns, acred->uid), cache->hashbits);
 
 	rcu_read_lock();
 	hlist_for_each_entry_rcu(entry, &cache->hashtable[nr], cr_hash) {
diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c
index dfacdc95..44773bdc 100644
--- a/net/sunrpc/svcauth_unix.c
+++ b/net/sunrpc/svcauth_unix.c
@@ -416,7 +416,7 @@ struct unix_gid {
 
 static int unix_gid_hash(kuid_t uid)
 {
-	return hash_long(from_kuid(&init_user_ns, uid), GID_HASHBITS);
+	return hash_32(from_kuid(&init_user_ns, uid), GID_HASHBITS);
 }
 
 static void unix_gid_put(struct kref *kref)
diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index 5d0f6116..80296292 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -135,9 +135,18 @@ struct ima_h_table {
 };
 extern struct ima_h_table ima_htable;
 
+/*
+ * FIXME: What the hell is the point of hashing one byte to produce
+ * a 9-bit hash value?  Should this just be "return *digest"?  Or should
+ * we be hashing more than the first byte of the digest?  Callers seem
+ * to pass a buffer of TPM_DIGEST_SIZE bytes.
+ *
+ * It's always fun to be WTFing over "security" code for
+ * "integrity management".
+ */
 static inline unsigned long ima_hash_key(u8 *digest)
 {
-	return hash_long(*digest, IMA_HASH_BITS);
+	return hash_32(*digest, IMA_HASH_BITS);
 }
 
 enum ima_hooks {
diff --git a/security/tomoyo/memory.c b/security/tomoyo/memory.c
index 0e995716..594c4aac 100644
--- a/security/tomoyo/memory.c
+++ b/security/tomoyo/memory.c
@@ -155,7 +155,7 @@ const struct tomoyo_path_info *tomoyo_get_name(const char *name)
 		return NULL;
 	len = strlen(name) + 1;
 	hash = full_name_hash((const unsigned char *) name, len - 1);
-	head = &tomoyo_name_list[hash_long(hash, TOMOYO_HASH_BITS)];
+	head = &tomoyo_name_list[hash_32(hash, TOMOYO_HASH_BITS)];
 	if (mutex_lock_interruptible(&tomoyo_policy_lock))
 		return NULL;
 	list_for_each_entry(ptr, head, head.list) {
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index ce3bfb48..34f764a9 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -35,8 +35,8 @@ static struct perf_session *session;
 
 static struct list_head lockhash_table[LOCKHASH_SIZE];
 
-#define __lockhashfn(key)	hash_long((unsigned long)key, LOCKHASH_BITS)
-#define lockhashentry(key)	(lockhash_table + __lockhashfn((key)))
+#define __lockhashfn(addr)	hash_ptr(addr, LOCKHASH_BITS)
+#define lockhashentry(addr)	(lockhash_table + __lockhashfn(addr))
 
 struct lock_stat {
 	struct list_head	hash_entry;
-- 
2.8.1


* [RFC PATCH 4/2] namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS
  2016-05-02 10:20         ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits George Spelvin
  2016-05-02 10:22           ` [PATCH 2/2] <linux/hash.h>: Fix hash_64()'s horrible collision problem George Spelvin
  2016-05-02 10:27           ` [RFC PATCH 3/2] (Rant) Fix various hash abuses George Spelvin
@ 2016-05-02 10:31           ` George Spelvin
  2016-05-16 18:51             ` Linus Torvalds
  2016-05-02 13:28           ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits Peter Zijlstra
  2016-05-02 16:24           ` Linus Torvalds
  4 siblings, 1 reply; 21+ messages in thread
From: George Spelvin @ 2016-05-02 10:31 UTC (permalink / raw)
  To: linux-kernel, tglx, torvalds; +Cc: eric.dumazet, linux, riel

The hash mixing between adding the next 64 bits of name
was just a bit weak.

Replaced with a still very fast but slightly more effective
mixing function.

Signed-off-by: George Spelvin <linux@horizon.com>
---
As long as I was looking at all sorts of hashing in the kernel, I noticed
this.  I'm not sure if this is still too expensive and will slow down
the loop.

 fs/namei.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/fs/namei.c b/fs/namei.c
index 1d9ca2d5..e2bff05d 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1794,30 +1794,49 @@ static inline unsigned int fold_hash(unsigned long hash)
 	return hash_64(hash, 32);
 }
 
+/*
+ * This is George Marsaglia's XORSHIFT generator.
+ * It implements a maximum-period LFSR in only a few
+ * instructions.  It also has the property (required
+ * by hash_name()) that mix_hash(0) = 0.
+ */
+static inline unsigned long mix_hash(unsigned long hash)
+{
+	hash ^= hash << 13;
+	hash ^= hash >> 7;
+	hash ^= hash << 17;
+	return hash;
+}
+
 #else	/* 32-bit case */
 
 #define fold_hash(x) (x)
 
+static inline unsigned long mix_hash(unsigned long hash)
+{
+	hash ^= hash << 13;
+	hash ^= hash >> 17;
+	hash ^= hash << 5;
+	return hash;
+}
+
 #endif
 
 unsigned int full_name_hash(const unsigned char *name, unsigned int len)
 {
-	unsigned long a, mask;
-	unsigned long hash = 0;
+	unsigned long a, hash = 0;
 
 	for (;;) {
 		a = load_unaligned_zeropad(name);
 		if (len < sizeof(unsigned long))
 			break;
-		hash += a;
-		hash *= 9;
+		hash = mix_hash(hash + a);
 		name += sizeof(unsigned long);
 		len -= sizeof(unsigned long);
 		if (!len)
 			goto done;
 	}
-	mask = bytemask_from_count(len);
-	hash += mask & a;
+	hash += a & bytemask_from_count(len);
 done:
 	return fold_hash(hash);
 }
@@ -1835,7 +1854,7 @@ static inline u64 hash_name(const char *name)
 	hash = a = 0;
 	len = -sizeof(unsigned long);
 	do {
-		hash = (hash + a) * 9;
+		hash = mix_hash(hash + a);
 		len += sizeof(unsigned long);
 		a = load_unaligned_zeropad(name+len);
 		b = a ^ REPEAT_BYTE('/');
-- 
2.8.1


* Re: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-02 10:20         ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits George Spelvin
                             ` (2 preceding siblings ...)
  2016-05-02 10:31           ` [RFC PATCH 4/2] namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS George Spelvin
@ 2016-05-02 13:28           ` Peter Zijlstra
  2016-05-02 19:08             ` George Spelvin
  2016-05-02 16:24           ` Linus Torvalds
  4 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2016-05-02 13:28 UTC (permalink / raw)
  To: George Spelvin
  Cc: linux-kernel, tglx, torvalds, bfields, eric.dumazet, jlayton,
	linux-nfs, riel

On Mon, May 02, 2016 at 06:20:16AM -0400, George Spelvin wrote:
> Subject: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits

> +static __always_inline u64 hash_64(u64 val, unsigned bits)
> +{
>  	/* High bits are more random, so use them. */
> +	return __hash_64(val) >> (64 - bits);
>  }

Is the subject stale or the above a mistake? Because hash_64() still
very much seems to return u64.

Also, I think I would prefer to keep it like this, I would like to use
it for kernel/locking/lockdep.c:iterate_chain_key(), which currently is
a somewhat crap hash.

Something like:

static inline u64 iterate_chain_key(u64 key1, u64 key2)
{
	return hash_64(key1 ^ key2, 64);
}


* Re: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-02 10:20         ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits George Spelvin
                             ` (3 preceding siblings ...)
  2016-05-02 13:28           ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits Peter Zijlstra
@ 2016-05-02 16:24           ` Linus Torvalds
  2016-05-02 20:26             ` George Spelvin
  4 siblings, 1 reply; 21+ messages in thread
From: Linus Torvalds @ 2016-05-02 16:24 UTC (permalink / raw)
  To: George Spelvin
  Cc: Linux Kernel Mailing List, Thomas Gleixner, Bruce Fields,
	Eric Dumazet, Jeff Layton, Linux NFS Mailing List, Rik van Riel

On Mon, May 2, 2016 at 3:20 AM, George Spelvin <linux@horizon.com> wrote:
>
> After a careful scan through the kernel code, no caller asks any of
> those four for  more than 32 bits of hash result, except that the
> latter two need 64 bits from hash_long() if BITS_PER_LONG == 64.

Ugh. I hate this patch.

I really think that we should *not* confuse those two odd svcauth.h
users with the whole hash_32/64 thing.

I think hash_str/hash_mem should be moved to lib/hash.c, and they
just shouldn't use "hash_long()" at all, except at the very end (they
currently have a very odd way of doing "every <n> bytes _and_ at the
end").

In particular, the hashing in the *middle* is very different from the
hashing at the end.

At the end, you need to make sure the lower bits get spread out
particularly to the upper bits, since you're going to shift things
down.

But in the middle, you just want to spread the bits out (and in
particular, destroy any byte-per-byte patterns that build up in
between).

Quite frankly, I think those functions should just use something like
the FNV hash (or Jenkins hash), and then maybe use "hash_long()" at
the *end* to shift the result down to "bits".
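
Something in this direction, as a rough sketch (plain FNV-1a over the bytes
with the standard 32-bit constants, then hash_32() for the final fold; the
function name is made up):

	static inline u32 hash_str_fnv(const char *name, int bits)
	{
		u32 hash = 0x811c9dc5;		/* FNV-1a offset basis */

		while (*name) {
			hash ^= (unsigned char)*name++;
			hash *= 0x01000193;	/* FNV-1a prime */
		}
		return hash_32(hash, bits);	/* spread bits up, take the top */
	}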

I don't want to make our <linux/hash.h> odder just because of two
entirely broken users.

That said, I actually think hash_str() should be killed entirely.
Better just use what we use for pathnames: full_name_hash() (which
gets a pointer and length) and hash_name (which gets the string).
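
Roughly, i.e. something like this untested wrapper (the name is made up; the
cast is because full_name_hash() takes an unsigned char pointer):

	static inline u32 svcauth_hash_str(const char *name, int bits)
	{
		unsigned int len = strlen(name);

		return hash_32(full_name_hash((const unsigned char *)name, len), bits);
	}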

Those functions do the "word-at-a-time" optimization, and they should
do a good enough job. If they aren't, we should fix them, because they
are a hell of a lot more important than anything that the svcauth code
does.

                Linus


* Re: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-02 13:28           ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits Peter Zijlstra
@ 2016-05-02 19:08             ` George Spelvin
  0 siblings, 0 replies; 21+ messages in thread
From: George Spelvin @ 2016-05-02 19:08 UTC (permalink / raw)
  To: linux, peterz
  Cc: bfields, eric.dumazet, jlayton, linux-kernel, linux-nfs, riel,
	tglx, torvalds

Peter Zijlstra wrote:
> Is the subject stale or the above a mistake? Because hash_64() still
> very much seems to return u64.

Damn it no, it's a brown-paper-bag typo caused by a recent rebase.
It's meant to be u32, it was developed with u32, but the typo snuck
in during late testing and I didn't catch it.

I know Linus hates recent rebases, but I actually had a good
reason if you want to hear the saga...

I developed the patch while running v4.4.x.  I'd been doing other hacking
on top of v4.5 that resulted in an unstable system, so I kept going back
to the "last known good" v4.4.x kernel to get work done.

Developing this patch, I backed out that buggy work and based it on my
4.5 tree, since Linus hates basing work on random kernels.

Most of it was compile testing, but just before submitting, I of course
had to boot it and test.

When I booted it, I discovered I couldn't load microcode.  How the hell
did I cause that?  Oh, I have CONFIG_MICROCODE turned off... huh?  Oh,
v4.5 has a bug where CONFIG_MICROCODE depended on CONFIG_BLK_DEV_INITRD
which I don't use, and the fix went into v4.6-rc1.

Okay, fine, in the interest of getting a clean boot for testing, I'll
rebase to v4.6-rc6.  See, I told you I had a reason!

Now, I actually have a fair pile of local patches for hacking projects in
progress (I'm running 4.6.0-rc6-0130), so rebasing my whole stack takes
me about an hour and a half, with several merge conflict resolutions.

Finally, I get my clean boot, and everything seems to be working
fine, and I'm ready to post.

But by this time it's late, I'm tired, and I didn't notice that I somehow
managed to screw that up!  In hindsight, I think I remember the sequence
of edits that caused it (I deleted something by accident and cut & pasted
it back), but that's an even more obscure saga.

I will now go and fix it and boot test again, just to be sure.

Grump.


* Re: [PATCH 2/2] <linux/hash.h>: Fix hash_64()'s horrible collision problem
  2016-05-02 10:22           ` [PATCH 2/2] <linux/hash.h>: Fix hash_64()'s horrible collision problem George Spelvin
@ 2016-05-02 20:08             ` Linus Torvalds
  0 siblings, 0 replies; 21+ messages in thread
From: Linus Torvalds @ 2016-05-02 20:08 UTC (permalink / raw)
  To: George Spelvin
  Cc: Linux Kernel Mailing List, Thomas Gleixner, Bruce Fields,
	Eric Dumazet, Jeff Layton, Linux NFS Mailing List, Rik van Riel

[-- Attachment #1: Type: text/plain, Size: 1166 bytes --]

On Mon, May 2, 2016 at 3:22 AM, George Spelvin <linux@horizon.com> wrote:
> hash_64() was using a low-bit-weight multiplier, which resulted in
> very bad mixing of the high bits of the input.  In particular,
> page-aligned pointers (low 12 bits not used) were a disaster.

So I did just a minimal fix for 4.6 (and back-porting), which took
just the constants and made _only_ the 64-bit architecture case use
this improved constant for hash_64.

In other words, people who use "hash_long()" or use "hash_64()" on
64-bit architectures will get the improvements, but if you use
hash_64() on a 32-bit architecture you'll continue to see the old
behavior.

Quite frankly, looking at some of the explicit hash_64() users, they
seem to be a bit dubious anyway. And it won't make things *worse* for
them.

So that simple "just use multiplication unconditionally on 64-bit, and
use the better constant" should fix the actual _practical_ problems
that we've seen. And it shouldn't have any negative consequences,
since as you say, 64-bit architectures universally do have a
multiplier.

The bigger changes will have to be for 4.7 by now, I think.

                     Linus

[-- Attachment #2: patch.diff --]
[-- Type: text/plain, Size: 2805 bytes --]

From 689de1d6ca95b3b5bd8ee446863bf81a4883ea25 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Mon, 2 May 2016 12:46:42 -0700
Subject: [PATCH] Minimal fix-up of bad hashing behavior of hash_64()

This is a fairly minimal fixup to the horribly bad behavior of hash_64()
with certain input patterns.

In particular, because the multiplicative value used for the 64-bit hash
was intentionally bit-sparse (so that the multiply could be done with
shifts and adds on architectures without hardware multipliers), some
bits did not get spread out very much.  In particular, certain fairly
common bit ranges in the input (roughly bits 12-20: commonly with the
most information in them when you hash things like byte offsets in files
or memory that have block factors that mean that the low bits are often
zero) would not necessarily show up much in the result.

There's a bigger patch-series brewing to fix up things more completely,
but this is the fairly minimal fix for the 64-bit hashing problem.  It
simply picks a much better constant multiplier, spreading the bits out a
lot better.

NOTE! For 32-bit architectures, the bad old hash_64() remains the same
for now, since 64-bit multiplies are expensive.  The bigger hashing
cleanup will replace the 32-bit case with something better.

The new constants were picked by George Spelvin who wrote that bigger
cleanup series.  I just picked out the constants and part of the comment
from that series.

Cc: stable@vger.kernel.org
Cc: George Spelvin <linux@horizon.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

diff --git a/include/linux/hash.h b/include/linux/hash.h
index 1afde47e1528..79c52fa81cac 100644
--- a/include/linux/hash.h
+++ b/include/linux/hash.h
@@ -32,12 +32,28 @@
 #error Wordsize not 32 or 64
 #endif
 
+/*
+ * The above primes are actively bad for hashing, since they are
+ * too sparse. The 32-bit one is mostly ok, the 64-bit one causes
+ * real problems. Besides, the "prime" part is pointless for the
+ * multiplicative hash.
+ *
+ * Although a random odd number will do, it turns out that the golden
+ * ratio phi = (sqrt(5)-1)/2, or its negative, has particularly nice
+ * properties.
+ *
+ * These are the negative, (1 - phi) = (phi^2) = (3 - sqrt(5))/2.
+ * (See Knuth vol 3, section 6.4, exercise 9.)
+ */
+#define GOLDEN_RATIO_32 0x61C88647
+#define GOLDEN_RATIO_64 0x61C8864680B583EBull
+
 static __always_inline u64 hash_64(u64 val, unsigned int bits)
 {
 	u64 hash = val;
 
-#if defined(CONFIG_ARCH_HAS_FAST_MULTIPLIER) && BITS_PER_LONG == 64
-	hash = hash * GOLDEN_RATIO_PRIME_64;
+#if BITS_PER_LONG == 64
+	hash = hash * GOLDEN_RATIO_64;
 #else
 	/*  Sigh, gcc can't optimise this alone like it does for 32 bits. */
 	u64 n = hash;

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-02 16:24           ` Linus Torvalds
@ 2016-05-02 20:26             ` George Spelvin
  2016-05-02 21:19               ` Linus Torvalds
  0 siblings, 1 reply; 21+ messages in thread
From: George Spelvin @ 2016-05-02 20:26 UTC (permalink / raw)
  To: linux, torvalds
  Cc: bfields, eric.dumazet, jlayton, linux-kernel, linux-nfs, riel, tglx

Linus wrote:
> I really think that we should *not* confuse those two odd svcauth.h
> users with the whole hash_32/64 thing.
>
> I think  hash_str/hash_mem should be moved to lib/hash.c, and they
> just shouldn't use "hash_long()" at all, except at the very end (they
> currently have a very odd way of doing "every <n> bytes _and_ at the
> end").

Moving them is fine.  I have no problem with that, except that I agree
that merging them with the fs/namei.c hashes would be even better.

But the hash_long is not odd at all.  They're using it as the mix function
between adding words.  Like hash_name() uses "hash = (hash + a) * 9".

So it's

	x = (first 8 bytes of string)
	x = __hash64(x);
	x ^= (next 8 bytes of string)
	x = __hash64(x);
	... repeat ...
	x ^= (last 1..8 bytes of string)
	return __hash64(x);

It's a quite reasonable mix function.  One multiply (4 cycles or so) per
8 bytes.  It's definitely swamped by the byte-at-a-time string loading.

(The one peculiar thing is that the "last 1..8 bytes of string" is, due
to the way it's shifted into an accumulator, actually the last 8 bytes,
which may include some previously-hashed bytes.  But that has nothing
to do with the mixing.)
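
For concreteness, a rough userspace sketch of that structure (not the
actual svcauth code; __hash64() here just stands for the full-width
multiply by the 64-bit constant, and the byte-at-a-time fetch with nul
detection is replaced by a plain memcpy):

#include <stdint.h>
#include <string.h>

static inline uint64_t __hash64(uint64_t x)
{
	return x * 0x61C8864680B583EBull;
}

static uint64_t hash_str_sketch(const char *s, size_t len)
{
	uint64_t hash = 0, a;

	while (len >= 8) {
		memcpy(&a, s, 8);	/* stand-in for the word fetch */
		hash = __hash64(hash ^ a);
		s += 8;
		len -= 8;
	}
	a = 0;
	memcpy(&a, s, len);		/* zero-padded tail */
	return __hash64(hash ^ a);
}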


But... the fundamental reason I didn't was that this is late -rc.
I'm trying to fix a *bug* tglx found, not add risk and delay with a much
more ambitious patch series.

This is a related but separate issue that can be addressed separately.

> In particular, the hashing in the *middle* is very different from the
> hashing at the end.
>
> At the end, you need to make sure the lower bits get spread out
> particularly to the upper bits, since you're going to shift things
> down.
>
> But in the middle, you just want to spread the bits out (and in
> particular, destroy any byte-per-byte patterns that build up in
> between).

Yes, in the middle you have a full width hash to spread the bits among,
while at the end you have to "concentrate" the hash value in a few bits.
The latter is a more difficult task and might be slower.

But there's nothing really *wrong* with using the same mix operation
both places if it's both good enough and fast enough.

In particular, if you're loading the string a word at a time, you need
a much better mixing function than if you're processing it byte at a time.

> Quite frankly, I think those functions should just use something like
> the FNV hash (or Jenkins hash), and then maybe use "hash_long()" at
> the *end* to shift the result down to "bits".

It *is* the FNV hash!  More specifically, FNV-1a done a word at a time;
doing it byte at a time like most implementations would result in 8
times as many multiplies and be slower.

The only difference is the exact multiplier.  The published FNV-1
uses a low-bit-weight multiplier.  Since this is being done a word
at a time, I think the stronger multiplier is worth it.

Jenkins hash is (if you have a HW multiplier) slower.  Better mixing,
so adding a multiply at the end would be redundant.

> I don't want to make our <linux/hash.h> odder just because of two
> entirely broken users.

> That said, I actually think hash_str() should be killed entirely.
> Better just use what we use for pathnames: full_name_hash() (which
> gets a pointer and length) and hash_name (which gets the string).

I was thinking the exact same thing.  Repeated grepping over the kernel
tree for "hash_*" functions showed me just how many there are in various
places, and combining some would be a good idea.

For example, partial_name_hash() is still used in many places
even if the word-at-a-time code is used in namei.c.

> Those functions do the "word-at-a-time" optimization, and they should
> do a good enough job. If they aren't, we should fix them, because they
> are a hell of a lot more important than anything that the svcauth code
> does.

Okay, let me list the problems...

1) hash_name() stops at a slash or a nul.  hash_str() only looks
   for a nul.  Should I add a third variant?  Should that go in fs/namei,
   or should the whole pile be moved elsewhere?

2) Some places need the length *and* the hash.  Calling strlen() and then
   full_name_hash() somewhat defeats the purpose of word-at-a-time access.
   hash_name returns both jammed into a 64-bit word.  Is that a good
   interface in general?

   Myself, I think the length should be computed at copy_from_user()
   time and I'd like to look at each such call site and understand why
   it *doesn't* have the length ready.  But that's a lot of work.

3) They do particularly crappy mixing.  See that "RFC PATCH 4/2" I posted
   that because I couldn't stand how bad it was.

   If you don't have word at a time, the partial_name_hash() is decent,
   but the word-at-a-time mixes by multiplying by 9.  So the hashes
   of the strings "1.......0" and "0.......9" are identical (a toy
   demonstration follows this list).

   (I assume this was chosen as the best available one-instruction (LEA)
   mix operation due to an obsessive interest in speed in the dcache.)

   More crappy mixing is the final folding.  On a 64-bit machine, the
   high and low 32 bits are just XORed together.  So the hashes of
   "deadbeef" and "beefdead" are identical.

   I agree we should be very careful of the mixing function, since it's
   the only thing limiting loop cycle time.  The has_zero hackery has a
   critical path about 6 cycles long, but they're independent per loop
   and a sufficiently aggressive OOO machine could pipeline them.

   (If you have a particular cycle count budget in mind, I can come up with
   something.)

4) They don't do a final mix.  Again, obsessive interest in speed when
   you're storing the whole 32 bits and comparing that.  For applications
   that use only a few bits of hash index and need the entropy
   "concentrated", should this be done outside?

Basically, I can understand why someone might prefer a stronger hash
for less speed-critical applications.


I agree that we should ideally have just two string hash functions for
general kernel use (plus any imposed by external standards like file
systems or ELF).

One is the dcache hash, for applications where collision DoS attacks are
not expected and is optimized strongly for speed.

A second, something like SipHash, for applications which require
collision resistance against malicious attackers.

But achieving that ideal is a project of significant size.
There are a lot of corner cases to worry about.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-02 20:26             ` George Spelvin
@ 2016-05-02 21:19               ` Linus Torvalds
  2016-05-02 21:41                 ` Linus Torvalds
  2016-05-03  1:59                 ` George Spelvin
  0 siblings, 2 replies; 21+ messages in thread
From: Linus Torvalds @ 2016-05-02 21:19 UTC (permalink / raw)
  To: George Spelvin
  Cc: Bruce Fields, Eric Dumazet, Jeff Layton,
	Linux Kernel Mailing List, Linux NFS Mailing List, Rik van Riel,
	Thomas Gleixner

On Mon, May 2, 2016 at 1:26 PM, George Spelvin <linux@horizon.com> wrote:
>
> But the hash_long is not odd at all.  They're using it as the mix function
> between adding words.  Like hash_name() uses "hash = (hash + a) * 9".

Right. But there is no reason to think that that should be the same
thing as the final hash.

In fact, there are many reasons to think it should *not*.

The final hash is different.

It's different not just because it wants to concentrate the bits at
the high end (which the middle hash does not), but it's different
exactly because the whole calling convention is different: the final
hash returns a "small" value (in your version an "u32"), while the
hash in the middle very much does not.

So thinking that they are somehow related is wrong. It screws up
hash.h, and incorrectly conflates this middle hash_mix() thing with
the final hash_32/64() thing.

> It's a quite reasonable mix function.  One multiply (4 cycles or so) per
> 8 bytes.  It's definitely swamped by the byte-at-a-time string loading.

.. but it's not. Exactly because of the above.

Make it be a "hash_mix()" function. Make it use the multiply, by all
means. Same multiplier, even. BUT IT IS STILL NOT THE SAME FUNCTION,
for the above reason. One wants to return "u32", the other does not.

Really.

> But... the fundamental reason I didn't was that this is late -rc.
> I'm trying to fix a *bug* tglx found, not add risk and delay with a much
> more ambitious patch series.

Oh, this late in the rc we're not doing _any_ of this. I sent out my
suggested "late in the rc and for stable" patch that fixes the
practical problem we have, that has nothing to do with cleaning things
up.

> It *is* the FNV hash!  More specifically, FNV-1a done a word at a time;
> doing it byte at a time like most implementations would result in 8
> times as many multiplies and be slower.

I refuse to call that shit "word at a time".

It's "byte at a time with a slow conditional that will screw up your
branch predictor and a multiply in the middle".

A compiler might be able to turn it into some disgusting unrolled
thing that avoids some of the problems, but at no point is that a good
thing.

I seriously think that it would be

 (a) more readable

 (b) have a better mix function

if it was just kept as a byte-at-a-time thing entirely with the
per-byte mixing thing done better than just "shift by 8 bits".

And then at the end you could do a single hash_long().

That horrible svc thing needs to go.

It's all pretty moot, since we have a reasonable version that
actually does do word-at-a-time.

That can be improved too, I'm sure, but that svcauth garbage should
just be thrown out.

> The only difference is the exact multiplier.  The published FNV-1
> uses a low-bit-weight multiplier.  Since this is being done a word
> at a time, I think the stronger multiplier is worth it.

.. Considering that the "word-at-a-time" is likely *slower* than doing
it a byte-at-a-time the way it has been encoded, I'm not in the least
convinced.

> For example, partial_name_hash() is still used in many places
> even if the word-at-a-time code is used in namei.c.

Those places aren't actually all that performance-critical. They
really don't matter.


> Okay, let me list the problems...
>
> 1) hash_name() stops at a slash or a nul.  hash_str() only looks
>    for a nul.  Should I add a third variant?  Should that go in fs/namei,
>    or should the whole pile be moved elsewhere?

We'll never get rid of "hash_name()", it not only has that '/' case,
it's also inlined for a reason. You'd copy it without the magic for
'/' and turn that into str_hash() for others to use.

full_name_hash() can presumably be used pretty much as-is as mem_hash().

> 2) Some places need the length *and* the hash.  Calling strlen() and then
>    full_name_hash() somewhat defeats the purpose of word-at-a-time access.
>    hash_name returns both jammed into a 64-bit word.  Is that a good
>    interface in general?

Hmm. We actually have that exact case in the dentry code and
hash_name(), except we handle it by doing that "hashlen" thing that
contains both the length and the hash in one 64-bit thing.

Maybe we could expose that kind of interface, even if it's pretty ugly.
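
For reference, the packing is roughly this (from memory -- the exact
macros live in fs/namei.c and may differ slightly):

#define hashlen_hash(hashlen)		((u32)(hashlen))
#define hashlen_len(hashlen)		((u32)((hashlen) >> 32))
#define hashlen_create(hash, len)	((u64)(len) << 32 | (u32)(hash))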

>    Myself, I think the length should be computed at copy_from_user()
>    time and I'd like to look at each such call site and understand why
>    it *doesn't* have the length ready.  But that's a lot of work.

If it's copied from user space, we already do have the length.

You do realize that pathnames are different from pretty much every
other case in the kernel? There's a reason pathnames have their own
logic. The string length is very different from the component length,
and it turns out that component length and hash is generally critical.

> 3) They do particularly crappy mixing.  See that "RFC PATCH 4/2" I posted
>    that because I couldn't stand how bad it was.
>
>    If you don't have word at a time, the partial_name_hash() is decent,
>    but the word-at-a-time mixes by multiplying by 9.  So the hashes
>    of the strings "1.......0" and "0.......9" are identical.
>
>    (I assume this was chosen as the best available one-instruction (LEA)
>    mix operation due to an obsessive interest in speed in the dcache.)

The thing is, a lot of people tend to optimize performance (and
behavior) for large strings.

For path components, the most common lengths are less than a single
8-byte word! That "mixing" function almost doesn't matter, because the
case that matters the most (by far) are strings that fit in one or
_maybe_ two words.

Yes, things are slowly moving towards longer pathnames, but even with
long filenames, most component names are the directory entries that
still tend to be shorter. We still have the old 3-letter ones close to
the root, of course, but the statistics are still pretty heavily
skewed to <15 characters especially if you look at directory names.

So for pathnames, the mixing in the middle has tended to not be
_nearly_ as important as the final mix, for the simple reason that it
happens maybe once, often not at all.

>    More crappy mixing is the final folding.  On a 64-bit machine, the
>    high and low 32 bits are just XORed together.  So the hashes of
>    "deadbeef" and "beefdead" are identical.

Hmm? It uses "fold_hash()", which definitely doesn't do that.

Are you still looking at partial_name_hash() and friends? Don't. They
are garbage. They are used for things like case-insensitive
filesystems etc.

>    (If you have a particular cycle count budget in mind, I can come up with
>    something.)

The real issue I had in fs/namei.c is that link_path_walk() and
__d_lookup_rcu() are literally _the_ hottest functions in the kernel
under a few (common) loads, and for code generation things like
register pressure ends up mattering.

The reason that "hash_len" is a single 64-bit field rather than two
32-bit fields, for example, is that that way it takes one _register_
when we do the hash lookup. Some of that code was tuned to inline - and
_not_ inline in particular patterns.

Now, some of that tuning may have bitrotted, of course, so I'm not
saying it's necessarily doing great now. But some of that code was
tuned to not need a stack frame etc.

> 4) They don't do a final mix.  Again, obsessive interest in speed when
>    you're storing the whole 32 bits and comparing that.  For applications
>    that use only a few bits of hash index and need the entropy
>    "concentrated", should this be done outside?

I think the caller should do it, yes. There's a difference between
"calculate a hash for a string" and "turn that hash into a hashtable
lookup".

               Linus

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-02 21:19               ` Linus Torvalds
@ 2016-05-02 21:41                 ` Linus Torvalds
  2016-05-03  1:59                 ` George Spelvin
  1 sibling, 0 replies; 21+ messages in thread
From: Linus Torvalds @ 2016-05-02 21:41 UTC (permalink / raw)
  To: George Spelvin
  Cc: Bruce Fields, Eric Dumazet, Jeff Layton,
	Linux Kernel Mailing List, Linux NFS Mailing List, Rik van Riel,
	Thomas Gleixner

On Mon, May 2, 2016 at 2:19 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> The reason that "hash_len" is a single 64-bit field rather than two
> 32-bit fields, for example, is that that way it takes one _register_
> when we do the hash lookup. Some of that code was tuned to inline - and
> _not_ inline in particular patterns.

Actually, I think the tuning for no stack frame etc was mostly for the
permission testing with the selinux AVC code.

The filename hashing does have some of it too - like making sure that
the call to ->d_hash() is in an unlikely path etc and doesn't pollute
the case that actually matters. But I don't think any of it is simple
enough to avoid a stack frame.

The hash_len thing did make the innermost hash lookup loop smaller,
which was noticeable at some point.

What I really wanted to do there was actually have a direct-mapped "L1
dentry hash cache", that didn't have a hash loop at all (it would fall
back to the slow case for that). I had a patch for that (which worked
*beautifully*, partly because it also moved the hot entries to that
hash cache and thus effectively moved the active entries to the head
of the queue), but I couldn't get the L1 cache update to be coherent
without locking, which killed the thing.

Anyway, I suspect that your mixing function changes should be fine.
That link_path_walk() is important, but a couple of shifts and xors
shouldn't kill it.

             Linus

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-02 21:19               ` Linus Torvalds
  2016-05-02 21:41                 ` Linus Torvalds
@ 2016-05-03  1:59                 ` George Spelvin
  2016-05-03  3:01                   ` Linus Torvalds
  1 sibling, 1 reply; 21+ messages in thread
From: George Spelvin @ 2016-05-03  1:59 UTC (permalink / raw)
  To: linux, torvalds
  Cc: bfields, eric.dumazet, jlayton, linux-kernel, linux-nfs, riel, tglx

> Right. But there is no reason to think that that should be the same
> thing as the final hash.

Your logic is absolutely correct.  I agree with you.  The two operations
have different purposes, and should be thought of differently.

HOWEVER, if you can find one operation which is good enough to
serve both purposes, that's not evidence of incompetence.

I'm not saying hash_str is anything wonderful.  Just that it's not a
complete disaster.  (As a hash function.  Your comments about its
performance are addressed below.)

It's frankly a lot *better* than the hash computed by namei.c.
That one *is* perilously close to a complete disaster.


The main problem with the multiplicative hash (which describes both
hash_str() and hash_name()) is that input bits can only affect higher
state bits.  The lsbit of the 64-bit state is just the XOR of all the
input lsbits.  The lsbyte depends only on the lsbytes.

But that's not a disaster.  In hash_str(), we throw away the lsbits
at the end so their imperfect mixing doesn't matter.

What's bad is that the msbits of the inputs can only affect the
msbits of the hash.  The msbits of the inputs are just XORed together
into the msbit of the hash.  It's fundamentally impossible for
such a hash to detect any 2-bit change to the msbits of the input.
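
A toy demonstration, if anyone wants to see it (any odd multiplier M
shows the same effect; the input values below are arbitrary):

#include <stdio.h>
#include <stdint.h>

#define M 0x61C8864680B583EBull

int main(void)
{
	uint64_t a1 = 0x1111111111111111ull, a2 = 0x2222222222222222ull;
	uint64_t top = 1ull << 63;
	/* Multiply-then-XOR structure, as in hash_str() above. */
	uint64_t h1 = (a1 * M ^ a2) * M;
	/* Flip the msbit of *both* input words: the flips cancel. */
	uint64_t h2 = ((a1 ^ top) * M ^ (a2 ^ top)) * M;

	printf("%016llx\n%016llx\n",	/* prints the same value twice */
	       (unsigned long long)h1, (unsigned long long)h2);
	return 0;
}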

That said, you're right that most names aren't long enough (16 bytes on
little-endian) to *have* a second msbit to worry about.

> I refuse to call that shit "word at a time".
>
> It's "byte at a time with a slow conditional that will screw up your
> branch predictor and a multiply in the middle".

The *hashing* is word at a time.  The *fetching* is done byte at a time
and I agree that it doesn't qualify in any way at all.

But yeah, your point about the branch predictor penalty being worse than
the 7 saved multiplies is well taken.

Rule #1 of performance: Avoid lock contention at any cost.
Rule #2 of performance: Avoid cache misses unless it conflicts with #1.
(Rule 2a: Cache bouncing is a particularly pernicious form of cache missing.)
(Rule 2b: Locking, even uncontended, tends to cause cache bouncing.)
Rule #3 of performance: Avoid unpredictable branches.

>> For example, partial_name_hash() is still used in many places
>> even if the word-at-a-time code is used in namei.c.

> Those places aren't actually all that performance-critical.  They
> really don't matter.

AFAICT, none of the string-hash functions outside of fs/ are
on particularly hot paths.  The goal is deleting redundant code.

> We'll never get rid of "hash_name()", it not only has that '/' case,
> it's also inlined for a reason. You'd copy it without the magic for
> '/' and turn that into str_hash() for others to use.
>
> full_name_hash() can presumably be used pretty much as-is as mem_hash().

That part is obvious. I was just caught between two unpleasant
possibilities:

- Placing a str_hash() helper function in the middle of fs/namei.c which
  nothing in fs/namei.c actually calls, or
- Copying it to some outside file and then having to keep the
  two in sync.

Thinking about the function comments on each, the first seems less
embarrassing to write.

> Maybe we could expose that kind of interface, even if it's pretty ugly.

Yeah, that's pretty much what I thought.  I just hoped you had some
brilliant idea for avoiding it.

> The thing is, a lot of people tend to optimize performance (and
> behavior) for large strings.
>
> For path components, the most common lengths are less than a single
> 8-byte word! That "mixing" function almost doesn't matter, because the
>  case that matters the most (by far) are strings that fit in one or
> _maybe_ two words.

I'll remember that next time I look up
.git/objects/69/466b786e99a0a2d86f0cb99e0f4bb61588d13c

:-)

But yes, it makes your point about pathname components.
The first three are all a single word.

I just worry with people having directories full of PHP state
cookies.

> Hmm? It uses "fold_hash()", which definitely doesn't do that.

Mea culpa; that's a side effect of repeatedly grepping the kernel.
There was a *different* hash folding function in some other source file
elsewhere that did that, and I got the two mixed up in my memory.

The word-at-a-time one does "hash_64(x)", which doesn't have that problem.

(Maybe it was the IP checksum folding?)

>>    (If you have a particular cycle count budget in mind, I can come up with
>>    something.)

> The real issue I had in fs/namei.c is that link_path_walk() and
> __d_lookup_rcu() are literally _the_ hottest functions in the kernel
> under a few (common) loads, and for code generation things like
> register pressure ends up mattering.
> 
> The reason that "hash_len" is a single 64-bit field rather than two
> 32-bit fields, for example, is that that way it takes one _register_
> when we do the hash lookup. Some of that code was tuned to inline - and
> _not_ inline in particular patterns.
> 
> Now, some of that tuning may have bitrotted, of course, so I'm not
> saying it's necessarily doing great now. But some of that code was
> tuned to not need a stack frame etc.

Yes, and it's hard to make a general purpose helper out of code
that's tuned to piano wire tension like that.

I was thinking about very fast hash functions (2 or 3 cycles
per word), and the obvious solution is to use a second register.

As you pointed out above, the mixing function can take advantage of an
internal state which is larger than the final state, so the easy way to
minimize collisions without very expensive state mixing between words
is to give the bits more room to spread out.

I was playing with the idea of an ARX structure like the "Speck" block
cipher (https://en.wikipedia.org/wiki/Speck_%28cipher%29):

	h1 ^= a;
	h2 ^= h1; h1 = ROTL(h1, K);
	h1 += h2; h2 *= 9;

The "h2 *= 9" replaces "ROTL(h2, 3)" in Speck, achieves a little more
mixing, is one cycle on most machines, and is probably supported by
more functional units than a general barrel shift.

It's only one more cycle on the critical path than the current
"hash = (hash + a) * 9"... but it's one more register.


> I think the caller should do it, yes. There's a difference between
> "calculate a hash for a string" and "turn that hash into a hashtable
> lookup".

Makes sense, thanks.

Another thing worth doing is having a slightly less thorough folding
function than hash64(x, 32).  (x >> 32) + (uint32)x * CONSTANT does
just as well if you're keeping all 32 bits.

This is basically the first half of my 32-bit hash_64().  Then if you
want less than 32 bits, you can do a second hash_32() on the result.
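
In code, roughly (the constant is the GOLDEN_RATIO_32 value from
earlier in the thread):

static inline uint32_t fold_hash64(uint64_t x)
{
	return (uint32_t)(x >> 32) + (uint32_t)x * 0x61C88647u;
}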

> Anyway, I suspect that your mixing function changes should be fine.
> That link_path_walk() is important, but a couple of shifts and xors
> shouldn't kill it.

Actually, that code
1) Uses an extra register for a shift temporary anyway, and
2) Has a 6-cycle critical path, which is pretty borderline.

The ARX code above seems like a more efficient use of two registers.

(It's also conveniently nonlinear, so if we want, feeding a salt into
h2 makes it less utterly trivial to force collisions.)

I'll play with that for a bit.  Thank you for mentioning those critical
functions; I'll check the x86 code generation for them to make sure it
doesn't get worse.



P.S. Here's a way to improve partial_name_hash on x86.
Compare the assembly for

unsigned long
pnh1(unsigned long c, unsigned long prevhash)
{
        return (prevhash + (c << 4) + (c >> 4)) * 11;
}

pnh1:
	movl    %eax, %ecx
        shrl    $4, %eax
        sall    $4, %ecx
        addl    %ecx, %eax
        addl    %eax, %edx
        leal    (%edx,%edx,4), %eax
        leal    (%edx,%eax,2), %eax
        ret

unsigned long
pnh2(unsigned long c, unsigned long prevhash)
{
        prevhash += c <<= 4;
        prevhash += c >> 8;
        return prevhash * 11;
}

pnh2:
        sall    $4, %eax
        addl    %eax, %edx
        shrl    $8, %eax
        addl    %eax, %edx
        leal    (%edx,%edx,4), %eax
        leal    (%edx,%eax,2), %eax
        ret

pnh1 doesn't know that "c" is really only 8 bits and so it doesn't need
to copy it to another register to compute the two shifted forms.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits
  2016-05-03  1:59                 ` George Spelvin
@ 2016-05-03  3:01                   ` Linus Torvalds
  0 siblings, 0 replies; 21+ messages in thread
From: Linus Torvalds @ 2016-05-03  3:01 UTC (permalink / raw)
  To: George Spelvin
  Cc: Bruce Fields, Eric Dumazet, Jeff Layton,
	Linux Kernel Mailing List, Linux NFS Mailing List, Rik van Riel,
	Thomas Gleixner

On Mon, May 2, 2016 at 6:59 PM, George Spelvin <linux@horizon.com> wrote:
>
> AFAICT, none of the string-hash functions outside of fs/ are
> on particularly hot paths.  The goal is deleting redundant code.

Yes, agreed.

>> We'll never get rid of "hash_name()", it not only has that '/' case,
>> it's also inlined for a reason. You'd copy it without the magic for
>> '/' and turn that into str_hash() for others to use.
>>
>> full_name_hash() can presumably be used pretty much as-is as mem_hash().
>
> That part is obvious. I was just caught between two unpleasant
> possibilities:
>
> - Placing a str_hash() helper function in the middle of fs/namei.c which
>   nothing in fs/namei.c actually calls, or
> - Copying it to some outside file and then having to keep the
>   two in sync.

So I don't think the "keep the two in sync" is necessarily all that problematic.

The word-at-a-time logic _used_ to be very specific to the name_hash()
code, but it got made generic enough over a few iterations that it's
actually a fairly reasonable pattern now.

So the code loop ends up being really not very complex:

        const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;

        hash = a = 0;
        len = -sizeof(unsigned long);
        do {
                hash = (hash + a) * 9;
                len += sizeof(unsigned long);
                a = load_unaligned_zeropad(name+len);
                b = a ^ REPEAT_BYTE('/');
        } while (!(has_zero(a, &adata, &constants) |
                   has_zero(b, &bdata, &constants)));

and with your suggested "hash_mix()" function (may I suggest we just
make it take both the old hash and the new value as two separate
arguments, and it can choose how to mix them), there remains pretty
much nothing to keep in sync.

Yes, there's the tail part, but that ends up being pretty simple too.

The non-path-component case (so checking only for an ending NUL
character, not the '/') ends up being exactly the same, except all the
'b' logic goes away, so you end up with

        hash = a = 0;
        len = -sizeof(unsigned long);
        do {
                hash = hash_mix(hash, a);
                len += sizeof(unsigned long);
                a = load_unaligned_zeropad(name+len);
        } while (!has_zero(a, &adata, &constants));

which really seems to not be a pain. Very simple, in fact. There's
hardly any code left to keep in sync: any tweaking of the hash would
happen by tweaking the hash_mix().

The tail part would presumably be:

        adata = prep_zero_mask(a, adata, &constants);

        mask = create_zero_mask(adata);
        return hash_mix(hash, a & zero_bytemask(mask));

and you're done (or we could perhaps decide that the last mix is fine
just doing a single add? We will have mixed up all previous hash
values, so that last "hash_mix()" might just be a simple addition).

Yes, if we want to return some mix of the length and the hash, we'd
have to play those hashlen games, but I suspect the only case that
*really* cares deeply about that is the dentry hash case, and we'd
just keep it separate.

In other words, I think just the addition of your "hash_mix()" helper
is enough to abstract things out enough that there really is nothing
left but tying all those pieces together, and no maintenance burden
from having "two copies" of that tying-togetherness.

>> For path components, the most common lengths are less than a single
>> 8-byte word! That "mixing" function almost doesn't matter, because the
>>  case that matters the most (by far) are strings that fit in one or
>> _maybe_ two words.
>
> I'll remember that next time I look up
> .git/objects/69/466b786e99a0a2d86f0cb99e0f4bb61588d13c
>
> :-)

Yes, they happen, but when people have pathnames like that, your
hashing function generally isn't going to much matter.

Except you absolutely want to avoid 8-bit and 4-bit boundaries when
mixing. The "*9" we have now does that, we had a *11 in an earlier
incarnation (but that was coupled with shifting right too - I think
our old hash remains in the partial_name_hash())

I do agree that it's not a great hash mixing function, but I don't
think it's been particularly problematic either. I did run my whole
filesystem through the hash at some point just to verify, and the
statistics seemed fairly balanced.

But yes, I think your hash_mix() function is likely a noticeable
improvement from a hashing standpoint.  And while it may not be all
that noticeable for path components that are usually pretty short, if
we extend this code to be a generic string hash then the whole "three
characters is the most common component length" argument goes away ;)

> I was playing with the idea of an ARX structure like the "Speck" block
> cipher (https://en.wikipedia.org/wiki/Speck_%28cipher%29):
>
>         h1 ^= a;
>         h2 ^= h1; h1 = ROTL(h1, K);
>         h1 += h2; h2 *= 9;
>
> The "h2 *= 9" replaces "ROTL(h2, 3)" in Speck, achieves a little more
> mixing, is one cycle on most machines, and is probably supported by
> more functional units than a general barrel shift.
>
> It's only one more cycle on the critical path than the current
> "hash = (hash + a) * 9"... but it's one more register.

Try it.

I wouldn't worry too much about 32-bit x86 any more (we want it to not
suck horribly, but it's not the primary target for anybody who cares
about best performance). But x86-64 code generation is worth
looking at. The register pressure issue is still real, but it's not
quite as bad as the old 32-bit code.

The other main architectures that it would be good to verify are ok
are ARMv8 and powerpc.

> P.S. Here's a way to improve partial_name_hash on x86.
> Compare the assembly for
>
> unsigned long
> pnh1(unsigned long c, unsigned long prevhash)
> {
>         return (prevhash + (c << 4) + (c >> 4)) * 11;
> }
>
> pnh1:
>         movl    %eax, %ecx
>         shrl    $4, %eax
>         sall    $4, %ecx
>         addl    %ecx, %eax
>         addl    %eax, %edx
>         leal    (%edx,%edx,4), %eax
>         leal    (%edx,%eax,2), %eax
>         ret
>
> unsigned long
> pnh2(unsigned long c, unsigned long prevhash)
> {
>         prevhash += c <<= 4;
>         prevhash += c >> 8;
>         return prevhash * 11;
> }
>
> pnh2:
>         sall    $4, %eax
>         addl    %eax, %edx
>         shrl    $8, %eax
>         addl    %eax, %edx
>         leal    (%edx,%edx,4), %eax
>         leal    (%edx,%eax,2), %eax
>         ret
>
> pnh1 doesn't know that "c" is really only 8 bits and so it doesn't need
> to copy it to another register to compute the two shifted forms.

Well, if I cared about the partial_name_hash() (which I don't), I'd
suggest you just convince the compiler to generate

  rolb $4,%al

instead of two shifts, and just be done with it.

You might find that you end up with partial register write stalls,
though, so you might have to add a "movzbq %al,%rax" to get around
those.

In fact, it looks like you can get gcc to generate that code:

    unsigned long pnh3(unsigned char c, unsigned long prevhash)
    {
        c = (c << 4) | (c >> 4);
        return (prevhash + c)*11;
    }

generates

    pnh3:
        rolb $4, %dil
        movzbl %dil, %edi
        addq %rdi, %rsi
        leaq (%rsi,%rsi,4), %rax
        leaq (%rsi,%rax,2), %rax
        ret

which is one instruction less than your pnh2, but should perform even
better because I think "movzbl" ends up being done as a rename and
mask in the microarchitecture - without any actual ALU costs or added
latency.

But I suspect it can be a bit fragile to get gcc to actually generate
that rotate instruction. I could easily see that being inlined, and
then the pattern that turns into a rotate instruction gets perturbed
enough that gcc no longer generates the rotate.

                  Linus

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism
  2016-05-01 16:51       ` Linus Torvalds
@ 2016-05-14  3:54         ` George Spelvin
  2016-05-14 18:35           ` Linus Torvalds
  0 siblings, 1 reply; 21+ messages in thread
From: George Spelvin @ 2016-05-14  3:54 UTC (permalink / raw)
  To: torvalds; +Cc: eric.dumazet, linux, linux-kernel, riel, tglx

On May 1, 2016 at 9:51 AM, Linus Torvalds wrote:
> On Sun, May 1, 2016 at 2:43 AM, George Spelvin <linux@horizon.com> wrote:
>> * If you feel ambitious, add a 32-bit CONFIG_ARCH_HAS_SLOW_MULTIPLIER
>>   exception path.

> Let's make that a separate worry, and just fix hash_64() first.
> 
> In particular, that means "let's not touch GOLDEN_RATIO_32 yet". I
> suspect that when we *do* change that value, we do want the
> non-multiplying version you had.

I've been working on this, and just a brief status update: it's definitely
one of those rabbit holes.

There are exactly three architectures which (some models) don't have
an efficient 32x32->32-bit multiply:

- arch/m68k: MC68000 (and 68010 and 68328) no-mmu
- arch/h8300: Most (all?) of the H8 processor series
- arch/microblaze: Depending on Verilog compilation options

The thing is, they all don't have a barrel shifter, either.
Indeed, only the m68k even has multi-bit shift instructions.

So the upshot is that it's not clear that shift-and-add is a whole lot
better.  Working out the timing on the 68000, I can beat the multiply
code, but not by much.

So I'm working on arch-specific solutions for those three cases.

H8 and 68000 have 16x16->32-bit multiplies, which can be used to make a
reasonable hash function (some H8 models can multiply faster than they
can shift!), but if you configure a Microblaze with neither multiplier
nor barrel shifter (which arch/microblaze/Kconfig.platform lets you do),
I have no idea what to do.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism
  2016-05-14  3:54         ` George Spelvin
@ 2016-05-14 18:35           ` Linus Torvalds
  0 siblings, 0 replies; 21+ messages in thread
From: Linus Torvalds @ 2016-05-14 18:35 UTC (permalink / raw)
  To: George Spelvin
  Cc: Eric Dumazet, Linux Kernel Mailing List, Rik van Riel, Thomas Gleixner

On Fri, May 13, 2016 at 8:54 PM, George Spelvin <linux@horizon.com> wrote:
>
> There are exactly three architectures which (some models) don't have
> an efficient 32x32->32-bit multiply:
>
> - arch/m68k: MC68000 (and 68010 and 68328) no-mmu
> - arch/h8300: Most (all?) of the H8 processor series
> - arch/microblaze: Depending on Verilog compilation options

I wouldn't worry about it too much.

The architectures where performance really matters are x86, ARM and powerpc.

The rest need to *work* and not suck horribly, but we're not going to
try to do cycle counting for them. It's not worth the pain.

If an architecture doesn't have a barrel shifter, it's not going to
have fast hash functions.

So I'd be ok with just saying "32-bit architectures are going to use a
multiply with non-sparse bits". Not a problem.

We do want to make sure that hash_64 isn't totally disgusting on
32-bit architectures (but that's a fairly rare case), so we probably
do want to have that function fall back on something else than a 64x64
multiply on a 32-bit architecture. Presumably just "mix the two 32-bit
words into one, then use hash_32() on that" is good enough.
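
Something along these lines, say (untested sketch, using the
GOLDEN_RATIO_32 constant proposed earlier in the thread):

static inline u32 hash_64_on_32bit(u64 val, unsigned int bits)
{
	/* Mix the two halves into one 32-bit value... */
	u32 x = (u32)val ^ ((u32)(val >> 32) * GOLDEN_RATIO_32);

	/* ...then the usual multiplicative hash_32() step. */
	return (x * GOLDEN_RATIO_32) >> (32 - bits);
}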

                     Linus

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH 4/2] namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS
  2016-05-02 10:31           ` [RFC PATCH 4/2] namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS George Spelvin
@ 2016-05-16 18:51             ` Linus Torvalds
  0 siblings, 0 replies; 21+ messages in thread
From: Linus Torvalds @ 2016-05-16 18:51 UTC (permalink / raw)
  To: George Spelvin
  Cc: Linux Kernel Mailing List, Thomas Gleixner, Eric Dumazet, Rik van Riel

On Mon, May 2, 2016 at 3:31 AM, George Spelvin <linux@horizon.com> wrote:
> The hash mixing between adding the next 64 bits of name
> was just a bit weak.
>
> Replaced with a still very fast but slightly more effective
> mixing function.

I'e applied this patch independently of all your other hash rework to my tree.

I verified that the code generation for the inner loop is still fine,
and it does look like a much better mixing function, as well as just
clean up the code.

I hope to get new versions of the actual <linux/hash.h> fixes during
this merge window from you.

Thanks,

              Linus

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2016-05-16 18:51 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CA+55aFxBWfAHQNAdBbdVr+z8ror4GVteyce3D3=vwDWxhu5KqQ@mail.gmail.com>
2016-04-30 20:52 ` [patch 2/7] lib/hashmod: Add modulo based hash mechanism George Spelvin
2016-05-01  8:35   ` Thomas Gleixner
2016-05-01  9:43     ` George Spelvin
2016-05-01 16:51       ` Linus Torvalds
2016-05-14  3:54         ` George Spelvin
2016-05-14 18:35           ` Linus Torvalds
2016-05-02  7:11       ` Thomas Gleixner
2016-05-02 10:20         ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits George Spelvin
2016-05-02 10:22           ` [PATCH 2/2] <linux/hash.h>: Fix hash_64()'s horrible collision problem George Spelvin
2016-05-02 20:08             ` Linus Torvalds
2016-05-02 10:27           ` [RFC PATCH 3/2] (Rant) Fix various hash abuses George Spelvin
2016-05-02 10:31           ` [RFC PATCH 4/2] namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS George Spelvin
2016-05-16 18:51             ` Linus Torvalds
2016-05-02 13:28           ` [PATCH 1/2] <linux/hash.h>: Make hash_64(), hash_ptr() return 32 bits Peter Zijlstra
2016-05-02 19:08             ` George Spelvin
2016-05-02 16:24           ` Linus Torvalds
2016-05-02 20:26             ` George Spelvin
2016-05-02 21:19               ` Linus Torvalds
2016-05-02 21:41                 ` Linus Torvalds
2016-05-03  1:59                 ` George Spelvin
2016-05-03  3:01                   ` Linus Torvalds
