* Name hashing function causing a perf regression
@ 2014-09-09 19:30 Josef Bacik
  2014-09-12 19:11 ` Andi Kleen
  0 siblings, 1 reply; 23+ messages in thread
From: Josef Bacik @ 2014-09-09 19:30 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-fsdevel, viro, hch, clm

Hello,

I'm debugging a performance regression between 3.2 and 3.10 and I 
narrowed it down to this patch

commit bfcfaa77bdf0f775263e906015982a608df01c76
vfs: use 'unsigned long' accesses for dcache name comparison and hashing

The test case is essentially

for (i = 0; i < 1000000; i++) {
	snprintf(name, sizeof(name), "a%d", i);
	mkdir(name, 0700);
}

On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k 
dir/sec with 3.10.  This is because we spend waaaaay more time in 
__d_lookup on 3.10 than in 3.2.  The new string hashing function is 
suboptimal for names shorter than sizeof(unsigned long) (and hell, even 
for longer names that I've tested).  I broke the old hashing function 
and the new one out into a userspace helper to get real numbers, and 
this is what I'm getting

Old hash table had 1000000 entries, 0 dupes, 0 max dupes
New hash table had 12628 entries, 987372 dupes, 900 max dupes
We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 
567 dupes for the new hash

My test does the hash, then runs d_hash on the result and uses that to 
index an integer array the same size as the dentry hash table on my 
system, incrementing the value at that slot to see how many entries 
overlap.  As you can see, the old hash function ended up with all 1 
million entries in their own buckets, whereas with the new one they are 
distributed among only ~12.5k buckets, which is why we're using so much 
more CPU in __d_lookup.

The new hash function basically just uses the raw unsigned long value 
of the string as the hash if the name is shorter than sizeof(unsigned 
long).  If the name is longer, we still only hash the parts that fill a 
full word, and then simply add the remaining partial word to the hash.  
This seems to make strings that are close together produce hashes that 
are close together as well, which, when coupled with d_hash, means they 
end up getting put into existing buckets.

So the question is what do we do here?  I tested other random strings, 
and every one of them ended up worse, as far as collisions go, with the 
new function vs the old one.  I assume we want to keep the 
word-at-a-time functionality, so should we switch to a different 
hashing scheme, like murmur3/fnv/xxhash/crc32c/whatever?  Or should we 
just go back to what we had and maybe use the word-at-a-time code only 
for comparisons?
Thanks,

Josef



Thread overview: 23+ messages
2014-09-09 19:30 Name hashing function causing a perf regression Josef Bacik
2014-09-12 19:11 ` Andi Kleen
2014-09-12 19:21   ` Linus Torvalds
2014-09-12 19:52     ` Josef Bacik
2014-09-12 20:39       ` Linus Torvalds
2014-09-12 21:25         ` Josef Bacik
2014-09-12 22:01           ` Linus Torvalds
2014-09-12 22:08             ` Josef Bacik
2014-09-12 22:25               ` Linus Torvalds
2014-09-13 18:58                 ` Linus Torvalds
2014-09-15  1:32                   ` Linus Torvalds
2014-09-15  2:49                     ` Tetsuo Handa
2014-09-15  3:37                       ` Linus Torvalds
2014-09-15  4:58                         ` Tetsuo Handa
2014-09-15 14:17                           ` Linus Torvalds
2014-09-15 15:55                     ` Josef Bacik
2014-09-15 16:22                       ` Linus Torvalds
2014-09-15 16:25                         ` Al Viro
2014-09-15 16:33                           ` Linus Torvalds
2014-09-15 16:35                         ` Greg KH
2014-09-15 16:45                           ` Linus Torvalds
2014-09-15 16:53                             ` Jiri Slaby
2014-09-15 17:31                             ` Greg KH
