* An interesting performance thing ?
@ 2005-12-14 18:22 Iozone
  2005-12-14 22:26 ` Neil Brown
  2005-12-15  2:22 ` J. Bruce Fields
  0 siblings, 2 replies; 21+ messages in thread
From: Iozone @ 2005-12-14 18:22 UTC (permalink / raw)
  To: neilb; +Cc: nfs


Neil,

    I think I have discovered an interesting performance anomaly.

    In the svcauth code there are places that do:

       hash_long((unsigned long)item->m_addr.s_addr, IP_HASHBITS);

    Ok... That seems reasonable, but then again...maybe not....

    I believe that s_addr is an IP address in Network Neutral format. 
     (Big Endian)

    So... When one is on a Little Endian system, the hash_long
    function gets handed a Big Endian value as a long, and
    later, via the magic of being a Little Endian system,
    gets byte swapped.

    Step 1. 192.168.1.2 becomes 2.1.168.192 (byte swap)

    Step 2.  The 32 bit IP address becomes a 64 bit long when
                 this code is compiled and run on an Opteron, or
                 an IA-64 system.
                 2.1.168.192 -> 0.0.0.0.2.1.168.192
    Step 3. Call the hash_long() and get back a hash value
                that is IP_HASHBITS (8) in size.

    You'll notice that the hash distribution is not nearly as
    good as one might believe.  If one would have done:

        hash_long(inet_lnaof(item->m_addr.s_addr), IP_HASHBITS)

    Then the hash_long function would have done a nice job.  Because
    the network-order IP address is never converted to host byte
    order, here is the hash distribution that, I believe (via
    experimentation), is currently being seen on Little Endian
    64 bit systems....
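
    For anyone who wants to reproduce the tables below, here is,
    roughly, the computation involved, as a minimal user-space
    sketch (my own illustration, not the kernel code; it uses the
    multiplicative form of the 64-bit GOLDEN_RATIO_PRIME, which
    the kernel expands into shifts and adds):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

#define IP_HASHBITS 8

/* multiplicative form of the 64-bit hash_long(); the kernel
 * expands this multiply into shifts and adds */
static unsigned hash_long64(uint64_t val, unsigned bits)
{
	return (val * 0x9e37fffffffc0001ULL) >> (64 - bits);
}

int main(void)
{
	int i;

	for (i = 0; i < 256; i++) {
		/* s_addr exactly as it sits in memory: network byte order */
		uint32_t s_addr = inet_addr("192.168.0.0") | htonl(i);

		printf("Input: 192.168.0.%d Hash-> %x\n", i,
		       hash_long64(s_addr, IP_HASHBITS));
	}
	return 0;
}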

------------------------------------------------
Testing byte 4 (least significant)
Input: 192.168.0.0 Hash-> 3e
Input: 192.168.0.1 Hash-> 3e
Input: 192.168.0.2 Hash-> 3e
[... every address from 192.168.0.3 through 192.168.0.127 also hashes to 3e ...]
Input: 192.168.0.127 Hash-> 3e
Input: 192.168.0.128 Hash-> 3d
[... every address from 192.168.0.129 through 192.168.0.255 also hashes to 3d ...]
Input: 192.168.0.255 Hash-> 3d
Testing byte 3 
Input: 192.168.1.0 Hash-> 3e
Input: 192.168.1.1 Hash-> 3e
Input: 192.168.1.2 Hash-> 3e
[... every address from 192.168.1.3 through 192.168.1.127 also hashes to 3e ...]
Input: 192.168.1.127 Hash-> 3e
Input: 192.168.1.128 Hash-> 3d
[... every address from 192.168.1.129 through 192.168.1.255 also hashes to 3d ...]
Input: 192.168.1.255 Hash-> 3d
Testing byte 2 
Input: 192.169.0.0 Hash-> f6
Input: 192.169.0.1 Hash-> f6
Input: 192.169.0.2 Hash-> f6
[... every address from 192.169.0.3 through 192.169.0.127 also hashes to f6 ...]
Input: 192.169.0.127 Hash-> f6
Input: 192.169.0.128 Hash-> f5
[... every address from 192.169.0.129 through 192.169.0.255 also hashes to f5 ...]
Input: 192.169.0.255 Hash-> f5
Testing byte 1 (Most significant byte)
Input: 193.169.2.0 Hash-> 17
Input: 193.169.2.1 Hash-> 17
Input: 193.169.2.2 Hash-> 17
[... every address from 193.169.2.3 through 193.169.2.255 also hashes to 17 ...]
Input: 193.169.2.255 Hash-> 17
------------

---------

    Now you say "Ok that's not too cool, but why is this really important ?"

        Well, it turns out that the RedHat EL 4 release has some issues
    with the locking around the buckets that this hash is being
    used to index into, and with the distribution being so not... it
    leads to a lock race, followed by a kernel dereference of null,
    followed shortly by angry phone calls.... True, the race needs
    to be fixed, but it sure would be nice if the pressure were a 
    tad lower on the lock. It would greatly reduce the probability
    of the panic, and also improve the performance (scaling) of
    the system, if a few more buckets were used :-)

    I've talked this over with Charles, and Bruce, and they pointed
    me in your direction ... 
    
    How would you prefer to proceed ?

        A. It's your code, and you would prefer to tinker without
            some bozo adding his two bits.

        B. You're way too busy to go after this, and would 
            welcome a diff -u patch.

        C. You'll scratch your head, think about it, and get back
            after morning coffee :-)

        D. It will only take a few seconds to add the 
                inet_lnaof( ) to the offending lines, and it will be
                done before you can say Jack Flash :-)
    
        E. Go away, you're bothering me :-)

Enjoy,
Don Capps
capps_at_iozone_dot_org







* Re: An interesting performance thing ?
  2005-12-14 18:22 An interesting performance thing ? Iozone
@ 2005-12-14 22:26 ` Neil Brown
  2005-12-14 22:46   ` Chuck Lever
  2005-12-14 22:50   ` Iozone
  2005-12-15  2:22 ` J. Bruce Fields
  1 sibling, 2 replies; 21+ messages in thread
From: Neil Brown @ 2005-12-14 22:26 UTC (permalink / raw)
  To: Iozone; +Cc: wli, Chuck Lever, nfs

On Wednesday December 14, capps@iozone.org wrote:
> Neil,

Hi, Don,

> 
>     I think I have discovered an interesting performance anomaly.

Indeed you have, thanks.

> 
>     In the svcauth code there are places that do:
> 
>        hash_long((unsigned long)item->m_addr.s_addr, IP_HASHBITS);
> 
>     Ok... That seems reasonable, but then again...maybe not....

No, perfectly reasonable.  I don't write "maybe reasonable" code.  It
is either perfect, or rubbish :-)

> 
>     I believe that s_addr is an IP address in Network Neutral format. 
>      (Big Endian)
> 
>     So... When one is on a Little Endian system, the hash_long
>     function gets handed a Big Endian value as a long, and
>     later, via the magic of being a Little Endian system,
>     gets byte swapped.
> 
>     Step 1. 192.168.1.2 becomes 2.1.168.192 (byte swap)
> 
>     Step 2.  The 32 bit IP address becomes a 64 bit long when
>                  this code is compiled and run on an Opteron, or
>                  an IA-64 system.
>                  2.1.168.192 -> 0.0.0.0.2.1.168.192
>     Step 3. Call the hash_long() and get back a hash value
>                 that is IP_HASHBITS (8) in size.
> 
>     You'll notice that the hash distribution is not nearly as
>     good as one might believe.  If one would have done:
> 
>         hash_long(inet_lnaof(item->m_addr.s_addr), IP_HASHBITS)
> 
>     Then the hash_long function would have done a nice job. 

True, but irrelevant. 
hash_long(X, 8) will take the top 8 bits of the result.  So pushing
the noisy bits into the low order bits should have no significant
effect.
The fact that it does suggests that hash_long is broken.

I wonder if I blame William or Chuck?  Maybe I'll just blame both.
After all it is Christmas time and we should share the good will
around :-)

> 
> ---------
> 
>     Now you say "Ok that's not too cool, but why is this really
>     important ?"

Oh, no.  I can easily see that it is important.  And I agree it is
seriously uncool (like the weather down here in oz (aka .au)).


> 
>         Well, it turns out that the RedHat EL 4 release has some issues
>     with the locking around the buckets that this hash is being
>     used to index into, and with the distribution being so not... it
>     leads to a lock race, followed by a kernel dereference of null,
>     followed shortly by angry phone calls.... True, the race needs
>     to be fixed, but it sure would be nice if the pressure were a 
>     tad lower on the lock. It would greatly reduce the probability
>     of the panic, and also improve the performance (scaling) of
>     the system, if a few more buckets were used :-)

What mainline kernel is RedHat EL 4 based on?
I think I know the race you mean.... Funny how one writes buggy code,
then fixes it, then finds it still existing in "enterprise" kernels
months later. (SLES only gets it fixed in 9SP3).

> 
>     I've talked this over with Charles, and Bruce, and they pointed
>     me in your direction ... 

Go back to Charles and tell him I sent you :-)

>     
>     How would you prefer to proceed ?
> 
>         A. It's your code, and you would prefer to tinker without
>             some bozo adding his two bits.

Nonono, bozo bits are worth their weight in gold (sometimes).

> 
>         B. You're way too busy to go after this, and would 
>             welcome a diff -u patch.

In general, yes.  In this case, the patch would have been wrong.

> 
>         C. You'll scratch your head, think about it, and get back
>             after morning coffee :-)

I'm a tea drinker, so that wouldn't work.

> 
>         D. It will only take a few seconds to add the 
>                 inet_lnaof( ) to the offending lines, and it will be
>                 done before you can say Jack Flash :-)

It was 'Jack Robinson' in my day - no idea why.

>     
>         E. Go away, you're bothering me :-)
> 

Yeh, stop bothering me with such interesting puzzles.....

If you look at the top of include/linux/hash.h  you will see a very
helpful comment:

/*
 * Knuth recommends primes in approximately golden ratio to the maximum
 * integer representable by a machine word for multiplicative hashing.
 * Chuck Lever verified the effectiveness of this technique:
 * http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf
 *
 * These primes are chosen to be bit-sparse, that is operations on
 * them can use shifts and additions instead of multiplications for
 * machines where multiplications are slow.
 */

I think there is a tension between the 'close to golden ratio'
requirement and the 'bit-sparse' requirement.  The prime chosen for
64bit work is 
#define GOLDEN_RATIO_PRIME 0x9e37fffffffc0001UL
which is nicely bit-sparse, but is a long way from the golden ratio
which is 0x9E3779B97F4A7C15
The closest prime to this is 
         0x9E3779B97F4A7C55
which is not particularly bit-sparse, but produces much better
distribution of hashes for IP addresses.
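
To make that concrete, here is a quick user-space sketch (again
using the multiplicative forms of both constants) that feeds the
192.168.0.x addresses from Don's tables through each multiplier
and counts how many of the 256 buckets get used:

#include <stdio.h>
#include <stdint.h>

static unsigned top8(uint64_t val, uint64_t mult)
{
	/* hash_long(x, 8) keeps the top 8 bits of the product */
	return (val * mult) >> 56;
}

int main(void)
{
	/* 192.168.0.i in network byte order, as read on little-endian */
	uint64_t base = 0x0000A8C0;
	int sparse[256] = {0}, dense[256] = {0};
	int i, ns = 0, nd = 0;

	for (i = 0; i < 256; i++) {
		uint64_t val = base | ((uint64_t)i << 24);
		sparse[top8(val, 0x9e37fffffffc0001ULL)]++; /* bit-sparse */
		dense[top8(val, 0x9E3779B97F4A7C55ULL)]++;  /* near-golden */
	}
	for (i = 0; i < 256; i++) {
		ns += (sparse[i] != 0);
		nd += (dense[i] != 0);
	}
	printf("bit-sparse prime:  %d/256 buckets used\n", ns);
	printf("near-golden prime: %d/256 buckets used\n", nd);
	return 0;
}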

In fact, I wouldn't be at all surprised if 'bit-sparse' tends to have a
directly negative effect on hash quality when the variations in the
input line up with the sparse bits (so to speak).

Now I don't really know how much of an issue this 'bit sparseness' is
for speed, and how much cost it would be to just change those
shift/adds into a multiply.  But there is something definitely wrong
with hash_long on 64bit, and I suspect it could affect more than just
IP addresses.

William, Chuck:  Any suggestions on whether a straight multiply would
be too expensive, or what else we could do to make hash_long both fast
and effective?
Maybe we should just have 'hash32' and write 'hash64' as 
  hash32(x ^ hash32(x>>32));
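
Something like the following rough sketch (names hypothetical;
0x9e370001 is the 32-bit GOLDEN_RATIO_PRIME from hash.h, and I
assume a 32-bit 'unsigned'):

static inline unsigned hash32(unsigned val, unsigned bits)
{
	return (val * 0x9e370001U) >> (32 - bits);
}

static inline unsigned hash64(unsigned long long val, unsigned bits)
{
	/* fold the high word in, then take the 32-bit hash */
	return hash32((unsigned)val ^ hash32((unsigned)(val >> 32), 32), bits);
}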

But then the current hash_long for 32bit doesn't work brilliantly when
the variation is in the 3rd byte (i.e. within the mask 0x0f00).

I wonder if http://burtleburtle.net/bob/hash/evahash.html might end up
being better ... would need to do some serious tests.

Help?

NeilBrown

btw, I did testing with the following little program
called like:
  for i in 0 8 16 24 ; do ./hashtest 0x00007659 $i 256; done
                                       ^^^^^^^ random number.

#include <stdio.h>
#include <stdlib.h>

unsigned hash(unsigned long val)
{
	unsigned long long hash = val;
	/* hash *=  0x9e37fffffffc0001ULL;*/
	hash *= 11400714819323198549ULL;	/* 0x9E3779B97F4A7C55 */
	return (hash >> (64-8)) & 255;
}

int main(int argc, char *argv[])
{
	unsigned long start = strtoul(argv[1], NULL, 0);
	int shift = strtoul(argv[2], NULL, 0);
	int count = strtoul(argv[3], NULL, 0);
	int max = 0;

	int cnt[256];
	int i;

	for (i = 0; i < 256; i++)
		cnt[i] = 0;

	/* hash 'count' values spaced 2^shift apart */
	while (count) {
		cnt[hash(start)]++;
		start += (1UL << shift);
		count--;
	}
	count = 0;
	for (i = 0; i < 256; i++) {
		if (cnt[i])
			count++;
		if (cnt[i] > max)
			max = cnt[i];
	}
	printf("count %d/256 max %d\n", count, max);
	return 0;
}





* Re: An interesting performance thing ?
  2005-12-14 22:26 ` Neil Brown
@ 2005-12-14 22:46   ` Chuck Lever
  2005-12-14 23:47     ` Iozone
  2005-12-14 22:50   ` Iozone
  1 sibling, 1 reply; 21+ messages in thread
From: Chuck Lever @ 2005-12-14 22:46 UTC (permalink / raw)
  To: Neil Brown; +Cc: Iozone, wli, nfs


Neil Brown wrote:
> William, Chuck:  Any suggestions on whether a straight multiply would
> be too expensive, or what else we could do to make hash_long both fast
> and effective?

my original proposal was a hash function which computed the index by a 
single multiplication with a large prime number.  i was told that 
multiplication was too expensive, especially on older platforms, so the 
existing computation took its place.  but i don't think anyone ever did 
any real studies.  i can't imagine that multiplication would be worse 
than extra elements in a hash chain, especially on modern CPU architectures.
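
for what it's worth, the shift/add sequence in hash.h is just an 
expansion of that multiply.  a quick sketch of the equivalence (my 
reading of the 64-bit sequence; worth double-checking):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t mul_hash(uint64_t val)
{
	return val * 0x9e37fffffffc0001ULL;
}

/* the same constant, decomposed as
 * 2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
static uint64_t shiftadd_hash(uint64_t val)
{
	uint64_t hash = val, n = val;

	n <<= 18; hash -= n;
	n <<= 33; hash -= n;
	n <<= 3;  hash += n;
	n <<= 3;  hash -= n;
	n <<= 4;  hash += n;
	n <<= 2;  hash += n;
	return hash;
}

int main(void)
{
	uint64_t v;

	for (v = 0; v < 1000000; v++)
		assert(mul_hash(v) == shiftadd_hash(v));
	printf("multiply and shift/add agree\n");
	return 0;
}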




* Re: An interesting performance thing ?
  2005-12-14 22:26 ` Neil Brown
  2005-12-14 22:46   ` Chuck Lever
@ 2005-12-14 22:50   ` Iozone
  1 sibling, 0 replies; 21+ messages in thread
From: Iozone @ 2005-12-14 22:50 UTC (permalink / raw)
  To: Neil Brown; +Cc: wli, Chuck Lever, nfs

Neil,

    Post at bottom:

----- Original Message ----- 
From: "Neil Brown" <neilb@suse.de>
To: "Iozone" <capps@iozone.org>
Cc: <wli@holomorphy.com>; "Chuck Lever" <cel@citi.umich.edu>; 
<nfs@lists.sourceforge.net>
Sent: Wednesday, December 14, 2005 4:26 PM
Subject: Re: An interesting performance thing ?


> On Wednesday December 14, capps@iozone.org wrote:
>> Neil,
>
> Hi, Don,
>
>>
>>     I think I have discovered an interesting performance anomaly.
>
> Indeed you have, thanks.
>
>>
>>     In the svcauth code there are places that do:
>>
>>        hash_long((unsigned long)item->m_addr.s_addr, IP_HASHBITS);
>>
>>     Ok... That seems reasonable, but then again...maybe not....
>
> No, perfectly reasonable.  I don't write "maybe reasonable" code.  It
> is either perfect, or rubbish :-)
>
>>
>>     I believe that s_addr is an IP address in Network Neutral format.
>>      (Big Endian)
>>
>>     So... When one is on a Little Endian system, the hash_long
>>     function gets handed a Big Endian value as a long, and
>>     later, via the magic of being a Little Endian system,
>>     gets byte swapped.
>>
>>     Step 1. 192.168.1.2 becomes 2.1.168.192 (byte swap)
>>
>>     Step 2.  The 32 bit IP address becomes a 64 bit long when
>>                  this code is compiled and run on an Opteron, or
>>                  an IA-64 system.
>>                  2.1.168.192 -> 0.0.0.0.2.1.168.192
>>     Step 3. Call the hash_long() and get back a hash value
>>                 that is IP_HASHBITS (8) in size.
>>
>>     You'll notice that the hash distribution is not nearly as
>>     good as one might believe.  If one would have done:
>>
>>         hash_long(inet_lnaof(item->m_addr.s_addr), IP_HASHBITS)
>>
>>     Then the hash_long function would have done a nice job.
>
> True, but irrelevant.
> hash_long(X, 8) will take the top 8 bits of the result.  So pushing
> the noisy bits into the low order bits should have no significant
> effect.
> The fact that it does suggests that hash_long is broken.
>
> I wonder if I blame William or Chuck?  Maybe I'll just blame both.
> After all it is Christmas time and we should share the good will
> around :-)
>
>>
>> ---------
>>
>>     Now you say "Ok that's not too cool, but why is this really
>>     important ?"
>
> Oh, no.  I can easily see that it is important.  And I agree it is
> seriously uncool (like the weather down here in oz (aka .au)).
>
>
>>
>>         Well, it turns out that the RedHat EL 4 release has some issues
>>     with the locking around the buckets that this hash is being
>>     used to index into, and with the distribution being so not... it
>>     leads to a lock race, followed by a kernel dereference of null,
>>     followed shortly by angry phone calls.... True, the race needs
>>     to be fixed, but it sure would be nice if the pressure were a
>>     tad lower on the lock. It would greatly reduce the probability
>>     of the panic, and also improve the performance (scaling) of
>>     the system, if a few more buckets were used :-)
>
> What mainline kernel is RedHat EL 4 based on?
> I think I know the race you mean.... Funny how one writes buggy code,
> then fixes it, then finds it still existing in "enterprise" kernels
> months later. (SLES only gets it fixed in 9SP3).
>
>>
>>     I've talked this over with Charles, and Bruce, and they pointed
>>     me in your direction ...
>
> Go back to Charles and tell him I sent you :-)
>
>>
>>     How would you prefer to proceed ?
>>
>>         A. It's your code, and you would prefer to tinker without
>>             some bozo adding his two bits.
>
> Nonono, bozo bits are worth their weight in gold (sometimes).
>
>>
>>         B. You're way too busy to go after this, and would
>>             welcome a diff -u patch.
>
> In general, yes.  In this case, the patch would have been wrong.
>
>>
>>         C. You'll scratch your head, think about it, and get back
>>             after morning coffee :-)
>
> I'm a tea drinker, so that wouldn't work.
>
>>
>>         D. It will only take a few seconds to add the
>>                 inet_lnaof( ) to the offending lines, and it will be
>>                 done before you can say Jack Flash :-)
>
> It was 'Jack Robinson' in my day - no idea why.
>
>>
>>         E. Go away, you're bothering me :-)
>>
>
> Yeh, stop bothering me with such interesting puzzles.....
>
> If you look at the top of include/linux/hash.h  you will see a very
> helpful comment:
>
> /*
> * Knuth recommends primes in approximately golden ratio to the maximum
> * integer representable by a machine word for multiplicative hashing.
> * Chuck Lever verified the effectiveness of this technique:
> * http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf
> *
> * These primes are chosen to be bit-sparse, that is operations on
> * them can use shifts and additions instead of multiplications for
> * machines where multiplications are slow.
> */
>
> I think there is a tension between the 'close to golden ratio'
> requirement and the 'bit-sparse' requirement.  The prime chosen for
> 64bit work is
> #define GOLDEN_RATIO_PRIME 0x9e37fffffffc0001UL
> which is nicely bit-sparse, but is a long way from the golden ratio
> which is 0x9E3779B97F4A7C15
> The closest prime to this is
>         0x9E3779B97F4A7C55
> which is not particularly bit-sparse, but produces much better
> distribution of hashes for IP addresses.
>
> In fact, I wouldn't be at all surprised if 'bit-sparse' tends to have a
> directly negative effect on hash quality when the variations in the
> input line up with the sparse bits (so to speak).
>
> Now I don't really know how much of an issue this 'bit sparseness' is
> for speed, and how much cost it would be to just change those
> shift/adds into a multiply.  But there is something definitely wrong
> with hash_long on 64bit, and I suspect it could affect more than just
> IP addresses.
>
> William, Chuck:  Any suggestions on whether a straight multiply would
> be too expensive, or what else we could do to make hash_long both fast
> and effective?
> Maybe we should just have 'hash32' and write 'hash64' as
>  hash32(x ^ hash32(x>>32));
>
> But then the current hash_long for 32bit doesn't work brilliantly when
> the variation is in the 3rd byte (i.e. within the mask 0x0f00).
>
> I wonder if http://burtleburtle.net/bob/hash/evahash.html might end up
> being better ... would need to do some serious tests.
>
> Help?
>
> NeilBrown
>
> btw, I did testing with the following little program
> called like:
>  for i in 0 8 16 24 ; do ./hashtest 0x00007659 $i 256; done
>                                       ^^^^^^^ random number.
>
> #include <stdio.h>
> #include <stdlib.h>
>
> unsigned hash(unsigned long val)
> {
> 	unsigned long long hash = val;
> 	/* hash *=  0x9e37fffffffc0001ULL;*/
> 	hash *= 11400714819323198549ULL;	/* 0x9E3779B97F4A7C55 */
> 	return (hash >> (64-8)) & 255;
> }
>
> int main(int argc, char *argv[])
> {
> 	unsigned long start = strtoul(argv[1], NULL, 0);
> 	int shift = strtoul(argv[2], NULL, 0);
> 	int count = strtoul(argv[3], NULL, 0);
> 	int max = 0;
>
> 	int cnt[256];
> 	int i;
>
> 	for (i = 0; i < 256; i++)
> 		cnt[i] = 0;
>
> 	/* hash 'count' values spaced 2^shift apart */
> 	while (count) {
> 		cnt[hash(start)]++;
> 		start += (1UL << shift);
> 		count--;
> 	}
> 	count = 0;
> 	for (i = 0; i < 256; i++) {
> 		if (cnt[i])
> 			count++;
> 		if (cnt[i] > max)
> 			max = cnt[i];
> 	}
> 	printf("count %d/256 max %d\n", count, max);
> 	return 0;
> }
>
>

    In my tests I tried several IP addresses, and watched the
    resultant hash.  I then tried the same experiment with
    inet_lnaof(x) and watched the resultant hash.  In both cases
    on a 64 bit Little Endian system.

    Test results:

Testing byte 4 (least significant)
Input: 192.168.1.0 Hash-> 3e
Input: 192.168.1.0 Hash-with-inet_lnaof() -> 57
Input: 192.168.1.1 Hash-> 3e
Input: 192.168.1.1 Hash-with-inet_lnaof() -> c9
Input: 192.168.1.2 Hash-> 3e
Input: 192.168.1.2 Hash-with-inet_lnaof() -> 6b
Input: 192.168.1.3 Hash-> 3e
Input: 192.168.1.3 Hash-with-inet_lnaof() -> 8d
Input: 192.168.1.4 Hash-> 3e
Input: 192.168.1.4 Hash-with-inet_lnaof() -> 2f
Input: 192.168.1.5 Hash-> 3e
Input: 192.168.1.5 Hash-with-inet_lnaof() -> 40
Input: 192.168.1.6 Hash-> 3e
Input: 192.168.1.6 Hash-with-inet_lnaof() -> e2
Input: 192.168.1.7 Hash-> 3e
Input: 192.168.1.7 Hash-with-inet_lnaof() -> 4

    It looks like simply using hash_long(inet_lnaof(s_addr),8)
    does a better job than hash_long(s_addr,8), and is a
    pretty simple, low-risk change.
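
    For reference, the comparison boils down to something like the
    sketch below (user-space only; here I use ntohl() as the
    host-order conversion, where inet_lnaof() would additionally
    strip off the network part, so the exact hash values will
    differ from my table above):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

/* multiplicative form of the 64-bit hash_long() */
static unsigned hash_long64(uint64_t val, unsigned bits)
{
	return (val * 0x9e37fffffffc0001ULL) >> (64 - bits);
}

int main(void)
{
	int i;

	for (i = 0; i < 8; i++) {
		char buf[32];
		struct in_addr in;

		snprintf(buf, sizeof(buf), "192.168.1.%d", i);
		inet_aton(buf, &in);
		printf("Input: %s Hash-> %x Hash-with-ntohl() -> %x\n",
		       buf,
		       hash_long64(in.s_addr, 8),
		       hash_long64(ntohl(in.s_addr), 8));
	}
	return 0;
}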

    I'll leave the discussion on GOLDEN to others, and
    go with simple/easy/works.  :-)

Enjoy,
Don Capps







* Re: An interesting performance thing ?
  2005-12-14 22:46   ` Chuck Lever
@ 2005-12-14 23:47     ` Iozone
  2005-12-15  0:02       ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Iozone @ 2005-12-14 23:47 UTC (permalink / raw)
  To: cel, Neil Brown; +Cc: wli, nfs


----- Original Message ----- 
From: "Chuck Lever" <cel@citi.umich.edu>
To: "Neil Brown" <neilb@suse.de>
Cc: "Iozone" <capps@iozone.org>; <wli@holomorphy.com>; 
<nfs@lists.sourceforge.net>
Sent: Wednesday, December 14, 2005 4:46 PM
Subject: Re: An interesting performance thing ?


> Neil Brown wrote:
>> William, Chuck:  Any suggestions on whether a straight multiply would
>> be too expensive, or what else we could do to make hash_long both fast
>> and effective?
>
> my original proposal was a hash function which computed the index by a
> single multiplication with a large prime number.  i was told that
> multiplication was too expensive, especially on older platforms, so the
> existing computation took its place.  but i don't think anyone ever did
> any real studies.  i can't imagine that multiplication would be worse
> than extra elements in a hash chain, especially on modern CPU 
> architectures.
>

Neil,

    In a perfect world, a perfect hash would be ideal...but,

    1. It is fairly normal practice for code to call inet_lnaof( in_addr )
        so that the host native format is used by the host. Just
        grabbing s_addr is a bit odd.
    2. Simply adding the inet_lnaof( in_addr ) would do nice
        things with the existing hash algorithm, for CIDR allocated
        monotonically incremented IP address spaces.
    3. Gosh, it seems like a pretty easy and safe change.

        I think I understand Neil's position. If one hands a 64 bit
    opaque object to a hash function, it should do a better
    job of distribution, so who cares if it was in the native
    format or not.  Ok..  But, given 1->3 above, it seems
    like there might be a lower risk, less controversial, solution
    for the immediate future, and permit folks to do a study
    while the world goes on its happy way :-)

    Thanks again for being kind to the new kid tossing
    spitballs :-)

Enjoy,
Don Capps







* Re: An interesting performance thing ?
  2005-12-14 23:47     ` Iozone
@ 2005-12-15  0:02       ` Neil Brown
  2005-12-15  0:43         ` Chuck Lever
  2005-12-15  2:32         ` J. Bruce Fields
  0 siblings, 2 replies; 21+ messages in thread
From: Neil Brown @ 2005-12-15  0:02 UTC (permalink / raw)
  To: Iozone; +Cc: cel, wli, nfs

On Wednesday December 14, capps@iozone.org wrote:
> 
> Neil,
> 
>     In a perfect world, a perfect hash would be ideal...but,
> 
>     1. It is fairly normal practice for code to call inet_lnaof( in_addr )
>         so that the host native format is used by the host. Just
>         grabbing s_addr is a bit odd.
>     2. Simply adding the inet_lnaof( in_addr ) would do nice
>         things with the existing hash algorithm, for CIDR allocated
>         monotonically incremented IP address spaces.
>     3. Gosh, it seems like a pretty easy and safe change.
> 
>         I think I understand Neil's position. If one hands a 64 bit
>     opaque object to a hash function, it should do a better
>     job of distribution, so who cares if it was in the native
>     format or not.  Ok..  But, given 1->3 above, it seems
>     like there might be a lower-risk, less controversial solution
>     for the immediate future, one that would permit folks to do a study
>     while the world goes on its happy way :-)

The trouble is that just because inet_lnaof makes the final hash
better for your mix of clients, that doesn't mean it won't make it
worse for someone else.  I admit that I cannot provide a sample
mix of clients that would be worse with inet_lnaof, but that doesn't
mean one doesn't exist.

If the only symptom that has been identified is that it makes a
locking race easier to hit, then the obvious first solution is to fix
that locking race, and I believe such a fix is available.

If anyone were experiencing a measurable slowness due to the bad hash,
then I would have no problem suggesting they try the inet_lnaof
solution.

But I don't propose submitting it to Linus because - useful as it is -
it is simply wrong.
We need to fix that hash function, and this clear problem is a good
motivation to do that.

Or to put it another way, I don't think it is clear that we *need* a
solution for the immediate future.

NeilBrown





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15  0:02       ` Neil Brown
@ 2005-12-15  0:43         ` Chuck Lever
  2005-12-15  0:57           ` Neil Brown
  2005-12-16 10:15           ` Aurélien Charbon
  2005-12-15  2:32         ` J. Bruce Fields
  1 sibling, 2 replies; 21+ messages in thread
From: Chuck Lever @ 2005-12-15  0:43 UTC (permalink / raw)
  To: Neil Brown; +Cc: Iozone, wli, nfs

Neil Brown wrote:
> The trouble is that just because inet_lnaof makes the final hash
> better for your mix of clients, that doesn't mean it won't make it
> worse for someone else.  I admit that I cannot provide a sample
> mix of clients that would be worse with inet_lnaof, but that doesn't
> mean one doesn't exist.
> 
> If the only symptom that has been identified is that it makes a
> locking race easier to hit, then the obvious first solution is to fix
> that locking race, and I believe such a fix is available.
> 
> If anyone were experiencing a measurable slowness due to the bad hash,
> then I would have no problem suggesting they try the inet_lnaof
> solution.
> 
> But I don't propose submitting it to Linus because - useful as it is -
> it is simply wrong.
> We need to fix that hash function, and this clear problem is a good
> motivation to do that.
> 
> Or to put it another way, I don't think it is clear that we *need* a
> solution for the immediate future.

we might also think a little bit about the future (IPv6).

IPv6 addresses are larger than IPv4 addresses, so they will need their 
own hash function, *or* we will have to design a reasonable hash 
function for variably-sized addresses now, which seems like a harder 
problem.



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15  0:43         ` Chuck Lever
@ 2005-12-15  0:57           ` Neil Brown
  2005-12-15  0:59             ` Chuck Lever
  2005-12-16 10:15           ` Aurélien Charbon
  1 sibling, 1 reply; 21+ messages in thread
From: Neil Brown @ 2005-12-15  0:57 UTC (permalink / raw)
  To: cel; +Cc: Iozone, wli, nfs

On Wednesday December 14, cel@citi.umich.edu wrote:
> 
> we might also think a little bit about the future (IPv6).
> 
> IPv6 addresses are larger than IPv4 addresses, so they will need their 
> own hash function, *or* we will have to design a reasonable hash 
> function for variably-sized addresses now, which seems like a harder 
> problem.

A hash for a variable-sized value isn't really a problem.
Just break it into small bits and compose the hashes, similar
to what hash_str in include/linux/sunrpc/svcauth.h does.
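
For illustration, a minimal sketch of that shape (not the actual
hash_str code; hash_mem is a hypothetical name, and it assumes the
kernel's hash_long and BITS_PER_LONG from <linux/hash.h>, plus memcpy):

	/* Fold a buffer of arbitrary length into a small hash by
	 * hashing one long's worth of bytes at a time and mixing
	 * each chunk into the running value. */
	static unsigned long hash_mem(const unsigned char *buf,
				      size_t len, int bits)
	{
		unsigned long hash = 0, chunk;

		while (len >= sizeof(chunk)) {
			memcpy(&chunk, buf, sizeof(chunk));
			hash = hash_long(hash ^ chunk, BITS_PER_LONG);
			buf += sizeof(chunk);
			len -= sizeof(chunk);
		}
		if (len) {
			/* zero-pad the trailing partial word */
			chunk = 0;
			memcpy(&chunk, buf, len);
			hash = hash_long(hash ^ chunk, BITS_PER_LONG);
		}
		return hash >> (BITS_PER_LONG - bits);
	}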

A good question is:  Do we ever need more than 4 bytes of hash
value?
We wouldn't for hash tables, but there may well be other uses of
hashes.

If we don't, then a function that maps an arbitrary memory buffer into
a u32 would be suitably general for all uses on all architectures,
though some special cases like 4-byte and 8-byte inputs could probably
be optimised sensibly.

 I suspect the function given in
   http://burtleburtle.net/bob/hash/evahash.html

may really be a good choice.
It does about 40 add/subtracts for each 4 bytes, but seems to have good
properties.
(Hmm.. that is probably similar to a constant 32bit multiply, but
 maybe the current code with a better non-sparse prime is just as
 good).
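
For reference, the core mixing step from that page (quoted from
memory, so treat it as a sketch of Bob Jenkins' mix() rather than
gospel) uses only subtracts, xors and shifts on three 32-bit words:

	/* Bob Jenkins' mix(): reversibly mixes three 32-bit words
	 * with no multiplies at all. */
	#define mix(a, b, c) do {			\
		a -= b; a -= c; a ^= (c >> 13);		\
		b -= c; b -= a; b ^= (a << 8);		\
		c -= a; c -= b; c ^= (b >> 13);		\
		a -= b; a -= c; a ^= (c >> 12);		\
		b -= c; b -= a; b ^= (a << 16);		\
		c -= a; c -= b; c ^= (b >> 3);		\
		a -= b; a -= c; a ^= (c >> 10);		\
		b -= c; b -= a; b ^= (a << 15);		\
	} while (0)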

NeilBrown




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15  0:57           ` Neil Brown
@ 2005-12-15  0:59             ` Chuck Lever
  0 siblings, 0 replies; 21+ messages in thread
From: Chuck Lever @ 2005-12-15  0:59 UTC (permalink / raw)
  To: Neil Brown; +Cc: Iozone, wli, nfs

Neil Brown wrote:
> A good question is:  Do we ever need more than 4 bytes of hash
> value?
> We wouldn't for hash tables, but there may well be other uses of
> hashes.
> 
> If we don't, then a function that maps an arbitrary memory buffer into
> a u32 would be suitably general for all uses on all architectures,
> though some special cases like 4-byte and 8-byte inputs could probably
> be optimised sensibly.
> 
>  I suspect the function given in
>    http://burtleburtle.net/bob/hash/evahash.html
> 
> may really be a good choice.
> It does about 40 add/subtracts for each 4 bytes, but seems to have good
> properties.
> (Hmm.. that is probably similar to a constant 32bit multiply, but
>  maybe the current code with a better non-sparse prime is just as
>  good).

i haven't looked at how the network layer handles this issue, but it 
might be a good start if there is something there to reuse.



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: An interesting performance thing ?
  2005-12-14 18:22 An interesting performance thing ? Iozone
  2005-12-14 22:26 ` Neil Brown
@ 2005-12-15  2:22 ` J. Bruce Fields
  1 sibling, 0 replies; 21+ messages in thread
From: J. Bruce Fields @ 2005-12-15  2:22 UTC (permalink / raw)
  To: Iozone; +Cc: neilb, nfs

On Wed, Dec 14, 2005 at 12:22:38PM -0600, Iozone wrote:
>     Then the hash_long function would have done a nice job.  Since
>     one is not converting the network neutral IP address into a 
>     host binary format, here is an example of the hash distribution
>     that, I believe via experimentation, is currently being seen on 
>     Little Endian 64 Bit systems....

A little bit of fiddling about with pencil and paper shows what's
happening:  suppose longs are 64 bits and let x be a long.  Then
hash_long operates on x by multiplying it by

	 1 - 2^18 - 2^51 + 2^54 - 2^57 + 2^61 + 2^63

and then taking the high bits.

You can think of that as adding and subtracting a bunch of left-shifts.
Note that the last four terms are all shifts by at least 51 bits, so
they see only the lowest 13 bits of x.  So if those low 13 bits are all
constant, then the only variation is from multiplication by (1 - 2^18).
But if the input is small (say the top 32 bits are all zero...) then
multiplying by (1 - 2^18) doesn't affect the high bits of the output at
all.

So in our case, where the 32 high bits are zero and we're only asking
for the top 8 bits, it looks like all but the bottom 13 bits of x (the
top 13 bits of the IP address) are ignored.
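
A quick userspace demonstration of that arithmetic (a sketch: it
assumes a 64-bit unsigned long, and reimplements the multiply as the
shift-and-add sequence above rather than calling the kernel's
hash_long):

	#include <stdio.h>

	/* n * (1 - 2^18 - 2^51 + 2^54 - 2^57 + 2^61 + 2^63), keeping
	 * the top 8 bits, as hash_long(n, 8) does for 64-bit longs. */
	static unsigned long hash8(unsigned long n)
	{
		unsigned long h = n - (n << 18) - (n << 51)
			+ (n << 54) - (n << 57) + (n << 61) + (n << 63);
		return h >> (64 - 8);
	}

	int main(void)
	{
		unsigned long x;

		/* 192.168.0.x read as a little-endian s_addr: the
		 * varying byte x lands in bits 24-31, well above the
		 * low 13 bits, so every address gets the same hash. */
		for (x = 0; x < 8; x++)
			printf("192.168.0.%lu -> %lx\n", x,
			       hash8(192 | (168 << 8) | (x << 24)));
		return 0;
	}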

--b.



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15  0:02       ` Neil Brown
  2005-12-15  0:43         ` Chuck Lever
@ 2005-12-15  2:32         ` J. Bruce Fields
  2005-12-15  4:51           ` Iozone
  1 sibling, 1 reply; 21+ messages in thread
From: J. Bruce Fields @ 2005-12-15  2:32 UTC (permalink / raw)
  To: Neil Brown; +Cc: Iozone, cel, wli, nfs

On Thu, Dec 15, 2005 at 11:02:22AM +1100, Neil Brown wrote:
> The trouble is that just because inet_lnaof makes the final hash
> better for your mix of clients, that doesn't mean it won't make it
> worse for someone else.  I admit that I cannot provide a sample
> mix of clients that would be worse with inet_lnaof, but that doesn't
> mean one doesn't exist.

It strikes me as extremely unlikely that any set of clients would have
good variation in the *high* 13 bits of their IP addresses.

In fact, in the common case the high 13 bits are probably completely
constant.

So for these architectures, the ip address lookup is probably usually
degenerating to a linear search.  Since that lookup has to be performed
on every rpc call, this is likely to be painful.

> But I don't propose submitting it to Linus because - useful as it is -
> it is simply wrong.  We need to fix that hash function, and this clear
> problem is a good motivation to do that.

It'd be worth checking whether other callers may be giving hash_long
32-bit inputs, since they might have similar problems.

--b.



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15  2:32         ` J. Bruce Fields
@ 2005-12-15  4:51           ` Iozone
  2005-12-15 14:49             ` J. Bruce Fields
  0 siblings, 1 reply; 21+ messages in thread
From: Iozone @ 2005-12-15  4:51 UTC (permalink / raw)
  To: J. Bruce Fields, Neil Brown; +Cc: cel, wli, nfs


----- Original Message ----- 
From: "J. Bruce Fields" <bfields@fieldses.org>
To: "Neil Brown" <neilb@suse.de>
Cc: "Iozone" <capps@iozone.org>; <cel@citi.umich.edu>; <wli@holomorphy.com>; 
<nfs@lists.sourceforge.net>
Sent: Wednesday, December 14, 2005 8:32 PM
Subject: Re: [NFS] Re: An interesting performance thing ?


> On Thu, Dec 15, 2005 at 11:02:22AM +1100, Neil Brown wrote:
>> The trouble is that just because inet_lnaof makes the final hash
>> better for your mix of clients, that doesn't mean it won't make it
>> worse for someone else.  I admit that I cannot provide a sample
>> mix of clients that would be worse with inet_lnaof, but that doesn't
>> mean one doesn't exist.
>
> It strikes me as extremely unlikely that any set of clients would have
> good variation in the *high* 13 bits of their IP addresses.
>
> In fact, in the common case the high 13 bits are probably completely
> constant.
>
> So for these architectures, the ip address lookup is probably usually
> degenerating to a linear search.  Since that lookup has to be performed
> on every rpc call, this is likely to be painful.
>
>> But I don't propose submitting it to Linus because - useful as it is -
>> it is simply wrong.  We need to fix that hash function, and this clear
>> problem is a good motivation to do that.
>
> It'd be worth checking whether other callers may be giving hash_long
> 32-bit inputs, since they might have similar problems.
>
> --b.
>

Bruce,

        One of the interesting things I noticed is that the general
    purpose hash_long() function may not be as optimal
    as a more focused hash_IP_addr() function might be,
    even if GOLDEN were GOLDEN :-) And, trying
    to smash 128 bit IPV6 addresses into an 8 bit hash
    value, that is somehow uniformly distributed, well,
    that's going to be quite a neat trick, and making it
    work for 32 bit, 64, and 128 bit objects, with uniform
    distribution over a variable number of output bits,
    is approaching magical.

        If one knows the frequency of change of the
    bytes in the value to be hashed, then a more targeted hash
    algorithm might take advantage of this pre-knowledge
    to improve the uniformity of the output hash.

    With respect to IPV4 addresses:

        aa.bb.cc.dd

    where dd changes the fastest, then cc, then bb, then aa.

    Thus the bits in dd are more interesting than the bits in
    cc, and the bits in cc are more interesting than in bb,
    and the bits in aa are pretty much static.
        (Not many NFS servers have clients that span
         large numbers of class "A" networks :-)

    Hash_long(), being general purpose, could not take
    advantage of this, but something else might ?

    Bruce:  You noticed that my suggestion of inet_lnaof()
    didn't cure the hash_long limitations, just moved the
    frequently modified bits into an active region of the
    hash algorithm :-) Sort of like tricking hash_long() into
    performing like a more targeted hash, just for IPV4
    address ranges that an NFS server would be most likely
    to see....... :-)

    You did raise an interesting question...Are other 32 bit values
    being handed to hash_long() ? Good question. Wonder
    if these callers also have particular needs that might
    be addressed by a targeted hash algorithm ? Hmmmmm...

Enjoy,
Don Capps





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15  4:51           ` Iozone
@ 2005-12-15 14:49             ` J. Bruce Fields
  2005-12-15 15:36               ` Iozone
  0 siblings, 1 reply; 21+ messages in thread
From: J. Bruce Fields @ 2005-12-15 14:49 UTC (permalink / raw)
  To: Iozone; +Cc: Neil Brown, cel, wli, nfs

On Wed, Dec 14, 2005 at 10:51:40PM -0600, Iozone wrote:
>        One of the interesting things I noticed is that the general
>    purpose hash_long() function may not be as optimal
>    as a more focused hash_IP_addr() function might be,
>    even if GOLDEN were GOLDEN :-) And, trying
>    to smash 128 bit IPV6 addresses into an 8 bit hash
>    value, that is somehow uniformly distributed, well,
>    that's going to be quite a neat trick, and making it
>    work for 32 bit, 64, and 128 bit objects, with uniform
>    distribution over a variable number of output bits,
>    is approaching magical.

I'm not so pessimistic, but I'm also not expert enough about hash
functions to know how much magic is reasonable to expect out of a good
one.  Time to pull out Knuth, maybe.

There might also be an argument for experimenting with different data
structures.  I'd expect high temporal locality--in typical situations a
server with lots of clients may have only a few that are active at a
particular time--so one of those balanced trees that migrates recently
looked-up items to the top might be helpful.
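
To make that concrete, here's a sketch of the simplest version of the
idea -- a plain move-to-front list rather than a balanced tree, but it
exploits the same temporal locality (the names are hypothetical):

	#include <stddef.h>

	/* An unsorted chain that moves each hit to the front.  With
	 * only a few active clients out of many, the hot entries
	 * cluster at the head, so most lookups finish in a step or
	 * two. */
	struct ip_ent {
		struct ip_ent *next;
		unsigned int addr;	/* IPv4 address, network order */
	};

	static struct ip_ent *ip_lookup(struct ip_ent **head,
					unsigned int addr)
	{
		struct ip_ent **pp, *e;

		for (pp = head; (e = *pp) != NULL; pp = &e->next) {
			if (e->addr == addr) {
				*pp = e->next;	 /* unlink */
				e->next = *head; /* reinsert at front */
				*head = e;
				return e;
			}
		}
		return NULL;
	}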

Aside from seeing the race condition triggered, have you done any
profiling to make sure this is actually a big problem, even with the
current worst-case linear-search behaviour?

--b.



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15 14:49             ` J. Bruce Fields
@ 2005-12-15 15:36               ` Iozone
  2005-12-15 16:14                 ` J. Bruce Fields
  0 siblings, 1 reply; 21+ messages in thread
From: Iozone @ 2005-12-15 15:36 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Neil Brown, cel, wli, nfs


----- Original Message ----- 
From: "J. Bruce Fields" <bfields@fieldses.org>
To: "Iozone" <capps@iozone.org>
Cc: "Neil Brown" <neilb@suse.de>; <cel@citi.umich.edu>; 
<wli@holomorphy.com>; <nfs@lists.sourceforge.net>
Sent: Thursday, December 15, 2005 8:49 AM
Subject: Re: [NFS] Re: An interesting performance thing ?


> On Wed, Dec 14, 2005 at 10:51:40PM -0600, Iozone wrote:
>>        One of the interesting things I noticed is that the general
>>    purpose hash_long() function may not be as optimal
>>    as a more focused hash_IP_addr() function might be,
>>    even if GOLDEN were GOLDEN :-) And, trying
>>    to smash 128 bit IPV6 addresses into an 8 bit hash
>>    value, that is somehow uniformly distributed, well,
>>    that's going to be quite a neat trick, and making it
>>    work for 32 bit, 64, and 128 bit objects, with uniform
>>    distribution over a variable number of output bits,
>>    is approaching magical.
>
> I'm not so pessimistic, but I'm also not expert enough about hash
> functions to know how much magic is reasonable to expect out of a good
> one.  Time to pull out Knuth, maybe.
>
> There might also be an argument for experimenting with different data
> structures.  I'd expect high temporal locality--in typical situations a
> server with lots of clients may have only a few that are active at a
> particular time--so one of those balanced trees that migrates recently
> looked-up items to the top might be helpful.
>
> Aside from seeing the race condition triggered, have you done any
> profiling to make sure this is actually a big problem, even with the
> current worst-case linear-search behaviour?
>
> --b.
>

Bruce,

        I have not done a comprehensive profile analysis, but
    have some data I can share.

        At the time the race caused the system to crash,
    all 47 NFS clients were stacked up on one queue. No
    other queue had any items, because the hash_long function
    had placed every client on a single queue.
        With the current behavior of hash_long, one
    can reasonably expect queue depths of 127 items
    per queue (see the earlier post demonstrating the hash
    distribution over a block of IP addresses where the low
    order byte is monotonically incremented) and very few
    queues to be active.
        With IP_HASHBITS = 8, there were 256 queues,
    but only one was active. If the hash distribution were
    uniform, then the queue depth could have been 1 (one),
    but instead the lookup is a linear search of a relatively
    large number of items.

    Does that help ?

Enjoy,
Don Capps


 





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15 15:36               ` Iozone
@ 2005-12-15 16:14                 ` J. Bruce Fields
  2005-12-15 16:41                   ` Iozone
  2005-12-16  1:25                   ` Neil Brown
  0 siblings, 2 replies; 21+ messages in thread
From: J. Bruce Fields @ 2005-12-15 16:14 UTC (permalink / raw)
  To: Iozone; +Cc: Neil Brown, cel, wli, nfs

On Thu, Dec 15, 2005 at 09:36:37AM -0600, Iozone wrote:
>        With IP_HASHBITS = 8, there were 256 queues,
>    but only one was active. If the hash distribution were
>    uniform, then the queue depth could have been 1 (one),
>    but instead the lookup is a linear search of a relatively
>    large number of items.
> 
>    Does that help ?

I'm mainly curious how much effort it's worth expending on optimizing
that hash.  If it turns out that even the current linear search isn't
that expensive, then that's an argument against doing any more
optimizing (beyond just fixing the current obvious problem).

So maybe playing with oprofile or something would help answer my
question.

--b.



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15 16:14                 ` J. Bruce Fields
@ 2005-12-15 16:41                   ` Iozone
  2005-12-15 17:07                     ` J. Bruce Fields
  2005-12-16  1:25                   ` Neil Brown
  1 sibling, 1 reply; 21+ messages in thread
From: Iozone @ 2005-12-15 16:41 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Neil Brown, cel, wli, nfs


----- Original Message ----- 
From: "J. Bruce Fields" <bfields@fieldses.org>
To: "Iozone" <capps@iozone.org>
Cc: "Neil Brown" <neilb@suse.de>; <cel@citi.umich.edu>; 
<wli@holomorphy.com>; <nfs@lists.sourceforge.net>
Sent: Thursday, December 15, 2005 10:14 AM
Subject: Re: [NFS] Re: An interesting performance thing ?


> On Thu, Dec 15, 2005 at 09:36:37AM -0600, Iozone wrote:
>>        With IP_HASHBITS = 8, there were 256 queues,
>>    but only one was active. If the hash distribution were
>>    uniform, then the queue depth could have been 1 (one),
>>    but instead the lookup is a linear search of a relatively
>>    large number of items.
>>
>>    Does that help ?
>
> I'm mainly curious how much effort it's worth expending on optimizing
> that hash.  If it turns out that even the current linear search isn't
> that expensive, then that's an argument against doing any more
> optimizing (beyond just fixing the current obvious problem).
>
> So maybe playing with oprofile or something would help answer my
> question.
>
> --b.
>
Bruce,

        I'm still working on tracking down the patch to
    fix Red Hat EL 4 (2.6.9-11) so that it will not panic.
    Until I get that patch located and installed, the profiling
    may not be very successful :-(

    It's a good idea; I'm just not able to grant your wish
    at this point in time. Please try your wish again later :-)
    ( EAGAIN ? :-)

Enjoy,
Don Capps





^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15 16:41                   ` Iozone
@ 2005-12-15 17:07                     ` J. Bruce Fields
  0 siblings, 0 replies; 21+ messages in thread
From: J. Bruce Fields @ 2005-12-15 17:07 UTC (permalink / raw)
  To: Iozone; +Cc: Neil Brown, cel, wli, nfs

On Thu, Dec 15, 2005 at 10:41:19AM -0600, Iozone wrote:
>        I'm still working on tracking down the patch to
>    fix Red Hat EL 4 (2.6.9-11) so that it will not panic.
>    Until I get that patch located and installed, the profiling
>    may not be very successful :-(

OK!

I think I'd start by looking through patches that touched
include/linux/sunrpc/cache.h:

http://kernel.org/git/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=history;h=9839690102056c6515cff426221522b73eb1a94d;f=include/linux/sunrpc/cache.h

or net/sunrpc/cache.c:

http://kernel.org/git/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=history;h=9839690102056c6515cff426221522b73eb1a94d;f=net/sunrpc/cache.c

The topmost commit there looks like a likely candidate.

--b.



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15 16:14                 ` J. Bruce Fields
  2005-12-15 16:41                   ` Iozone
@ 2005-12-16  1:25                   ` Neil Brown
  2005-12-16  3:59                     ` Iozone
  1 sibling, 1 reply; 21+ messages in thread
From: Neil Brown @ 2005-12-16  1:25 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: Iozone, cel, wli, nfs

On Thursday December 15, bfields@fieldses.org wrote:
> I'm mainly curious how much effort it's worth expending on optimizing
> that hash.  If it turns out that even the current linear search isn't
> that expensive, then that's an argument against doing any more
> optimizing (beyond just fixing the current obvious problem).
> 

Thinking a bit more about this, a very suitable hash to produce 8 bits
from an IPv4 address would be to xor all the bytes together:
       hash = addr ^ (addr>>16);
       hash = (hash ^ (hash>>8)) & 0xff;

I think this would have good properties on any natural set of IP
addresses.

I'd still prefer to use a good general purpose hash function because
it is conceptually simpler.  But when one isn't available....
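
Wrapped up as a function, for concreteness (a sketch; the hash_ip
name is hypothetical):

	/* Fold an IPv4 address down to IP_HASHBITS = 8 bits by
	 * xor-ing the two 16-bit halves, then the two bytes of the
	 * result.  Byte order doesn't matter: the fold mixes all
	 * four bytes either way. */
	static inline unsigned int hash_ip(unsigned int addr)
	{
		unsigned int hash = addr ^ (addr >> 16);
		return (hash ^ (hash >> 8)) & 0xff;
	}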

NeilBrown



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-16  1:25                   ` Neil Brown
@ 2005-12-16  3:59                     ` Iozone
  0 siblings, 0 replies; 21+ messages in thread
From: Iozone @ 2005-12-16  3:59 UTC (permalink / raw)
  To: Neil Brown, J. Bruce Fields; +Cc: cel, wli, nfs


----- Original Message ----- 
From: "Neil Brown" <neilb@suse.de>
To: "J. Bruce Fields" <bfields@fieldses.org>
Cc: "Iozone" <capps@iozone.org>; <cel@citi.umich.edu>; <wli@holomorphy.com>; 
<nfs@lists.sourceforge.net>
Sent: Thursday, December 15, 2005 7:25 PM
Subject: Re: [NFS] Re: An interesting performance thing ?


> On Thursday December 15, bfields@fieldses.org wrote:
>> I'm mainly curious how much effort it's worth expending on optimizing
>> that hash.  If it turns out that even the current linear search isn't
>> that expensive, then that's an argument against doing any more
>> optimizing (beyond just fixing the current obvious problem).
>>
>
> Thinking a bit more about this, a very suitable hash to produce 8 bits
> from an IPv4 address would be to xor all the bytes together:
>       hash = addr ^ (addr>>16);
>       hash = (hash ^ (hash>>8)) & 0xff;
>
> I think this would have good properties on any natural set of IP
> addresses.
>
> I'd still prefer to use a good general purpose hash function because
> it is conceptually simpler.  But when one isn't available....
>
> NeilBrown
>

Neil,

        I think we're in phase :-)  I too was thinking of
    something much simpler and faster, one that took advantage
    of pre-knowledge of the variability of the input and of the
    needed size and distribution of the hash output.

    This approach has some interesting benefits:

    1) Runs faster, and consumes far fewer CPU cycles.
        Zero multiplies, no golden primes, and far fewer
        other instructions.
    2) Gives a better distribution than a general purpose hash function.
    3) Can easily be adapted for IPV6.
    4) Needs far less testing than a one-size-fits-all model.
    5) Can be implemented quickly and easily, and has low risk
        of affecting unrelated subsystems.
    6) Will not need a PhD (strong in number theory) to understand that
        it's actually doing a good job.

        Don't get me wrong, I still see a need for a generalized hash
    algorithm that needs primes and heavy magic. It's needed for
    unpredictable, invariant bit-pattern inputs (due to compiler
    alignments of native data types, structure alignments, and pointers)
    and other non-predictive input data sets. But with IP addresses, the
    natural distribution does not have the same issues, and can
    be handled much more effectively and efficiently. As is
    demonstrated so well, by you, above :-)

    Is it too late to place an order for one of these for Christmas ?

Enjoy,
Don Capps

P.S. If you go with your algorithm above, I'll concede that
        inet_lnaof(s_addr) is no longer needed.  The folding and XORs
        work fine on either native or network-neutral formats.
         :-)







^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-15  0:43         ` Chuck Lever
  2005-12-15  0:57           ` Neil Brown
@ 2005-12-16 10:15           ` Aurélien Charbon
  2005-12-16 14:23             ` Iozone
  1 sibling, 1 reply; 21+ messages in thread
From: Aurélien Charbon @ 2005-12-16 10:15 UTC (permalink / raw)
  To: cel; +Cc: Neil Brown, Iozone, wli, nfs

Chuck Lever wrote:

> we might also think a little bit about the future (IPv6).
>
> IPv6 addresses are larger than IPv4 addresses, so they will need their
> own hash function, *or* we will have to design a reasonable hash
> function for variably-sized addresses now, which seems like a harder
> problem.

For information, in the IPv6 client support, we are currently using the
hash_long function by doing an xor operation between the four 32 bit
parts of an IPv6 address.

 static inline int ip_map_hash(struct ip_map *item)
 {
 	return hash_str(item->m_class, IP_HASHBITS) ^
-		hash_long((unsigned long)item->m_addr.s_addr, IP_HASHBITS);
+		hash_long((unsigned long)(item->m_addr.s6_addr32[0] ^
+			    item->m_addr.s6_addr32[1] ^
+			    item->m_addr.s6_addr32[2] ^
+			    item->m_addr.s6_addr32[3]), IP_HASHBITS);
 }

Regards,

Aurélien




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: Re: An interesting performance thing ?
  2005-12-16 10:15           ` Aurélien Charbon
@ 2005-12-16 14:23             ` Iozone
  0 siblings, 0 replies; 21+ messages in thread
From: Iozone @ 2005-12-16 14:23 UTC (permalink / raw)
  To: Aurélien Charbon, cel; +Cc: Neil Brown, wli, nfs


----- Original Message ----- 
From: "Aurélien Charbon" <aurelien.charbon@ext.bull.net>
To: <cel@citi.umich.edu>
Cc: "Neil Brown" <neilb@suse.de>; "Iozone" <capps@iozone.org>; 
<wli@holomorphy.com>; <nfs@lists.sourceforge.net>
Sent: Friday, December 16, 2005 4:15 AM
Subject: Re: [NFS] Re: An interesting performance thing ?


Chuck Lever wrote:

> we might also think a little bit about the future (IPv6).
>
> IPv6 addresses are larger than IPv4 addresses, so they will need their own 
> hash function, *or* we will have to design a reasonable hash function for 
> variably-sized addresses now, which seems like a harder problem.

For information, in the IPv6 client support, we are currently using the
hash_long function by doing an xor operation between the four 32 bit
parts of an IPv6 address.

 static inline int ip_map_hash(struct ip_map *item)
 {
  return hash_str(item->m_class, IP_HASHBITS) ^
- hash_long((unsigned long)item->m_addr.s_addr, IP_HASHBITS);
+ hash_long((unsigned long)(item->m_addr.s6_addr32[0] ^
+     item->m_addr.s6_addr32[1] ^
+     item->m_addr.s6_addr32[2] ^
+     item->m_addr.s6_addr32[3]), IP_HASHBITS);
 }

Regards,

Aurélien
-------------

Aurélien,

    Perhaps in the future one might consider using Neil's new
    hash for IP addresses ?  Something like:

 static inline int ip_map_hash(struct ip_map *item)
 {
  return hash_str(item->m_class, IP_HASHBITS) ^
- hash_long((unsigned long)item->m_addr.s_addr, IP_HASHBITS);
+ hash_ip((unsigned long)(item->m_addr.s6_addr32[0] ^
+     item->m_addr.s6_addr32[1] ^
+     item->m_addr.s6_addr32[2] ^
+     item->m_addr.s6_addr32[3]));
 }

Enjoy,
Don Capps









^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2005-12-16 14:23 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-12-14 18:22 An interesting performance thing ? Iozone
2005-12-14 22:26 ` Neil Brown
2005-12-14 22:46   ` Chuck Lever
2005-12-14 23:47     ` Iozone
2005-12-15  0:02       ` Neil Brown
2005-12-15  0:43         ` Chuck Lever
2005-12-15  0:57           ` Neil Brown
2005-12-15  0:59             ` Chuck Lever
2005-12-16 10:15           ` Aurélien Charbon
2005-12-16 14:23             ` Iozone
2005-12-15  2:32         ` J. Bruce Fields
2005-12-15  4:51           ` Iozone
2005-12-15 14:49             ` J. Bruce Fields
2005-12-15 15:36               ` Iozone
2005-12-15 16:14                 ` J. Bruce Fields
2005-12-15 16:41                   ` Iozone
2005-12-15 17:07                     ` J. Bruce Fields
2005-12-16  1:25                   ` Neil Brown
2005-12-16  3:59                     ` Iozone
2005-12-14 22:50   ` Iozone
2005-12-15  2:22 ` J. Bruce Fields
